Browsing by Department: "Department of Computer Science"
Now showing 1 - 20 of 437
- Item (Open Access): 3D Scan Campaign Classification with Representative Training Scan Selection (2019). Pocock, Christopher; Marais, Patrick.
  Point cloud classification has been shown to effectively classify points in 3D scans, and can accelerate manual tasks like the removal of unwanted points from cultural heritage scans. However, a classifier's performance depends on which classifier and feature set are used, and choosing these is difficult since previous approaches may not generalise to new domains. Furthermore, when choosing training scans for campaign-based classification, it is important to identify a descriptive set of scans that represent the rest of the campaign. However, this task is increasingly onerous for large and diverse campaigns, and randomly selecting scans does not guarantee a descriptive training set. To address these challenges, a framework including three classifiers (Random Forest (RF), Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP)) and various point features and feature selection methods was developed. The framework also includes a proposed automatic representative scan selection method, which uses segmentation and clustering to identify balanced, similar or distinct training scans. The framework was evaluated on four labelled datasets, including two cultural heritage campaigns, to compare the speed and accuracy of the implemented classifiers and feature sets, and to determine whether the proposed selection method identifies scans that yield a more accurate classifier than random selection. It was found that the RF, paired with a complete multi-scale feature set including covariance, geometric and height-based features, consistently achieved the highest overall accuracy on the four datasets. However, the other classifiers and reduced sets of selected features achieved similar accuracy and, in some cases, greatly reduced training and prediction times. It was also found that the proposed training scan selection method can, on particularly diverse campaigns, yield a more accurate classifier than random selection. However, for homogeneous campaigns where variations to the training set have limited impact, the method is less applicable. Furthermore, it is dependent on segmentation and clustering output, which require campaign-specific parameter tuning and may be imprecise.
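The classification step described above reduces to fitting a classifier on per-point feature vectors. Below is a minimal illustrative sketch of that step in scikit-learn, assuming the multi-scale covariance, geometric and height features have already been extracted; the arrays, label scheme and parameters are placeholders, not the thesis implementation.

```python
# Illustrative sketch (not the thesis code): train a Random Forest point
# classifier on precomputed per-point features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder arrays standing in for multi-scale covariance, geometric and
# height-based features extracted from labelled training scans.
X_train = rng.normal(size=(10_000, 24))    # 24 hypothetical per-point features
y_train = rng.integers(0, 3, size=10_000)  # e.g. 0=structure, 1=vegetation, 2=clutter

X_test = rng.normal(size=(2_000, 24))
y_test = rng.integers(0, 3, size=2_000)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```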
- Item (Open Access): A Balanced Approach to IT Project Management (2007). Smith, Derek; Brock, Susan; Hendricks, Danyal; Linnell, Stephen.
  The primary objective of this study was to identify how IT projects can be managed using the Balanced Scorecard approach. Although the research is positioned to have potential application within the international project management discipline, the analysis is limited to a South African project management perspective and only internal aspects of managing projects are considered.
- Item (Open Access): A Bluetooth educational content distribution system modelled on a service-oriented architecture (2008). Bugembe, Kamulegeya Grace; Marsden, Gary.
  In this research, we design and prototype an educational content distribution system modelled on the Service-Oriented Architecture (SOA) paradigm and implemented using Web services, XML and Bluetooth technology. In the prototype, we use an open-source Learning Management System (LMS), Sakai, implemented in Java and branded Vula for the University of Cape Town (UCT). Web services (with the SOAP specification), XML and Bluetooth technology are used to integrate the disparate technologies that form the service architecture. The disparate technologies include, among others, Bluetooth-enabled mobile phones and PDAs, and services (modules) which may be running on different operating systems and deployed over Local Area Networks (LANs) or the Internet. The service is meant to leverage existing infrastructure to provide a new, cheap channel for education content distribution to mobile devices in learning institutions, especially universities in the developing world and Africa in particular. We design, implement and evaluate the prototype for performance and scalability. During the design and implementation of the architecture, we incorporate SOA principles of service/module re-use, service composition, loose coupling, standard data exchange within the system or services, and extensibility of the services, among others. The aim of the service is to distribute education content uploaded to Learning Management Systems (LMSs) to Bluetooth-enabled mobile devices that are increasingly held by students in developing-world universities. The service is intended to supplement existing Web-based and lecture-room content distribution channels by opening up the mobile device space. For the prototype, we focus on repackaging structured text content and distributing it to Bluetooth-enabled phones and PDAs using Bluetooth technology. We evaluate our prototype for performance using experimental studies.
- Item (Open Access): A comparative study of recurrent neural networks and statistical techniques for forecasting the stock prices of JSE-listed securities (2022). Galant, Rushin; Marais, Patrick.
  As machine learning has developed, the attention of stock price forecasters has slowly shifted from traditional statistical forecasting techniques towards machine learning techniques. This study investigated whether machine learning techniques, in particular recurrent neural networks, do indeed provide greater forecasting accuracy than traditional statistical techniques on the Johannesburg Securities Exchange's top forty stocks. The Johannesburg Securities Exchange represents the largest and most developed stock exchange in Africa, though limited research has been performed on the application of machine learning to forecasting stock prices on this exchange. Simple recurrent neural networks, Gated Recurrent Units (GRUs) and Long Short-Term Memory (LSTM) units were evaluated, with a Convolutional Neural Network and a random forest used as machine learning benchmarks. Historical data was collected for the period 2 January 2019 to 29 May 2020, with the 2019 calendar year used as the training dataset. Both a train-once and a walk-forward configuration were used. The number of input observations was varied from four to fifteen, whilst forecasts were made from one up to ten time steps into the future. The Mean Percentage Error was used to measure forecasting accuracy. Different configurations of the neural network models were assessed, including whether bidirectionality improved forecasting accuracy. The neural networks were run using two different datasets: the historical stock prices on their own, and the historical stock prices together with the market index (the JSE All Share Index), to determine whether including the market index improves forecasting accuracy. The study found that bidirectional neural networks provided more accurate forecasts than neural networks that did not incorporate bidirectionality. In particular, the Bidirectional LSTM provided the greatest forecasting accuracy for one-step forecasts, whilst the Bidirectional GRU was more accurate two to eight time steps into the future, with the Bidirectional LSTM model again more accurate for nine and ten time steps into the future. However, the classical statistical model, the theta method, significantly outperformed all machine learning models. This is likely the result of the unforeseen impact of the COVID-19 pandemic on financial markets, which would not have been factored into the training sets of the machine learning algorithms.
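For reference, the Mean Percentage Error used above to score forecasts can be computed as follows; the series below are synthetic stand-ins, not JSE prices.

```python
# Minimal sketch of the Mean Percentage Error (MPE) as a forecast-accuracy
# measure. Positive values mean the forecasts under-shot on average.
import numpy as np

def mean_percentage_error(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean((actual - forecast) / actual)

actual   = [101.0, 102.5, 99.8, 103.1]   # synthetic observed prices
forecast = [100.2, 103.0, 100.5, 102.0]  # synthetic model forecasts
print(f"MPE = {mean_percentage_error(actual, forecast):.2f}%")
```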
- Item (Open Access): A comparison of a factor-based investment strategy and machine learning for predicting excess returns on the JSE (2018). Drue, Stefan; Moodley, Deshendran.
  This study investigated the application of machine learning to portfolio selection by comparing a Factor Based Investment (FBI) strategy with one using a Support Vector Machine (SVM) performing a classification task. The Factor Based strategy uses regression to identify factors correlated with returns, by regressing excess returns against the factor values using historical data from the JSE. A portfolio-sort method is used to construct portfolios. The machine learning model was trained on historical share data from the Johannesburg Stock Exchange and tasked with classifying whether a share over- or under-performed relative to the market. Shares were ranked according to their probability of over-performance and divided into equally weighted quartiles. The excess return of the top and bottom quartiles was used to calculate portfolio payoff, which is the basis for comparison. The experiments were divided into time periods to assess the consistency of the factors over different market conditions. The time periods were defined as pre-financial crisis, during the financial crisis, post-financial crisis and the full period. The study was conducted in the context of the Johannesburg Stock Exchange. Historical data was collected for a 15-year period, from May 2003 to May 2018, on the constituents of the All Share Index (ALSI). A rolling-window methodology was used, where the training and testing window was shifted with each iteration over the data. This allowed a larger number of predictions to be made and a greater period of comparison with the factor-based strategy. Fourteen factors were used individually as the basis for portfolio construction, while combinations of factors into Quality, Value and Liquidity and Leverage categories were used to investigate the effect of additional inputs into the model. Furthermore, experiments using all factors together were performed. It was found that a single-factor FBI can consistently outperform the market and a multi-factor FBI also provided consistent excess returns, but the SVM provided consistently larger excess returns with a wide range of factor inputs and beat the FBI in 12 of the 14 different experiments over different time periods.
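A sketch of the quartile construction described above: shares are ranked by a model's predicted probability of over-performance, split into equally weighted quartiles, and the top-minus-bottom spread is taken as the portfolio payoff. The column names and numbers are synthetic, not the thesis data.

```python
# Hedged illustration of a portfolio sort on predicted over-performance
# probabilities (e.g. SVM class probabilities). All values are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "share": [f"SHR{i:03d}" for i in range(40)],
    "p_overperform": rng.uniform(size=40),    # predicted probability of over-performance
    "excess_return": rng.normal(0, 0.05, 40)  # realised excess return next period
})

df["quartile"] = pd.qcut(df["p_overperform"], 4, labels=[1, 2, 3, 4])
top = df.loc[df["quartile"] == 4, "excess_return"].mean()
bottom = df.loc[df["quartile"] == 1, "excess_return"].mean()
print(f"equally weighted top-minus-bottom payoff: {top - bottom:.4f}")
```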
- Item (Open Access): A comparison of mobile search interfaces for isiXhosa speakers (2018). Modise, Morebodi; Suleman, Hussein.
  Search interfaces have for a long time been targeted at resource-rich languages, such as English. There has been little effort to support African (Bantu) languages in search interfaces when compared to languages such as English, particularly the isiXhosa language. However, due to the increase in use of mobile phones in developing countries, these interfaces can now be adapted to languages in these settings to support information access on the Web. This study proposes mobile search interfaces to support isiXhosa speakers to search for information on the Web using isiXhosa as a discovery language. The isiXhosa language is considered a low-resourced African (Bantu) language spoken in resource-constrained environments in South Africa. The language is spoken by over eight million people. Yet, there has been no search interface specifically targeted at supporting isiXhosa speakers. Two mobile search interfaces were developed in an Android application: one text based and one voice based. The design of the interfaces was based on feedback from four native isiXhosa speakers in a design focus group, and on guidelines from the literature. Using the developed interfaces, an experiment was conducted with 34 native isiXhosa-speaking students at the University of Cape Town, South Africa, to investigate which interface could better support isiXhosa speakers to search for information on the Web using mobile phones. Quantitative data was collected using application log files. Additionally, user feedback was obtained using the standard Software Usability Measurement Inventory (SUMI) instrument, and both interfaces were confirmed as usable. In contrast to what was expected, users preferred the text interface in general, and according to most SUMI subscales. This could be because of greater familiarity with text search interfaces or because of the relative scarcity of voice interfaces in African (Bantu) languages. Where users are not literate, the voice interface may be the only option, so the fact that it was deemed usable is an important independent finding. Search in African (Bantu) language collections is still a largely unexplored field, and more work needs to be done on the interfaces as the algorithms and collections are developed in parallel.
- Item (Open Access): A complex, high-performance agent-based model used to explore tuberculosis and COVID-19 case-finding interventions (2024). Low, Marcus; Kuttel, Michelle.
  Tuberculosis (TB) claimed an estimated 1.5 million lives in 2021, and the COVID-19 pandemic resulted in 14.9 million excess deaths in 2020 and 2021 combined. With both of these infectious diseases, substantial pathogen transmission takes place before people report to health facilities. Diagnosing more people more quickly and placing them on treatment and/or isolating them is thus critical to achieving epidemiological control. Early diagnosis interventions include contact tracing and isolation, testing of asymptomatic people thought to be at high risk of infection, and, in the case of TB, screening using chest X-rays. The impact of several such early diagnosis interventions on case detection has been studied in clinical trials, but the longer-term impact of these interventions on infections (incidence) and deaths (mortality) is not known. There are also unanswered questions as to the impact hypothetical future TB tests, for example allowing for more frequent testing, may have on TB incidence and mortality. We developed an agent-based model (ABM) called ABM Spaces and used it to ask: (i) What is the impact of four different TB case-finding interventions on TB detection rates, incidence and mortality? (ii) What is the impact of test frequency and test sensitivity on tuberculosis incidence? And (iii) what is the impact of contact tracing and isolation and variable test turnaround times on COVID-19 diagnosis and mortality? Such agent-based modelling, in which disease transmission and progression are modelled at the level of discrete individuals, has increasingly been used in recent decades to model infectious disease interventions. Relatively few ABMs in the literature contain substantial social structure (for example, associating agents with specific households, workplaces and school classes). We illustrate that such ABMs with substantial social structure can be developed in a way that is epidemiologically sound, and show that this type of ABM is well suited to the modelling of social interventions such as contact tracing. In the ABM Spaces model we found that testing of people at considerable risk of TB has a greater impact on TB incidence and mortality than mass X-ray screening, that the impact of the two interventions is additive, and that the impact of annual testing of high TB risk individuals is highly sensitive to HIV prevalence. We found that the relationship between test frequency and TB mortality and incidence is non-linear, with an inflection point at around the four-month mark. The COVID-19 version of ABM Spaces confirmed the potential of contact tracing and isolation to reduce incidence and mortality, but the effect was highly sensitive to test turnaround times.
- Item (Open Access): A component assembly approach to digital library systems (2005). Eyambe, Linda; Suleman, Hussein.
  With the advent of the Internet came the promise of global information access. In keeping with this promise, Digital Libraries (DLs) began to emerge across the world as a method of providing structured information to their users. These DLs are often created using proprietary monolithic software that is usually difficult to customise and extend. The Open Digital Library (ODL) project was created to demonstrate that DLs can be built as a network of components instead of as monolithic systems. Although the ODL approach has largely been embraced by the DL community, it is not without a few shortcomings. This paper introduces a graphical user interface and its associated framework for creating DLs from distributed components, consequently addressing a number of the limitations of ODL-like systems, as well as presenting a novel and generic approach for creating component-based systems. This system was subject to a user-based evaluation to confirm its utility and provide insights into possible extensions.
- Item (Open Access): A low cost virtual reality interface for educational games (2022). Sewpersad, Tashiv; Gain, James.
  Mobile virtual reality has the potential to improve learning experiences by making them more immersive and engaging for students. This type of virtual reality also aims to be more cost effective by using a smartphone to drive the virtual reality experience. One issue with mobile virtual reality is that the screen (i.e. the main interface) of the smartphone is occluded by the virtual reality headset. To investigate solutions to this issue, this project details the development and testing of a computer vision based controller that aims to have a cheaper per-unit cost than a conventional electronic controller, by making use of 3D printing and the built-in camera of a smartphone. Reducing the cost per unit is useful for educational contexts, as solutions would need to scale to classroom sizes. The research question for this project is thus: “can a computer vision based virtual reality controller provide comparable immersion to a conventional electronic controller?” It was found that a computer vision based controller can provide comparable immersion, though it is more challenging to use. This challenge was found to contribute more towards engagement, as it did not diminish the performance of users in terms of question scores.
- Item (Open Access): A mobile application promoting good contact lens practices (2022). Naidoo, Terushka; Berman, Sonia.
  Contact lens complications pose an ongoing problem for both optometrists and contact lens wearers. Most of these complications are due to non-compliance with good care practices. Education is the first step to ensuring compliance. If good habits are created on commencement of wear, patients are more likely to continue with these habits or practices. The key, however, is maintenance and building on this education, as we cannot expect patients to remember all the information given to them initially. Telemedicine is rapidly becoming a wide-reaching and convenient way to provide services and support to patients. The aim of this study was to create a mobile application to provide contact lens wearers with knowledge and assistance to empower them to take good care of their eyes and lenses. A mobile application was built for the study with three main features: a lens change reminder, an information feature, and a diagnosis facility to aid contact lens wearers when they encounter any problems. A PDF version of the application was also created with the latter two features; a secondary aim was to compare its success with that of the mobile application. After receiving ethical clearance for the study, lens wearers who signed the informed consent form were surveyed about their symptoms, knowledge and habits in relation to contact lenses and their eyes. After being divided into two groups, they were given either the mobile application or the PDF document to use. They were subsequently given a second survey to determine if there were any changes to symptoms, habits and knowledge. They were also questioned about the value and effectiveness of the application and the PDF. Although the results on habit changes were inconclusive, there was a decrease in symptoms after using both the app and the PDF. Both were well received, and the majority of participants reported that they would recommend them to others. The mobile application was used more frequently than the PDF, led to a slightly better improvement in knowledge, and scored slightly better in its user evaluation than the PDF.
- Item (Open Access): A new connectivity strategy for wireless mesh networks using dynamic spectrum access (2021). Maliwatu, Richard; Johnson, David; Densmore, Melissa.
  The introduction of Dynamic Spectrum Access (DSA) marked an important juncture in the evolution of wireless networks. DSA is a spectrum assignment paradigm where devices are able to make real-time adjustments to their spectrum usage and adapt to changes in their spectral environment to meet performance objectives. DSA allows spectrum to be used more efficiently and may be considered a viable approach to the ever-increasing demand for spectrum in urban areas and the need for coverage extension to unconnected communities. While DSA can be applied to any spectrum band, the initial focus has been on the Ultra-High Frequency (UHF) band traditionally used for television broadcast, because the band is lightly occupied and also happens to be ideal spectrum for sparsely populated rural areas. Wireless access in general is said to offer the most hope in extending connectivity to rural and unconnected peri-urban communities. Wireless Mesh Networks (WMNs) in particular offer several attractive characteristics, such as multi-hopping, ad-hoc networking, and capabilities of self-organising and self-healing, hence the focus on WMNs. Motivated by the desire to leverage DSA for mesh networking, this research revisits the aspect of connectivity in WMNs with DSA. The advantages of DSA when combined with mesh networking not only build on the benefits, but also create additional challenges. The study seeks to address the connectivity challenge across three key dimensions, namely network formation, link metric and multi-link utilisation.
  To start with, one of the conundrums faced in WMNs with DSA is that the current 802.11s mesh standard provides limited support for DSA, while DSA-related standards such as 802.22 provide limited support for mesh networking. This gap in standardisation complicates the integration of DSA in WMNs, as several issues are left outside the scope of the applicable standard. This dissertation highlights the inadequacy of the current MAC protocol in ensuring TV white space (TVWS) regulation compliance in multi-hop environments and proposes a logical link MAC sub-layer procedure to fill the gap. A network is considered compliant in this context if each node operates on a channel that it is allowed to use, as determined, for example, by the spectrum database. Using a combination of prototypical experiments, simulation and numerical analysis, it is shown that the proposed protocol ensures network formation is accomplished in a manner that is compliant with TVWS regulation. Having tackled the compliance problem at the mesh formation level, the next logical step was to explore performance improvement avenues. Considering the importance of routing in WMNs, the study evaluates link characterisation to determine a suitable metric for routing purposes. Along this dimension, the research makes two main contributions. Firstly, the A-link-metric (Augmented Link Metric) approach for WMNs with DSA is proposed. A-link-metric reinforces existing metrics to factor in characteristics of a DSA channel, which is essential to improve the routing protocol's ranking of links for optimal path selection. Secondly, in response to the question of “which one is the suitable metric?”, the Dynamic Path Metric Selection (DPMeS) concept is introduced. The principal idea is to mechanise the routing protocol such that it assesses the network via a distributed probing mechanism and dynamically binds the routing metric. Using DPMeS, a routing metric is selected to match the network type and prevailing conditions, which is vital as each routing metric thrives or recedes in performance depending on the scenario. DPMeS is aimed at unifying the years' worth of prior studies on routing metrics in WMNs. Simulation results indicate that A-link-metric achieves up to 83.4% and 34.6% performance improvement in terms of throughput and end-to-end delay respectively, compared to the corresponding base metric (i.e. the non-augmented variant). With DPMeS, the routing protocol is expected to yield better performance consistently compared to the fixed-metric approach, whose performance fluctuates amid changes in network setup and conditions.
  By and large, DSA-enabled WMN nodes will require access to some fixed spectrum to fall back on when opportunistic spectrum is unavailable. In the absence of fully functional integrated-chip cognitive radios to enable DSA, the immediate feasible solution for the interim is single hardware platforms fitted with multiple transceivers. This configuration results in multi-band multi-radio node capability that lends itself to a variety of link options in terms of transmit/receive radio functionality. The dissertation reports on the experimental performance evaluation of radios operating in the 5 GHz and UHF-TVWS bands for hybrid back-haul links. It is found that individual radios perform differently depending on the operating parameter settings, namely channel, channel width and transmission power, subject to prevailing environmental (both spectral and topographical) conditions. When aggregated, if the radios' data rates are approximately equal, there is a throughput and round-trip time performance improvement of 44.5-61.8% and 7.5-41.9% respectively. For hybrid links comprising radios with significantly unequal data rates, this study proposes an adaptive round-robin (ARR) based algorithm for efficient multi-link utilisation; a sketch of the underlying idea follows below. Numerical analysis indicates that ARR provides a 75% throughput improvement. These results indicate that network optimisation overall requires both time and frequency division duplexing. Based on the experimental test results, this dissertation presents a three-layered routing framework for multi-link utilisation. The top layer represents the nodes' logical interface to the WMN, while the bottom layer corresponds to the underlying physical wireless network interface cards (WNICs). The middle layer is an abstract and reductive representation of the possible and available transmission and reception options between node pairs, which depend on the number and type of WNICs. Drawing on the experimental results and insight gained, the study builds criteria towards a mechanism for automatic selection of the optimal link option. Overall, this study is anticipated to serve as a springboard to stimulate the adoption and integration of DSA in WMNs, and further development of multi-link utilisation strategies to increase capacity. Ultimately, it is hoped that this work will contribute towards attaining the global goal of extending connectivity to the unconnected.
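The abstract does not detail the ARR algorithm itself, so the snippet below is only a hypothetical smooth weighted round-robin that splits packets across two radios in proportion to their data rates, to make the multi-link utilisation idea concrete; the link names and rates are invented.

```python
# Hypothetical weighted round-robin scheduler (not the thesis's ARR):
# each link accumulates credit at its data rate and the link with the most
# credit sends the next packet, giving a rate-proportional interleaving.
def weighted_schedule(rates, n_packets):
    credits = {link: 0.0 for link in rates}
    order = []
    for _ in range(n_packets):
        for link in credits:
            credits[link] += rates[link]
        best = max(credits, key=credits.get)   # link with the most accumulated credit
        order.append(best)
        credits[best] -= sum(rates.values())
    return order

# Invented aggregate of a 5 GHz radio and a TVWS radio (rates in Mbit/s).
print(weighted_schedule({"5GHz": 54.0, "TVWS": 18.0}, 8))
# -> roughly three 5GHz slots for every TVWS slot
```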
- Item (Open Access): A problem solving system employing a formal approach to means (1976). Finnie, Gavin Ross; McGregor, Ken.
  The thesis describes the theory and design of a general problem-solving system. The system uses a single general heuristic based on a formal definition of differences within the framework of means/ends analysis and employs tree search during problem solution. A comparison is made with two other systems using means/ends analysis. The conditions under which the system is capable of solving problems are investigated and the efficiency of the system is considered. The system has solved a variety of problems of varying complexity and the difference heuristic appears comparatively accurate for goal-directed search within certain limits.
- Item (Open Access): A semantic Bayesian network for automated share evaluation on the JSE (2021). Drake, Rachel; Moodley, Deshendran.
  Advances in information technology have presented the potential to automate investment decision making processes. This will alleviate the need for manual analysis and reduce the subjective nature of investment decision making. However, there are different investment approaches and perspectives for investing which makes acquiring and representing expert knowledge for share evaluation challenging. Current decision models often do not reflect the real investment decision making process used by the broader investment community or may not be well-grounded in established investment theory. This research investigates the efficacy of using ontologies and Bayesian networks for automating share evaluation on the JSE. The knowledge acquired from an analysis of the investment domain and the decision-making process for a value investing approach was represented in an ontology. A Bayesian network was constructed based on the concepts outlined in the ontology for automatic share evaluation. The Bayesian network allows decision makers to predict future share performance and provides an investment recommendation for a specific share. The decision model was designed, refined and evaluated through an analysis of the literature on value investing theory and consultation with expert investment professionals. The performance of the decision model was validated through back testing and measured using return and risk-adjusted return measures. The model was found to provide superior returns and risk-adjusted returns for the evaluation period from 2012 to 2018 when compared to selected benchmark indices of the JSE. The result is a concrete share evaluation model grounded in investing theory and validated by investment experts that may be employed, with small modifications, in the field of value investing to identify shares with a higher probability of positive risk-adjusted returns.
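As a toy illustration of the kind of inference such a Bayesian network performs (this is not the thesis model, and every probability below is invented), a single evidence node can update the probability of future out-performance via Bayes' rule:

```python
# Toy single-node Bayesian update: evidence about a value-investing indicator
# shifts the probability that a share will outperform. All numbers are made up.
prior_outperform = 0.40   # P(outperform)
p_cheap_given_out = 0.70  # P(low P/E | outperform)
p_cheap_given_not = 0.30  # P(low P/E | not outperform)

def posterior_outperform(evidence_cheap: bool) -> float:
    """Bayes' rule for one binary evidence node."""
    if evidence_cheap:
        num = p_cheap_given_out * prior_outperform
        den = num + p_cheap_given_not * (1 - prior_outperform)
    else:
        num = (1 - p_cheap_given_out) * prior_outperform
        den = num + (1 - p_cheap_given_not) * (1 - prior_outperform)
    return num / den

print("P(outperform | low P/E) =", round(posterior_outperform(True), 3))
```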
- Item (Open Access): A tool to assess the feasibility of VOIP for contact centres (2005). Venter, Anton; Marsden, Gary.
  With Voice-over-Internet-Protocol (VOIP), voice calls travel over the same network as data, potentially making the voice network redundant and thereby reducing an organisation's investment in network infrastructure and its support and administration costs. Since voice is the primary communication medium for customer servicing, other benefits could potentially be realised when VOIP is applied in contact centres. However, the feasibility of VOIP depends on many factors, which makes the evaluation of its feasibility a complex issue. This research proposes an assessment tool to evaluate the feasibility of VOIP in the contact centre(s) of a business, given the current and intended characteristics of the contact centre and its technology infrastructure. Execution of the assessment requires input from an individual familiar with the current contact centre and its basic technology infrastructure, rather than VOIP itself. From past implementations of VOIP and the literature available, this research identifies the relevant factors that influence the feasibility of VOIP. These are used to formulate questions that make up a questionnaire. The answers to the questionnaire are applied to a calculation to produce an overall rating of the feasibility of VOIP for the organisation's particular situation. The assessment tool was implemented as a web-based interactive application, which interrogates a user by way of the questionnaire and immediately gives a "score" indicating the feasibility of VOIP as a new technology. The resulting tool also indicates which factors made a considerable negative contribution towards VOIP not being feasible for the particular organisation.
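The abstract does not give the scoring formula; a weighted sum over questionnaire answers is one plausible shape for such a calculation, sketched below with purely hypothetical factors, weights and scale.

```python
# Hypothetical weighted-sum feasibility rating; factor names, weights and the
# 1-5 answer scale are invented for illustration, not taken from the thesis.
answers = {                 # 1 (unfavourable) .. 5 (favourable)
    "network_capacity": 4,
    "call_volume": 5,
    "existing_pbx_age": 2,
    "it_skills": 3,
}
weights = {
    "network_capacity": 0.35,
    "call_volume": 0.25,
    "existing_pbx_age": 0.20,
    "it_skills": 0.20,
}

score = sum(weights[k] * answers[k] for k in answers) / 5 * 100
print(f"VOIP feasibility score: {score:.0f}/100")
```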
- Item (Open Access): A user interface for terrain modelling in virtual reality using a head mounted display (2021). Gwynn, Timothy; Gain, James.
  The increased commercial availability of virtual reality (VR) devices has resulted in more content being created for virtual environments (VEs). This content creation has mainly taken place using traditional desktop systems, but certain applications are now integrating VR into the creation pipeline. Therefore, we look at the effectiveness of creating content, specifically designing terrains, for use in immersive environments using VR technology. To do this, we develop a VR interface for terrain creation based on an existing desktop application. The interface incorporates a head-mounted display and 6-degree-of-freedom controllers. This allows the mapping of user controls to more natural movements compared to the abstract controls in mouse and keyboard based systems. It also means that users can view the terrain in full 3D due to the inherent stereoscopy of the VR display. The interface goes through three iterations of user-centred design and testing. This results in paper and low-fidelity prototypes being created before the final interface is developed. The performance of this final VR interface is then compared to the desktop interface on which it was based. We carry out user tests to assess the performance of each interface in terms of speed, accuracy and usability. From our results we find that there is no significant difference between the interfaces when it comes to accuracy, but that the desktop interface is superior in terms of speed while the VR interface was rated as having higher usability. Some of the possible reasons for these results, such as users preferring the natural interactions offered by the VR interface but not having sufficient training to fully take advantage of it, are discussed. Finally, we conclude that while it was not shown that either interface is clearly superior, there is certainly room for further exploration of this research area. Recommendations for how to incorporate lessons learned during the creation of this dissertation into any further research are also made.
- Item (Open Access): A virtual environment authoring interface for content-expert authors (2005). Tangkuampien, Jakkaphan; Marsden, Gary; Blake, Edwin H.
  [pg 47 missing] Since the advent of virtual reality (VR), the technology has been exploited in many areas to aid information transfer. In this respect, virtual reality can be regarded as a medium across which authors can communicate with a target group. However, many experts in non-computer-related areas looking to exploit VR often come unstuck trying to take advantage of this medium. In these cases, one cannot blame these content-expert authors, as they have successfully exploited other media prior to VR. On the other hand, the fault cannot lie with the medium itself, since it has been effectively exploited by other groups of authors. One probable cause could be the authoring tools themselves, or rather their interfaces, to be more accurate. A tool's authoring interface is the only access point into the VR medium, and one can only assume that the interfaces are not doing their job effectively. Our study was aimed at investigating authoring interfaces, especially from the point of view of content-expert authors. Our approach was to involve such authors who have been able to master existing authoring tools mostly on their own. These authors were in a unique position: having managed to overcome initial difficulties, they have come to understand the inner workings of the medium itself. The study was also well suited to the appreciative inquiry (AI) methodology, a community-centric methodology that has rarely been applied in the area of computer science. Appreciative inquiry, with its roots in action research, encourages a similar spiral-based methodology but with a positive approach in all phases. With a group of content-expert VR authors, we applied a cycle of AI, resulting first in a list of interface issues that required attention, as well as some idea of how they could be resolved. The second phase of AI involved working closely with the authors to come up with resolution strategies for each of these issues. These solutions were then assessed, by another group of content-expert authors, for the level at which they addressed their respective issues. Finally, an online survey was conducted to extend our results to the wider population of content-expert authors. The survey results confirmed that the interface issues discovered applied to the general population and that the proposed solutions were generally thought to be advantageous to the authoring process. Additionally, these positive results were encouraging, since they meant that our adaptation of AI was successful.
- Item (Open Access): Accelerated cooperative co-evolution on multi-core architectures (2018). Moyo, Edmore; Kuttel, Michelle; Nitschke, Geoff Stuart.
  The Cooperative Co-Evolution (CC) model has been used in Evolutionary Computation (EC) to optimize the training of artificial neural networks (ANNs). This architecture has proven to be a useful extension to domains such as Neuro-Evolution (NE), which is the training of ANNs using concepts of natural evolution. However, the need for real-time systems and the ability to solve more complex tasks has prompted a further need to optimize these CC methods. CC methods consist of a number of phases; however, the evaluation phase is still the most compute-intensive phase, for some complex tasks taking as long as weeks to complete. This study uses NE as a case study: we design a parallel CC processing framework and implement optimized serial and parallel versions using the Go programming language. Go is a multi-core programming language whose first-class concurrency constructs, channels and goroutines, make it well suited to parallel programming. Our study focuses on Enforced Subpopulations (ESP) for single-agent systems and Multi-Agent ESP for multi-agent systems. We evaluate the parallel versions on the benchmark tasks of double pole balancing and prey-capture, for single- and multi-agent systems respectively, in tasks of increasing complexity. We observe a maximum speed-up of 20x for the parallel Multi-Agent ESP implementation over our single-core optimized version in the prey-capture task, and a maximum speed-up of 16x for ESP in the harder version of the double pole balancing task. We also observe linear speed-ups for the difficult versions of the tasks for a certain range of cores, indicating that the Go implementations are efficient and that the parallel speed-ups are better for more complex tasks. We find that in complex tasks, Cooperative Co-Evolution Neuro-Evolution (CCNE) methods are amenable to multi-core acceleration, which provides a basis for the study of even more complex CC methods in a wider range of domains.
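The thesis parallelises the evaluation phase with Go goroutines and channels; the Python sketch below illustrates the same idea with a process pool, using a stand-in fitness function rather than a pole-balancing or prey-capture simulator.

```python
# Generic sketch of parallelising a CC evaluation phase: evaluate each genome
# in a worker pool. The fitness function here is a placeholder.
from concurrent.futures import ProcessPoolExecutor

def evaluate(genome):
    # Stand-in for running a full simulation trial for this genome.
    return sum(g * g for g in genome)

def evaluate_population(population, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(evaluate, population))

if __name__ == "__main__":
    population = [[0.1 * i, -0.2 * i, 0.3] for i in range(100)]
    fitnesses = evaluate_population(population)
    print(min(fitnesses), max(fitnesses))
```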
- Item (Open Access): Accelerated coplanar facet radio synthesis imaging (2016). Hugo, Benjamin; Gain, James; Smirnov, Oleg; Tasse, Cyril.
  Imaging in radio astronomy entails the Fourier inversion of the relation between the sampled spatial coherence of an electromagnetic field and the intensity of its emitting source. This inversion is normally computed by performing a convolutional resampling step and applying the Inverse Fast Fourier Transform, because this leads to computational savings. Unfortunately, the resulting planar approximation of the sky is only valid over small regions. When imaging over wider fields of view, and in particular using telescope arrays with long non-East-West components, significant distortions are introduced in the computed image. We propose a coplanar faceting algorithm, where the sky is split up into many smaller images. Each of these narrow-field images is further corrected using a phase-correcting technique known as w-projection. This eliminates the projection error along the edges of the facets and ensures approximate coplanarity. The combination of faceting and w-projection approaches alleviates the memory constraints of previous w-projection implementations. We compared the scaling performance of both single- and double-precision resampled images in both an optimized multi-threaded CPU implementation and a GPU implementation that uses a memory-access-limiting work distribution strategy. We found that such a w-faceting approach scales slightly better than a traditional w-projection approach on GPUs. We also found that double-precision resampling on GPUs is about 71% slower than its single-precision counterpart, making double-precision resampling on GPUs less power efficient than CPU-based double-precision resampling. Lastly, we have seen that employing only single precision in the resampling summations produces significant error in continuum images for a MeerKAT-sized array over long observations, especially when employing the large convolution filters necessary to create large images.
- Item (Open Access): Accelerated deconvolution of radio interferometric images using orthogonal matching pursuit and graphics hardware (2016). Van Belle, Jonathan; Gain, James E; Armstrong, Richard.
  Deconvolution of native radio interferometric images constitutes a major computational component of the radio astronomy imaging process. An efficient and robust deconvolution operation is essential for reconstruction of the true sky signal from measured correlator data. Traditionally, radio astronomers have mostly used the CLEAN algorithm, and variants thereof. However, the techniques of compressed sensing provide a mathematically rigorous framework within which deconvolution of radio interferometric images can be implemented. We present an accelerated implementation of the orthogonal matching pursuit (OMP) algorithm (a compressed sensing method) that makes use of graphics processing unit (GPU) hardware, and show significant accuracy improvements over the standard CLEAN. In particular, we show that OMP correctly identifies more sources than CLEAN, identifying up to 82% of the sources in 100 test images, while CLEAN only identifies up to 61% of the sources. In addition, the residual after source extraction is 2.7 times lower for OMP than for CLEAN. Furthermore, the GPU implementation of OMP performs around 23 times faster than a 4-core CPU.
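A minimal CPU sketch of orthogonal matching pursuit (not the GPU implementation evaluated above): greedily pick the dictionary column most correlated with the current residual, then re-fit all selected columns by least squares. The dictionary and sparse signal below are synthetic.

```python
# Basic OMP: at each step select the best-matching atom and solve a
# least-squares problem over the selected support.
import numpy as np

def omp(A, y, n_nonzero):
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(64, 256))
A /= np.linalg.norm(A, axis=0)            # unit-norm dictionary columns
x_true = np.zeros(256)
x_true[[10, 50, 200]] = [1.0, -0.5, 2.0]  # three synthetic "sources"
y = A @ x_true
print(np.nonzero(omp(A, y, 3))[0])        # recovers the three source positions
```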
- Item (Open Access): Accelerating genomic sequence alignment using high performance reconfigurable computers (2008). McMahon, Peter Leonard; Kuttel, Michelle Mary.
  Reconfigurable computing technology has progressed to a stage where it is now possible to achieve orders of magnitude performance and power efficiency gains over conventional computer architectures for a subset of high performance computing applications. In this thesis, we investigate the potential of reconfigurable computers to accelerate genomic sequence alignment specifically for genome sequencing applications. We present a highly optimized implementation of a parallel sequence alignment algorithm for the Berkeley Emulation Engine (BEE2) reconfigurable computer, allowing a single BEE2 to align simultaneously hundreds of sequences. For each reconfigurable processor (FPGA), we demonstrate a 61X speedup versus a state-of-the-art implementation on a modern conventional CPU core, and a 56X improvement in performance-per-Watt. We also show that our implementation is highly scalable and we provide performance results from a cluster implementation using 32 FPGAs. We conclude that reconfigurable computers provide an excellent platform on which to run sequence alignment, and that clusters of reconfigurable computers will be able to cope far more easily with the vast quantities of data produced by new ultra-high-throughput sequencers.
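As a software reference point for the kind of computation such FPGA pipelines accelerate, the classic Smith-Waterman local-alignment score can be computed cell by cell as below; the thesis's exact algorithm and scoring scheme are not stated in the abstract, so both are assumptions here.

```python
# Smith-Waterman local-alignment score via dynamic programming.
# Match/mismatch/gap scores are illustrative defaults, not the thesis's.
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]   # DP matrix, zero-initialised
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman_score("GATTACA", "GCATGCA"))
```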