Browsing by Subject "Computer Science"
Now showing 1 - 20 of 275
- Item (Open Access): 3D Scan Campaign Classification with Representative Training Scan Selection (2019). Pocock, Christopher; Marais, Patrick. Point cloud classification has been shown to effectively classify points in 3D scans, and can accelerate manual tasks like the removal of unwanted points from cultural heritage scans. However, a classifier's performance depends on which classifier and feature set is used, and choosing these is difficult since previous approaches may not generalise to new domains. Furthermore, when choosing training scans for campaign-based classification, it is important to identify a descriptive set of scans that represent the rest of the campaign. However, this task is increasingly onerous for large and diverse campaigns, and randomly selecting scans does not guarantee a descriptive training set. To address these challenges, a framework including three classifiers (Random Forest (RF), Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP)) and various point features and feature selection methods was developed. The framework also includes a proposed automatic representative scan selection method, which uses segmentation and clustering to identify balanced, similar or distinct training scans. The framework was evaluated on four labelled datasets, including two cultural heritage campaigns, to compare the speed and accuracy of the implemented classifiers and feature sets, and to determine if the proposed selection method identifies scans that yield a more accurate classifier than random selection. It was found that the RF, paired with a complete multi-scale feature set including covariance, geometric and height-based features, consistently achieved the highest overall accuracy on the four datasets. However, the other classifiers and reduced sets of selected features achieved similar accuracy and, in some cases, greatly reduced training and prediction times. It was also found that the proposed training scan selection method can, on particularly diverse campaigns, yield a more accurate classifier than random selection. However, for homogeneous campaigns where variations to the training set have limited impact, the method is less applicable. Furthermore, it is dependent on segmentation and clustering output, which require campaign-specific parameter tuning and may be imprecise.
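To make the classification pipeline above concrete, here is a minimal sketch of campaign-based point labelling with a Random Forest, assuming per-point multi-scale features have already been extracted; the feature values, label meanings and scikit-learn usage below are illustrative assumptions rather than the thesis implementation.

```python
# A minimal sketch of per-point classification with a Random Forest, assuming
# multi-scale covariance/geometric/height features have already been computed.
# All data here is random placeholder content.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

X_train = rng.normal(size=(5000, 12))    # points from the selected training scans
y_train = rng.integers(0, 2, size=5000)  # 0 = keep, 1 = unwanted (e.g. vegetation)
X_scan = rng.normal(size=(20000, 12))    # an unlabelled scan from the campaign

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)

labels = clf.predict(X_scan)             # per-point labels for the new scan
print("predicted unwanted points:", int(labels.sum()))
```

In practice, the training rows would come from the representative scans chosen by the selection method, and the classifier would then be applied scan by scan across the campaign.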
- Item (Open Access): A Bluetooth educational content distribution system modelled on a service-oriented architecture (2008). Bugembe, Kamulegeya Grace; Marsden, Gary. In this research, we design and prototype an educational content distribution system modelled on a Service-Oriented Architecture (SOA) paradigm and implemented using Web services, XML and Bluetooth technology. In the prototype, we use Sakai, an open-source Learning Management System (LMS) implemented in Java and branded Vula at the University of Cape Town (UCT). Web services (with the SOAP specification), XML and Bluetooth technology are used to integrate the disparate technologies that form the service architecture. These disparate technologies include, among others, Bluetooth-enabled mobile phones and PDAs, and services (modules) which may be running on different operating systems and deployed over Local Area Networks (LANs) or the Internet. The service is meant to leverage existing infrastructure to provide a new, cheap channel for educational content distribution to mobile devices in learning institutions, especially universities in the developing world and Africa in particular. We design, implement and evaluate the prototype for performance and scalability. During the design and implementation of the architecture, we incorporate SOA principles of service/module re-use, service composition, loose coupling, standard data exchange within the system or services, and extensibility of the services, among others. The aim of the service is to distribute educational content uploaded to Learning Management Systems (LMSs) to Bluetooth-enabled mobile devices that are increasingly held by students in developing-world universities. The service is intended to supplement existing Web-based and lecture-room content distribution channels by opening up the mobile device space. For the prototype, we focus on repackaging structured text content and distributing it to Bluetooth-enabled phones and PDAs using Bluetooth technology. We evaluate our prototype for performance using experimental studies.
- Item (Open Access): A comparison of mobile search interfaces for isiXhosa speakers (2018). Modise, Morebodi; Suleman, Hussein. Search interfaces have for a long time been targeted at resource-rich languages such as English. There has been little effort to support African (Bantu) languages in search interfaces, particularly the isiXhosa language, when compared to languages such as English. However, due to the increase in use of mobile phones in developing countries, these interfaces can now be adapted to languages in these settings to support information access on the Web. This study proposes mobile search interfaces to support isiXhosa speakers to search for information on the Web using isiXhosa as a discovery language. The isiXhosa language is considered a low-resourced African (Bantu) language spoken in resource-constrained environments in South Africa. The language is spoken by over eight million people. Yet, there has been no search interface specifically targeted at supporting isiXhosa speakers. Two mobile search interfaces were developed on an Android application. The interfaces were text based and voice based. The design of the interfaces was based on feedback from 4 native isiXhosa speakers in a design focus group, and guidelines from the literature. Using the developed interfaces, an experiment was conducted with 34 native isiXhosa-speaking students at the University of Cape Town, South Africa. This was done to investigate which interface could better support isiXhosa speakers to search for information on the Web using mobile phones. Quantitative data was collected using application log files. Additionally, user feedback was obtained using the standard Software Usability Measurement Inventory (SUMI) instrument, and both interfaces were confirmed as usable. In contrast to what was expected, users preferred the text interface in general, and according to most SUMI subscales. This could be because of greater familiarity with text search interfaces or because of the relative scarcity of voice interfaces in African (Bantu) languages. Where users are not literate, the voice interface may be the only option, so the fact that it was deemed usable is an important independent finding. Search in African (Bantu) language collections is still a largely unexplored field, and more work needs to be done on the interfaces as the algorithms and collections are developed in parallel.
- Item (Open Access): A tool to assess the feasibility of VoIP for contact centres (2005). Venter, Anton; Marsden, Gary. With Voice-over-Internet-Protocol (VoIP), voice calls travel over the same network as data, potentially making the voice network redundant and thereby reducing an organisation's investment in network infrastructure and its support and administration costs. Since voice is the primary communication medium for customer servicing, other benefits could potentially be realised when VoIP is applied in contact centres. However, the feasibility of VoIP depends on many factors, which makes evaluating its feasibility a complex issue. This research proposes an assessment tool to evaluate the feasibility of VoIP in the contact centre(s) of a business, given the current and intended characteristics of the contact centre and its technology infrastructure. Execution of the assessment requires input from an individual familiar with the current contact centre and its basic technology infrastructure, rather than with VoIP itself. From past implementations of VoIP and the available literature, this research identifies the relevant factors that influence the feasibility of VoIP. These are used to formulate questions that make up a questionnaire. The answers to the questionnaire are applied to a calculation to produce an overall rating of the feasibility of VoIP for the organisation's particular situation. The assessment tool was implemented as a web-based interactive application, which interrogates a user by way of the questionnaire and immediately gives a "score" indicating the feasibility of VoIP as a new technology. The resulting tool also indicates which factors made a considerable negative contribution towards VoIP not being feasible for the particular organisation.
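As an illustration of the kind of questionnaire-to-rating calculation described above, a weighted scoring sketch might look as follows; the factor names, weights and formula are hypothetical assumptions, not those used by the thesis tool.

```python
# A minimal sketch of a weighted feasibility rating: each questionnaire answer
# contributes a weighted score and the result is normalised to 0-100.
# Factors, weights and answer values below are invented for illustration.
FACTORS = {
    # factor: (weight, answer score in [0, 1])
    "network_capacity":      (0.30, 0.8),
    "existing_pbx_age":      (0.20, 0.4),
    "call_volume":           (0.25, 0.9),
    "it_support_capability": (0.25, 0.6),
}

def feasibility_score(factors):
    """Return a 0-100 feasibility rating from weighted answers."""
    total_weight = sum(w for w, _ in factors.values())
    weighted = sum(w * s for w, s in factors.values())
    return 100.0 * weighted / total_weight

print(f"VoIP feasibility score: {feasibility_score(FACTORS):.0f}/100")
```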
- Item (Open Access): A virtual environment authoring interface for content-expert authors (2005). Tangkuampien, Jakkaphan; Marsden, Gary; Blake, Edwin H. [pg 47 missing] Since the advent of virtual reality (VR), the technology has been exploited in many areas to aid information transfer. In this respect, virtual reality can be regarded as a medium across which authors can communicate with a target group. However, many experts in non-computer-related areas looking to exploit VR often come unstuck trying to take advantage of this medium. In these cases, one cannot blame these content-expert authors, as they have successfully exploited other media prior to VR. On the other hand, the fault cannot lie with the medium itself, since it has been effectively exploited by other groups of authors. One probable cause could be the authoring tools themselves, or rather their interfaces, to be more accurate. A tool's authoring interface is the only access point into the VR medium, and one can only assume that the interfaces are not doing their job effectively. Our study was aimed at investigating authoring interfaces, especially from the point of view of content-expert authors. Our approach was to involve such authors who have been able to master existing authoring tools mostly on their own. These authors were in a unique position: having managed to overcome initial difficulties, they have come to understand the inner workings of the medium itself. The study was also well suited to the appreciative inquiry (AI) methodology, a community-centric methodology that has rarely been applied in the area of computer science. Appreciative inquiry, with its roots in action research, encourages a similar spiral-based methodology but with a positive approach in all phases. With a group of content-expert VR authors, we applied a cycle of AI, resulting first in a list of interface issues that required attention, as well as some idea of how they could be resolved. The second phase of AI involved working closely with the authors to come up with resolution strategies for each of these issues. These solutions were then assessed, by another group of content-expert authors, for the level at which they addressed their respective issues. Finally, an online survey was conducted to extend our results to the wider population of content-expert authors. The survey results confirmed that the interface issues discovered applied to the general population and that the proposed solutions were generally thought to be advantageous to the authoring process. Additionally, these positive results were encouraging since they mean that our adaptation of AI was successful.
- Item (Open Access): Accelerated coplanar facet radio synthesis imaging (2016). Hugo, Benjamin; Gain, James; Smirnov, Oleg; Tasse, Cyril. Imaging in radio astronomy entails the Fourier inversion of the relation between the sampled spatial coherence of an electromagnetic field and the intensity of its emitting source. This inversion is normally computed by performing a convolutional resampling step and applying the Inverse Fast Fourier Transform, because this leads to computational savings. Unfortunately, the resulting planar approximation of the sky is only valid over small regions. When imaging over wider fields of view, and in particular using telescope arrays with long non-East-West components, significant distortions are introduced in the computed image. We propose a coplanar faceting algorithm, where the sky is split up into many smaller images. Each of these narrow-field images is further corrected using a phase-correcting technique known as w-projection. This eliminates the projection error along the edges of the facets and ensures approximate coplanarity. The combination of faceting and w-projection approaches alleviates the memory constraints of previous w-projection implementations. We compared the scaling performance of both single and double precision resampled images in both an optimized multi-threaded CPU implementation and a GPU implementation that uses a memory-access-limiting work distribution strategy. We found that such a w-faceting approach scales slightly better than a traditional w-projection approach on GPUs. We also found that double precision resampling on GPUs is about 71% slower than its single precision counterpart, making double precision resampling on GPUs less power efficient than CPU-based double precision resampling. Lastly, we have seen that employing only single precision in the resampling summations produces significant error in continuum images for a MeerKAT-sized array over long observations, especially when employing the large convolution filters necessary to create large images.
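For readers unfamiliar with the inversion step described above, the following sketch grids synthetic visibilities onto a uv-plane and applies an inverse FFT to form a dirty image. It uses simple nearest-cell gridding rather than the convolutional resampling, faceting and w-projection corrections of the thesis; all sizes and data are illustrative.

```python
# A minimal sketch of visibility gridding followed by an inverse FFT to form a
# dirty image. Nearest-cell gridding only; no convolution kernel, weighting or
# w-correction. Visibilities are random stand-in data.
import numpy as np

N = 256                      # image/grid size in pixels
cell = 1.0                   # uv cell size (arbitrary units)

rng = np.random.default_rng(1)
u = rng.uniform(-100, 100, 5000)          # sampled baseline coordinates
v = rng.uniform(-100, 100, 5000)
vis = rng.normal(size=5000) + 1j * rng.normal(size=5000)   # measured visibilities

grid = np.zeros((N, N), dtype=complex)
iu = np.round(u / cell).astype(int) + N // 2   # nearest uv cell
iv = np.round(v / cell).astype(int) + N // 2
np.add.at(grid, (iv, iu), vis)                 # accumulate visibilities per cell

dirty = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid))).real
print("dirty image peak:", dirty.max())
```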
- Item (Open Access): Accelerated deconvolution of radio interferometric images using orthogonal matching pursuit and graphics hardware (2016). Van Belle, Jonathan; Gain, James E; Armstrong, Richard. Deconvolution of native radio interferometric images constitutes a major computational component of the radio astronomy imaging process. An efficient and robust deconvolution operation is essential for reconstruction of the true sky signal from measured correlator data. Traditionally, radio astronomers have mostly used the CLEAN algorithm, and variants thereof. However, the techniques of compressed sensing provide a mathematically rigorous framework within which deconvolution of radio interferometric images can be implemented. We present an accelerated implementation of the orthogonal matching pursuit (OMP) algorithm (a compressed sensing method) that makes use of graphics processing unit (GPU) hardware, and show significant accuracy improvements over the standard CLEAN. In particular, we show that OMP correctly identifies more sources than CLEAN, identifying up to 82% of the sources in 100 test images, while CLEAN only identifies up to 61% of the sources. In addition, the residual after source extraction is 2.7 times lower for OMP than for CLEAN. Furthermore, the GPU implementation of OMP performs around 23 times faster than a 4-core CPU.
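The following is a minimal NumPy sketch of orthogonal matching pursuit itself, the greedy sparse-recovery loop referred to above: select the dictionary column most correlated with the residual, re-fit by least squares over the selected columns, and repeat. The synthetic dictionary and sparse signal are illustrative; the thesis applies OMP to interferometric images on GPU hardware.

```python
# A minimal sketch of orthogonal matching pursuit (OMP) for sparse recovery.
import numpy as np

def omp(A, y, n_nonzero):
    """Recover a sparse x with y ~= A @ x using orthogonal matching pursuit."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        # Pick the column best correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit all selected coefficients by least squares.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coeffs
        residual = y - A @ x
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(128, 512))                  # synthetic dictionary
x_true = np.zeros(512); x_true[[10, 99, 300]] = [1.5, -2.0, 0.7]
y = A @ x_true
x_hat = omp(A, y, n_nonzero=3)
print("recovered support:", np.nonzero(x_hat)[0])
```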
- Item (Open Access): Accelerating genomic sequence alignment using high performance reconfigurable computers (2008). McMahon, Peter Leonard; Kuttel, Michelle Mary. Reconfigurable computing technology has progressed to a stage where it is now possible to achieve orders of magnitude performance and power efficiency gains over conventional computer architectures for a subset of high performance computing applications. In this thesis, we investigate the potential of reconfigurable computers to accelerate genomic sequence alignment specifically for genome sequencing applications. We present a highly optimized implementation of a parallel sequence alignment algorithm for the Berkeley Emulation Engine (BEE2) reconfigurable computer, allowing a single BEE2 to align hundreds of sequences simultaneously. For each reconfigurable processor (FPGA), we demonstrate a 61X speedup versus a state-of-the-art implementation on a modern conventional CPU core, and a 56X improvement in performance-per-Watt. We also show that our implementation is highly scalable and we provide performance results from a cluster implementation using 32 FPGAs. We conclude that reconfigurable computers provide an excellent platform on which to run sequence alignment, and that clusters of reconfigurable computers will be able to cope far more easily with the vast quantities of data produced by new ultra-high-throughput sequencers.
- Item (Open Access): Accelerating point cloud cleaning (2017). Mulder, Rickert; Marais, Patrick. Capturing the geometry of a large heritage site via laser scanning can produce thousands of high resolution range scans. These must be cleaned to remove unwanted artefacts. We identified three areas that can be improved upon in order to accelerate the cleaning process. Firstly, the speed at which a user can navigate to an area of interest has a direct impact on task duration. Secondly, design constraints in generalised point cloud editing software result in inefficient abstraction of layers that may extend a task duration due to memory pressure. Finally, existing semi-automated segmentation tools have difficulty targeting the diverse set of segmentation targets in heritage scans. We present a point cloud cleaning framework that attempts to improve each of these areas. First, we present a novel layering technique aimed at segmentation, rather than generic point cloud editing. This technique represents 'layers' of related points in a way that greatly reduces memory consumption and provides efficient set operations between layers. These set operations (union, difference, intersection) allow the creation of new layers which aid in the segmentation task. Next, we introduce roll-corrected 3D camera navigation that allows a user to look around freely while reducing disorientation. A user study shows that this camera mode significantly reduces a user's navigation time (29.8% to 57.8%) between locations in a large point cloud, thus reducing the overhead between point selection operations. Finally, we show how Random Forests can be trained interactively, per scan, to assist users in a point cloud cleaning task. We use a set of features selected for their discriminative power on a set of challenging heritage scans. Interactivity is achieved by down-sampling training data on the fly. A simple map data structure allows us to propagate labels in the down-sampled data back to the input point set. We show that training and classification on down-sampled point clouds can be performed in under 10 seconds with little effect on accuracy. A user study shows that a user's total segmentation time decreases between 8.9% and 20.4% when our Random Forest classifier is used. Although this initial study did not indicate a significant difference in overall task performance when compared to manual segmentation, performance improvement is likely with multi-resolution features or the use of colour range images, which are now commonplace.
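A minimal sketch of the down-sample/train/propagate idea described above, assuming scikit-learn: a Random Forest is trained on a down-sampled subset of labelled points and its predictions are pushed back to the full point set via a nearest-neighbour lookup (a KD-tree here stands in for the thesis's simple map structure). Features, labels and sizes are placeholders.

```python
# A minimal sketch of interactive per-scan training on down-sampled points,
# with labels propagated back to the full-resolution scan. All arrays below
# are random placeholders for real per-point features and user strokes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KDTree

rng = np.random.default_rng(3)
points = rng.normal(size=(200_000, 3))        # full-resolution scan (xyz)
features = rng.normal(size=(200_000, 8))      # per-point features (placeholder)
labels = rng.integers(0, 2, size=200_000)     # user-provided labels (placeholder)

keep = rng.choice(len(points), size=20_000, replace=False)   # down-sample

clf = RandomForestClassifier(n_estimators=50, n_jobs=-1, random_state=0)
clf.fit(features[keep], labels[keep])
coarse_pred = clf.predict(features[keep])

# Propagate coarse predictions back to every original point.
tree = KDTree(points[keep])
nearest = tree.query(points, k=1, return_distance=False).ravel()
full_pred = coarse_pred[nearest]
print("labelled points:", full_pred.shape[0])
```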
- Item (Open Access): Accelerating radio transient detection using the Bispectrum algorithm and GPGPU (2015). Lin, Tsu-Shiuan; Gain, James; Armstrong, Richard. Modern radio interferometers such as those in the Square Kilometre Array (SKA) project are powerful tools to discover completely new classes of astronomical phenomena. Amongst these phenomena are radio transients. Transients are bursts of electromagnetic radiation, and their study is an exciting area of research, as localizing pulsars (transient emitters) allows physicists to test and formulate theories on strong gravitational forces. Current methods for detecting transients require an image of the sky to be produced at every time step. As interferometers grow and provide ever larger data sets, the computational demands of producing these images become infeasible. Law and Bower (2012) formulated a different approach by using a closure quantity known as the "bispectrum": the product of visibilities around a closed loop of antennae. The proposed algorithm has been shown to be easily parallelized and suitable for Graphics Processing Units (GPUs). Recent advancements in the field of many-core technology such as GPUs have demonstrated significant performance enhancements to many scientific applications. A GPU implementation of the bispectrum algorithm has yet to be explored. In this thesis, we present a number of modified implementations of the bispectrum algorithm, allowing both instruction-level and data-level parallelism. Firstly, a multi-threaded CPU version is developed in C++ using OpenMP and then compared to a GPU version developed using the Compute Unified Device Architecture (CUDA). In order to verify the validity of the implementations presented, the implementations were first run on simulated data created with MeqTrees: a tool for simulating transients developed by the SKA. Thereafter, data from the Karl Jansky Very Large Array (JVLA) containing the B0355+54 pulsar was used to test the implementation on real data. This research concludes that the bispectrum algorithm is well suited for both CPU and GPU implementations, as we achieved a 3.2x speed-up on a 4-core multi-threaded CPU implementation over a single-threaded implementation. The GPU implementation on a GTX670 achieved about a 20 times speed-up over the multi-threaded CPU implementation. These results show that the bispectrum algorithm will open doors to a series of efficient transient surveys suitable for modern data-intensive radio interferometers.
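A minimal sketch of the closure quantity at the heart of the approach above: for each closed triangle of antennas (i, j, k), the bispectrum is the product V_ij * V_jk * V_ki of the three visibilities, and detection statistics are formed over all triangles per time step. The visibility matrix below is random stand-in data, not correlator output.

```python
# A minimal sketch of computing the bispectrum over all antenna triangles for
# a single time step and channel. Visibilities here are random placeholders.
import numpy as np
from itertools import combinations

n_ant = 8
rng = np.random.default_rng(4)
# Visibility matrix V[i, j] with V[j, i] = conj(V[i, j]).
V = rng.normal(size=(n_ant, n_ant)) + 1j * rng.normal(size=(n_ant, n_ant))
V = (V + V.conj().T) / 2   # make it Hermitian, as an illustrative stand-in

bispectra = [V[i, j] * V[j, k] * V[k, i]
             for i, j, k in combinations(range(n_ant), 3)]
mean_bispectrum = np.mean(bispectra)
print(len(bispectra), "triangles; |mean bispectrum| =", round(abs(mean_bispectrum), 3))
```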
- Item (Open Access): Acceleration of the noise suppression component of the DUCHAMP source-finder (2015). Badenhorst, Scott James; Kuttel, Michelle Mary; Blyth, Sarah-Louise. The next generation of radio interferometer arrays - the proposed Square Kilometre Array (SKA) and its precursor instruments, the Karoo Array Telescope (MeerKAT) and the Australian Square Kilometre Array Pathfinder (ASKAP) - will produce radio observation survey data orders of magnitude larger than current sizes. The sheer size of the imaged data produced necessitates fully automated solutions to accurately locate and produce useful scientific data for radio sources which are (for the most part) partially hidden within inherently noisy radio observations (source extraction). Automated extraction solutions exist, but are computationally expensive and do not yet scale to the performance required to process large data in practical time-frames. The DUCHAMP software package is one of the most accurate source extraction packages for general (source shape unknown) source finding. DUCHAMP's accuracy is primarily facilitated by the à trous wavelet reconstruction algorithm, a multi-scale smoothing algorithm which suppresses erratic observation noise. This algorithm is the most computationally expensive and memory intensive within DUCHAMP, and consequently improvements to it greatly improve overall DUCHAMP performance. We present a high performance, multithreaded implementation of the à trous algorithm with a focus on 'desktop' computing hardware, to enable standard researchers to do their own accelerated searches. Our solution consists of three main areas of improvement: single-core optimisation, multi-core parallelism and the efficient out-of-core computation of large data sets with memory management libraries. Efficient out-of-core computation (data partially stored on disk when primary memory resources are exceeded) of the à trous algorithm accounts for 'desktop' computing's limited fast memory resources by mitigating the performance bottleneck associated with frequent secondary storage access. Although this work focuses on 'desktop' hardware, the majority of the improvements developed are general enough to be used within other high performance computing models. Single-core optimisations improved algorithm accuracy by reducing rounding error and achieved a 4X serial performance increase which scales with the filter size used during reconstruction. Multithreading on a quad-core CPU further increased the performance of the filtering operations within reconstruction to 22X (performance scaling approximately linearly with increased CPU cores) and achieved a 13X performance increase overall. All evaluated out-of-core memory management libraries performed poorly with parallelism. Single-threaded memory management partially mitigated the slow disk access bottleneck and achieved a 3.6X increase (uniform for all tested large data sets) for filtering operations and a 1.5X increase overall. Faster secondary storage solutions such as Solid State Drives or RAID arrays are required to process large survey data on 'desktop' hardware in practical time-frames.
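A minimal 1-D sketch of the à trous wavelet reconstruction that dominates DUCHAMP's runtime, assuming the commonly used B3-spline filter: each scale convolves with a kernel whose taps are spread apart by 2^scale, wavelet planes are thresholded, and the result is summed back. This single-threaded toy ignores the 3-D cubes, multi-core parallelism and out-of-core handling that the thesis addresses.

```python
# A minimal 1-D sketch of à trous ("with holes") wavelet denoising.
import numpy as np

B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0   # B3-spline filter taps

def a_trous_smooth(signal, scale):
    """Convolve with the B3 kernel dilated by 2**scale (zero 'holes' inserted)."""
    step = 2 ** scale
    kernel = np.zeros(4 * step + 1)
    kernel[::step] = B3
    return np.convolve(signal, kernel, mode="same")

def denoise(signal, n_scales=4, k_sigma=3.0):
    c = signal.astype(float)
    recon = np.zeros_like(c)
    for j in range(n_scales):
        smoother = a_trous_smooth(c, j)
        w = c - smoother                      # wavelet plane at scale j
        w[np.abs(w) < k_sigma * w.std()] = 0  # hard-threshold the noise
        recon += w
        c = smoother
    return recon + c                          # add back the final smooth plane

noisy = np.sin(np.linspace(0, 6 * np.pi, 512)) + np.random.default_rng(5).normal(0, 0.5, 512)
print("residual std:", float((denoise(noisy) - noisy).std()))
```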
- Item (Open Access): Access and information flow control to secure mobile web service compositions in resource constrained environments (2015). Maziya, Lwazi Enock; Kayem, Anne. The growing use of mobile web services such as electronic health records systems and applications like Twitter and Facebook has increased interest in robust mechanisms for ensuring security for such information sharing services. Common security mechanisms such as access control and information flow control are either restrictive or weak, in that they prevent applications from sharing data usefully and/or allow private information leaks when used independently. Typically, when services are composed there is a resource that some or all of the services involved in the composition need to share. However, during service composition security problems arise because the resulting service is made up of different services from different security domains. A key issue that arises, and that we address in this thesis, is that of enforcing secure information flow control during service composition to prevent illegal access and propagation of information between the participating services. This thesis describes a model that combines access control and information flow control in one framework. We specifically consider a case study of an e-health service application, and consider how constraints like location and context dependencies impact on authentication and authorization. Furthermore, we consider how data sharing applications such as the e-health service application handle issues of unauthorized users and insecure propagation of information in resource constrained environments. Our framework addresses this issue of illegitimate information access and propagation by making use of the concept of program dependence graphs (PDGs). Program dependence graphs use path conditions as necessary conditions for secure information flow control. The advantage of this approach to securing information sharing is that information is only propagated if the criteria for data sharing are verified. Our solution offers good performance and fast authentication, taking bandwidth limitations into account. A security analysis shows the theoretical improvements our scheme offers. The results obtained confirm that the framework accommodates the CIA triad (the confidentiality, integrity and availability model designed to guide information security policies) and can be used to motivate further research work in this field.
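A minimal sketch of the PDG-style check described above: information may flow from a source to a sink only if a dependence-graph path exists whose path conditions hold in the current context. The graph, guards and e-health service names below are invented for illustration; the thesis derives the dependence graphs and path conditions from the composed services themselves.

```python
# A minimal sketch of dependence-graph reachability gated by path conditions.
# Graph nodes, guards and the context dictionary are hypothetical examples.
def flows(graph, conditions, source, sink, context):
    """DFS over the dependence graph, only following edges whose guard holds."""
    stack, seen = [source], {source}
    while stack:
        node = stack.pop()
        if node == sink:
            return True
        for nxt in graph.get(node, []):
            guard = conditions.get((node, nxt), lambda ctx: True)
            if nxt not in seen and guard(context):
                seen.add(nxt)
                stack.append(nxt)
    return False

graph = {"patient_record": ["triage_service"], "triage_service": ["billing_service"]}
conditions = {
    ("patient_record", "triage_service"): lambda ctx: ctx["role"] == "clinician",
    ("triage_service", "billing_service"): lambda ctx: ctx["consent"],
}

ctx = {"role": "clinician", "consent": False}
print(flows(graph, conditions, "patient_record", "billing_service", ctx))  # False: no consent
```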
- Item (Open Access): Active shape model segmentation of brain structures in MR images of subjects with fetal alcohol spectrum disorder (2010). Eicher, Anton; Marais, Patrick; Meintjes, Ernesta. Fetal Alcohol Spectrum Disorder (FASD) is the most common form of preventable mental retardation worldwide. This condition affects children whose mothers excessively consume alcohol whilst pregnant. FASD can be identified by physical and mental defects, such as stunted growth, facial deformities, cognitive impairment, and behavioural abnormalities. Magnetic Resonance Imaging provides a non-invasive means to study the neural correlates of FASD. One such approach aims to detect brain abnormalities through an assessment of volume and shape of sub-cortical structures on high-resolution MR images.
- Item (Open Access): Adapting a novel public display system for an educational context (2010). Pedzai, Calvin. Universities in developing nations are viewed as gateways to global knowledge and as the source of human capital for their countries' economies (Juma, 2008). However, these universities face challenges in accessing educational information over the Internet due to high bandwidth costs, low literacy rates and the difficulty of setting up expensive computer labs. For example, at the University of Cape Town, labs are often overcrowded, with fewer learners gaining access to information. One innovative solution to this problem has been realized through the adoption of mobile phones as PC terminal replacements in developing countries. There has been a steady increase in the adoption of mobile phones due to their ease of use and affordability (Juma, 2008). By harnessing this technology's potential, we believe a sustainable and cost-effective solution to support student needs can be developed for universities in developing countries.
- Item (Open Access): An adaptive agent architecture for exogenous data sales forecasting (2006). Jedeikin, Jonathan; Potgieter, Anet; April, Kurt. In a world of unpredictability and complexity, sales forecasting is becoming recognised as essential to operations planning in business and industry. With increased globalisation and higher competition, more products are being developed at more locations, but with shorter product lifecycles. As technology improves, more sophisticated and increasingly complex sales forecasting systems are developed. We turn to adaptive agent architectures to consider an alternative approach for modelling complex sales forecasting systems. This research proposes modelling a sales forecasting system using an adaptive agent architecture. It additionally investigates the suitability of Bayesian networks as a sales forecasting technique. This is achieved through BaBe, an adaptive agent architecture which employs Bayesian networks as internal models. We develop a sales forecasting system for a meat wholesale company whose sales are largely affected by exogenous factors. The company's current sales forecasting approach is solely qualitative, and the nature of their sales is such that they would benefit from a reliable exogenous data sales forecasting system. We implement the system using BaBe, and incorporate a Bayesian network representing the causal relationships affecting sales. We introduce a learning adjustment component to adjust the estimated sales towards closer approximations. This is required as BaBe is currently unable to use continuous data, resulting in a loss of accuracy during discretisation. The learning adjustment additionally provides a feedback aspect, often found in adaptive agent architectures. The adjustment algorithm is based on the mean error calculation, commonly used as a sales forecasting performance measure, but is extended to incorporate a number of exogenous variables. We test the system using the holdout procedure, with a 5-fold cross validation data-splitting approach, and contrast the accuracy of the estimated sales, provided by the system, with sales estimated using a regression approach. We additionally investigate the effectiveness of the learning adjustment component.
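A minimal sketch of a mean-error learning adjustment of the kind described above: the discretised estimate from the model is shifted by the mean signed error observed on past periods, conditioned here on a single exogenous variable. The records, variable names and window are illustrative assumptions, not the BaBe implementation.

```python
# A minimal sketch of a mean-error adjustment conditioned on one exogenous
# variable. All history records and condition names are invented examples.
from collections import defaultdict

history = [  # (exogenous condition, estimated sales, actual sales)
    ("public_holiday", 950, 1010),
    ("public_holiday", 900, 980),
    ("normal_week",    1200, 1150),
    ("normal_week",    1100, 1080),
]

errors_by_condition = defaultdict(list)
for condition, estimate, actual in history:
    errors_by_condition[condition].append(actual - estimate)

def adjusted_forecast(estimate, condition):
    """Shift the model's estimate by the mean signed error seen under this condition."""
    errors = errors_by_condition.get(condition, [0])
    return estimate + sum(errors) / len(errors)

print(adjusted_forecast(1000, "public_holiday"))   # shifted up by past under-estimation
```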
- Item (Open Access): An Adjectival Interface for procedural content generation (2008). Hultquist, Carl; Gain, James; Cairns, David. In this thesis, a new interface for the generation of procedural content is proposed, in which the user describes the content that they wish to create by using adjectives. Procedural models are typically controlled by complex parameters and often require expert technical knowledge. Since people communicate with each other using language, an adjectival interface to the creation of procedural content is a natural step towards addressing the needs of non-technical and non-expert users. The key problem addressed is that of establishing a mapping between adjectival descriptors and the parameters employed by procedural models. We show how this can be represented as a mapping between two multi-dimensional spaces, adjective space and parameter space, and approximate the mapping by applying novel function approximation techniques to points of correspondence between the two spaces. These corresponding point pairs are established through a training phase, in which random procedural content is generated and then described, allowing one to map from parameter space to adjective space. Since we ultimately seek a means of mapping from adjective space to parameter space, particle swarm optimisation is employed to select a point in parameter space that best matches any given point in adjective space. The overall result is a system in which the user can specify adjectives that are then used to create appropriate procedural content, by mapping the adjectives to a suitable set of procedural parameters and employing the standard procedural technique using those parameters as inputs. In this way, none of the control offered by procedural modelling is sacrificed: although the adjectival interface is simpler, it can at any point be stripped away to reveal the standard procedural model and give users access to the full set of procedural parameters. As such, the adjectival interface can be used for rapid prototyping to create an approximation of the content desired, after which the procedural parameters can be used to fine-tune the result. The adjectival interface also serves as a means of intermediate bridging, affording users a more comfortable interface until they are fully conversant with the technicalities of the underlying procedural parameters. Finally, the adjectival interface is compared and contrasted to an interface that allows for direct specification of the procedural parameters. Through user experiments, it is found that the adjectival interface presented in this thesis is not only easier to use and understand, but also that it produces content which more accurately reflects users' intentions.
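A minimal sketch of the adjective-to-parameter inversion step described above: given a learned forward mapping from procedural parameters to adjective ratings, particle swarm optimisation searches parameter space for the point whose predicted adjectives best match the user's target. The forward mapping below is a made-up stand-in for the thesis's trained function approximator.

```python
# A minimal sketch of particle swarm optimisation over parameter space,
# minimising the distance between predicted and target adjective vectors.
# The linear/tanh forward mapping is a placeholder, not the learned one.
import numpy as np

rng = np.random.default_rng(7)
W = rng.normal(size=(3, 5))                  # placeholder learned mapping

def to_adjectives(params):                   # parameter space (5-D) -> adjective space (3-D)
    return np.tanh(params @ W.T)

def pso(target, dim=5, n_particles=30, iters=200):
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_cost = pos.copy(), np.full(n_particles, np.inf)
    gbest, gbest_cost = None, np.inf
    for _ in range(iters):
        cost = np.linalg.norm(to_adjectives(pos) - target, axis=1)
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
        if cost.min() < gbest_cost:
            gbest, gbest_cost = pos[cost.argmin()].copy(), cost.min()
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -1, 1)
    return gbest, gbest_cost

target = np.array([0.8, -0.2, 0.5])          # e.g. target ratings for three adjectives
params, err = pso(target)
print("best parameters:", np.round(params, 2), "match error:", round(float(err), 3))
```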
- Item (Open Access): Adoption of a visual model for temporal database representation (2016). Shunmugam, Tamindran; Keet, Catharina; Kuttel, Michelle Mary. Today, in the world of information technology, conceptual model representation of database schemas is challenging for users in both the Software Development Life Cycle (SDLC) and the Human-Computer Interaction (HCI) domain. The primary way to resolve this issue, in both domains, is to use a model that is concise, interpretable and clear to understand, yet encompasses all of the information required to clearly define the database. A temporal database is understood as a database capable of supporting reasoning over time-based data: for example, a temporal database can answer questions such as "for what period was Mrs Jones single before she got married?" An atemporal database, on the other hand, stores data that is valid today and has no history. In this thesis, I looked at different theoretical temporal visual conceptual models proposed by temporal researchers and aimed, by means of a user survey of business users, to ascertain which models the users prefer. I further asked the users, firstly, whether they prefer textual or graphical representations of the entities, attributes and constraints represented by the visual models; secondly, whether there is a preference for a specific graphical icon for the temporal entities; and lastly, whether the users show a preference towards a specific theoretical temporal conceptual model. The methodology employed to reach my goal in this thesis is one of experiments on business users with knowledge enhancements after each experiment. Users were to perform a task, and then, based on analysis of the task results, they were taught additional temporal aspects so as to improve their knowledge before the next experiment commenced. The ultimate aim was to extract a visual conceptual model preference from business users with enhanced knowledge of temporal aspects. This is the first work done in this field and will thus aid researchers in future work, as they will have a temporal conceptual model that promotes effective communication, understandability and interpretability.
- Item (Open Access): Adoption of ICT4D frameworks to support screening for depression in Nigerian universities (2018). Ojeme, Blessing Onuwa; Meyer, Thomas; Mbogho, Audrey. Health is fundamental to development, and access to healthcare is a major health and development issue, particularly in developing countries where preventable diseases and premature deaths still inflict a high toll. In Nigeria, for instance, under-financing and inefficient allocation of limited medical resources have led to quantitative and qualitative deficiencies in depression identification, and to growing gaps in facility and equipment upkeep. The focus of the present study is Nigerian university students, who are at higher risk of clinical depression than other populations. Besides a high crime rate, acute unemployment, terrorism, extreme poverty and serial outbreaks of disease, which are everyday life situations that trigger depression for a large proportion of the Nigerian population, Nigerian university students are faced with additional problems of poor living and academic conditions. These include constant problems of accommodation and overcrowded lecture halls caused by an increasing student population, recurrent disruptions of the academic calendar, heavy cigarette smoking and high levels of alcohol consumption. Effective prevention of medical conditions and access to healthcare resources are important factors that affect people's welfare and quality of life. Regular assessment for depression has been suggested as the first important step towards its early detection and prevention. Investigations revealed that, besides the peculiar shortage of mental health professionals in Nigeria, the near absence of modern diagnostic facilities has made the management of this potentially detrimental problem impossible. Given this national health problem, and that it would take some time before resources, especially human, can be mustered, calls have been made by several bodies for other viable means, which take cognisance of the difficulties of accessing mental healthcare, to be sought. This study is an attempt at exploring opportunities to increase flexibility in depression prevention and detection processes. The study investigated the effectiveness of developing computer-based methodologies, derived from machine learning and human-computer interaction techniques, for guiding the depression identification process in Nigerian universities. Probabilistic Bayesian networks were used to construct models from real depression datasets that included 1798 data instances, collected from the mental health unit of the University of Benin Teaching Hospital (UBTH) and a primary care centre in Nigeria. The models achieved high performance on standard metrics, including: 94.3% accuracy, 94.4% precision, 0.943 F-Measure, 0.150 RMSE, 0.923 R and 92.2% ROC. The findings from the information gain and mutual information analyses show high correlation between "depression" and "alcohol or other drug consumption", and between "depression" and "family support and availability of accommodation", but low correlation between "depression" and "cigarette smoking". The results also show high correlation between "depression" and a synergistic combination of "impaired function and alcohol and other drug consumption".
Following the User-Centered Design approach, a desktop-based screening tool was developed for use by university academic staff, as a first step, for regular screening of staff and students for depression and, where necessary, scheduling an appointment with the appropriate mental health authority for further diagnosis. Though the interesting results from the heuristic evaluations illuminate the challenges involved, they demonstrate the significance and relevance of end-user factors in the process of designing a computer-aided screening intervention, especially with respect to acceptance of the system for use in a non-clinical environment. The findings presented in this doctoral study provide compelling evidence of the huge potential that the collaboration of machine learning and usability techniques has for complementing available resources in the management of depression among the university population in Nigeria. It is hoped that, given the persistent challenges of depression, the findings will be part of the ongoing global research to encourage the adoption of ICT4D frameworks for the prevention of more serious cases by empowering other populations for early first-line depression screening.
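A minimal sketch of the feature-relevance analysis reported above: mutual information between the "depression" label and each candidate risk factor on a discretised dataset, computed here with scikit-learn. The toy records are invented; the thesis works with 1798 clinical records and a full Bayesian-network model.

```python
# A minimal sketch of mutual-information scoring of risk factors against the
# depression label. The eight records below are fabricated for illustration.
from sklearn.metrics import mutual_info_score

records = [  # (depression, alcohol_or_drug_use, family_support, smokes)
    (1, 1, 0, 1), (1, 1, 0, 0), (0, 0, 1, 0), (0, 0, 1, 1),
    (1, 1, 0, 0), (0, 0, 1, 0), (1, 0, 0, 0), (0, 1, 1, 1),
]
depression = [r[0] for r in records]
factors = {"alcohol_or_drug_use": 1, "family_support": 2, "cigarette_smoking": 3}

for name, col in factors.items():
    mi = mutual_info_score(depression, [r[col] for r in records])
    print(f"MI(depression, {name}) = {mi:.3f}")
```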
- Item (Open Access): Advancing security information and event management frameworks in managed enterprises using geolocation (2015). Khan, Herah Anwar; Hutchison, Andrew. Security Information and Event Management (SIEM) technology supports security threat detection and response through real-time and historical analysis of security events from a range of data sources. Through the retrieval of mass feedback from many components and security systems within a computing environment, SIEMs are able to correlate and analyse events with a view to incident detection. The hypothesis of this study is that existing Security Information and Event Management techniques and solutions can be complemented by location-based information provided by feeder systems. In addition, and associated with the introduction of location information, it is hypothesised that privacy-enforcing procedures on geolocation data in SIEMs and meta-systems alike are necessary and enforceable. The method for the study was to augment a SIEM, established for the collection of events in an enterprise service management environment, with geolocation data. Through introducing the location dimension, it was possible to expand the correlation rules of the SIEM with location attributes and to see how this improved security confidence. An important co-consideration is the effect on privacy, where location information of an individual or system is propagated to a SIEM. With a theoretical consideration of the current privacy directives and regulations (specifically as promulgated in the European Union), privacy-supporting techniques are introduced to diminish the accuracy of the location information, while still enabling enhanced security analysis. In the context of a European Union FP7 project relating to next generation SIEMs, the results of this work have been implemented based on systems, data, techniques and resilient features of the MASSIF project. In particular, AlienVault has been used as a platform for augmentation of a SIEM, and an event set of several million events, collected over a three month period, has formed the basis for the implementation and experimentation. A "brute-force attack" misuse case scenario was selected to highlight the benefits of geolocation information as an enhancement to SIEM detection (and false-positive prevention). With respect to privacy, a privacy model is introduced for SIEM frameworks. This model uses as its basis the existing privacy legislation that is most stringent in terms of privacy. An analysis of the implementation and testing is conducted, focusing equally on data security and privacy, that is, assessing location-based information in enhancing SIEM capability in advanced security detection, and determining if privacy-enforcing procedures on geolocation in SIEMs and other meta-systems are achievable and enforceable. Opportunities for geolocation enhancing various security techniques are considered, specifically for solving misuse cases identified as existing problems in enterprise environments. In summary, the research shows that additional security confidence and insight can be achieved through the augmentation of SIEM event information with geolocation information. Through the use of spatial cloaking it is also possible to incorporate location information without compromising individual privacy.
Overall, the research reveals that there are significant benefits for SIEMs in making use of geolocation in their analysis calculations, and that this can be done in ways that remain acceptable from a privacy perspective when measured against prevailing privacy legislation and guidelines.
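A minimal sketch of the spatial-cloaking step mentioned above: before an event's coordinates reach the SIEM, they are snapped to a coarse grid so correlation rules can still reason about region-level anomalies without revealing a precise location. The grid size and event format are illustrative assumptions, not the MASSIF/AlienVault implementation.

```python
# A minimal sketch of spatial cloaking by snapping coordinates to a coarse grid.
# The event fields and 0.5-degree cell size are hypothetical examples.
def cloak(lat, lon, cell_deg=0.5):
    """Snap coordinates to the centre of a cell_deg x cell_deg grid cell."""
    snap = lambda x: round(x / cell_deg) * cell_deg
    return snap(lat), snap(lon)

event = {"user": "jdoe", "src_ip": "203.0.113.7", "lat": -33.9249, "lon": 18.4241}
event["lat"], event["lon"] = cloak(event["lat"], event["lon"])
print(event)   # location reduced to roughly 50 km resolution before SIEM correlation
```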
- Item (Open Access): An agent based layered framework to facilitate intelligent Wireless Sensor Networks (2011). Scholtz, Andre; Le, Hanh. Wireless Sensor Networks (WSNs) are networks of small, typically low-cost hardware devices which are able to sense various physical phenomena in their surrounding environments. These simple nodes are also able to perform basic processing and wirelessly communicate with each other. The power of these networks arises from their ability to combine the many vantage points of the individual nodes and to work together. This allows behaviour to emerge which is greater than the sum of the abilities of all the nodes in the network. The complexity of these networks varies based on the application domain and the physical phenomena being sensed. Although sensor networks are currently well understood and used in a number of real world applications, a number of limitations still exist. This research aims to overcome a number of issues faced by current WSNs, the largest of which is their monolithic or tightly coupled structure, which results in static and application-specific WSNs. We aim to overcome these issues by designing a dynamically reconfigurable system which is application neutral. The proposed system is also required to facilitate intelligence and be sufficiently efficient for low-power sensor node hardware.