Browsing by Author "Nitschke, Geoff Stuart"
Now showing 1 - 11 of 11
- [Open Access] Accelerated cooperative co-evolution on multi-core architectures (2018). Moyo, Edmore; Kuttel, Michelle; Nitschke, Geoff Stuart.
  The Cooperative Co-Evolution (CC) model has been used in Evolutionary Computation (EC) to optimize the training of artificial neural networks (ANNs). This architecture has proven to be a useful extension to domains such as Neuro-Evolution (NE), the training of ANNs using concepts of natural evolution. However, the demand for real-time systems and for solving more complex tasks has prompted a further need to optimize these CC methods. CC methods consist of a number of phases, but the evaluation phase remains the most compute-intensive, for some complex tasks taking weeks to complete. This study uses NE as a case study: we design a parallel CC processing framework and implement optimized serial and parallel versions in the Go programming language. Go is a multi-core programming language whose first-class concurrency constructs, channels and goroutines, make it well suited to parallel programming. Our study focuses on Enforced Subpopulations (ESP) for single-agent systems and Multi-Agent ESP for multi-agent systems. We evaluate the parallel versions on the benchmark tasks of double pole balancing and prey-capture, for single- and multi-agent systems respectively, at increasing levels of task complexity. We observe a maximum speed-up of 20x for the parallel Multi-Agent ESP implementation over our single-core optimized version in the prey-capture task, and a maximum speed-up of 16x for ESP in the harder version of the double pole balancing task. We also observe linear speed-ups on the difficult versions of the tasks over a certain range of cores, indicating that the Go implementations are efficient and that the parallel speed-ups are better for more complex tasks.
We find that in complex tasks, the Cooperative Co-Evolution Neuro-Evolution (CCNE) methods are amenable to multi-core acceleration, which provides a basis for the study of even more complex CC methods in a wider range of domains.
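  The fan-out evaluation scheme this parallel framework describes can be sketched with Go's channels and goroutines. This is a minimal illustration only: the Genome type and the evaluate function below are hypothetical stand-ins, not the thesis's actual ESP task evaluation.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// Genome is a stand-in for one subpopulation member's weight vector.
type Genome []float64

// evaluate is a placeholder fitness function; a real ESP evaluation
// would simulate the pole-balancing or prey-capture task instead.
func evaluate(g Genome) float64 {
	sum := 0.0
	for _, w := range g {
		sum += w * w
	}
	return sum
}

// parallelEvaluate fans genomes out to one worker goroutine per core
// and collects fitness values back over a channel.
func parallelEvaluate(pop []Genome) []float64 {
	type result struct {
		idx     int
		fitness float64
	}
	jobs := make(chan int, len(pop))
	results := make(chan result, len(pop))
	var wg sync.WaitGroup

	for w := 0; w < runtime.NumCPU(); w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range jobs {
				results <- result{i, evaluate(pop[i])}
			}
		}()
	}
	for i := range pop {
		jobs <- i
	}
	close(jobs)
	wg.Wait()
	close(results)

	fitness := make([]float64, len(pop))
	for r := range results {
		fitness[r.idx] = r.fitness
	}
	return fitness
}

func main() {
	pop := []Genome{{1, 2}, {3, 4}, {0.5, 0.5}}
	fmt.Println(parallelEvaluate(pop))
}
```

  Because each evaluation is independent, this pattern scales with core count as long as evaluation dominates the generation cost, which matches the abstract's observation that speed-ups are better for more complex tasks.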
- [Open Access] APIC: A method for automated pattern identification and classification (2017). Goss, Ryan Gavin; Nitschke, Geoff Stuart.
  Machine Learning (ML) is a transformative technology at the forefront of many modern research endeavours. It is generating a tremendous amount of attention from researchers and practitioners, providing new approaches to solving complex classification and regression tasks. While concepts such as Deep Learning have existed for many years, the computational power for realising the utility of these algorithms in real-world applications has only recently become available. This dissertation investigated the efficacy of a novel, general method for deploying ML in a variety of complex tasks, in which the best feature selection, data-set labelling, model definition and training processes were determined automatically. Models were developed iteratively and evaluated using both training and validation data sets. The proposed method was evaluated on three distinct case studies, each a complex classification task that typically requires significant input from human experts. The results demonstrate that the proposed method matches, and often outperforms, less general methods designed specifically for each task. Feature selection, data-set annotation, model design and training processes were optimised by the method, producing less complex, comparably accurate classifiers with lower dependency on computational power and human expert intervention. In chapter 4, the proposed method demonstrated improved efficacy over comparable systems, automatically identifying and classifying complex application protocols traversing IP networks. In chapter 5, the proposed method discriminated between normal and anomalous traffic, maintaining accuracy in excess of 99% while reducing false alarms to a mere 0.08%.
  Finally, in chapter 6, the proposed method discovered better-performing classifiers than those implemented by comparable methods, with classification scores rivalling those achieved by state-of-the-art systems. This research concluded that a fully automated, general method, effective across a wide variety of complex classification tasks with minimal expert intervention, is possible. The method and the various artefacts produced in each case study of this dissertation are thus significant contributions to the field of ML.
- [Open Access] Automated Ligand Design in Simulated Molecular Docking - Optimising ligand binding affinity through the application of deep Q-learning to docking simulations (2022). Maccallum, Robert; Nitschke, Geoff Stuart.
  The drug discovery process broadly follows the sequence of high-throughput screening, optimisation, synthesis, testing, and finally, clinical trials. We investigate methods for accelerating this process with machine learning algorithms that can automatically design novel ligands for biological targets. Recent work has demonstrated the viability of deep reinforcement learning, generative adversarial networks and auto-encoders. Here, we extend state-of-the-art deep reinforcement learning molecular modification algorithms and, through the integration of molecular docking simulations, apply them to automatically design novel antagonists for the adenosine triphosphate binding site of Plasmodium falciparum phosphatidylinositol 4-kinase, an enzyme essential to the malaria parasite's development within an infected host.
- [Open Access] Evolution of sun-shades outside building facades (2023). Coetzee, Leon; Nitschke, Geoff Stuart.
  The research objective behind this study is to compare ‘traditional' architectural sun-shades with evolved sun-shades to determine which best blocks direct sunlight from entering a window. Two geographical locations are tested, along with two façade conditions for each. The sun path on the summer solstice provides the projected sun rays, measured every fifteen seconds. The sun-shades are made up of points in 3D space that form a ‘point cloud'. The points can be connected to form a surface and, from there, a geometric form. An Evolutionary Strategy using self-adaptation evolves the points within the point cloud to generate the sun-shade. Fitness for each point is determined by the number of sun rays the point can block from striking the window surface; furthermore, the point may not obstruct the view from the window under certain conditions. The mean fitness of ten ‘traditional' architectural sun-shade solutions, represented as point clouds, is compared to the mean fitness of the evolved sun-shade point cloud. This study provides two contributions to the field: first, a method to measure the fitness of ‘traditional' sun-shade solutions and compare them with evolved solutions; second, a form for the solution. Architecturally, the form the evolved sun-shade takes is itself of interest. Finally, some possible improvements and modifications are discussed.
- [Open Access] Gaining Perspective with an Evolutionary Cognitive Architecture for Intelligent Agents (2022). Jones, David Griffin; Nitschke, Geoff Stuart.
  This thesis explores the frontier of creating strong AI using AutoML (Automated Machine Learning) through the development of a general cognitive architecture called Brain Evolver. To do this, the notion of what intelligence is in the context of machines, and how it can practically be applied to physical intelligent agents, is explored. Core components that a potentially strong AI system must possess are identified and outlined: basic task completion, exploration, scalability, noise reduction, generalisation, memory, and credit assignment. A wide set of tests targeting these components is used to probe the general capabilities of Brain Evolver, along with higher-level tests that abstractly simulate space-rover mission tasks. The notion of perspective, and how it pertains to solving problems using appropriate levels of generalisation and historical information without explicitly storing all memory, is also a subtle focus. Brain Evolver was developed using hypothetical reasoning from the literature reviewed and uses a modular design. All modules are implemented with evolutionary approaches and include Deep Neural Evolution, Relevance Estimation and Value Calibration of Evolutionary Algorithm Parameters, Meta Learning Shared Hierarchies, Attention, Spiking Neural Networks, and Guided Epsilon Exploration (a novel method). The relevance of these components in different combinations is analysed in the varying contexts of each test environment in order to gain insights and contribute to the body of evolutionary research targeted towards general problem solvers. The predictions made regarding the effect each module would have on each type of task proved to be unreliable, and the program struggled with efficiency.
  However, Brain Evolver was still able to solve all but one of the test environments in a completely autonomous way.
- [Open Access] Machine learning in predictive analytics on judicial decision-making (2021). Pienaar, Celia; Nitschke, Geoff Stuart.
  Legal professionals globally are under pressure to provide ‘more for less' – not an easy challenge in the era of big data, increasingly complex regulatory and legislative frameworks, and volatile financial markets. Although largely limited to information retrieval and extraction, Machine Learning applications targeted at the legal domain have to some extent become mainstream. The start-up market is rife with legal technology providers, and many major law firms encourage research and development through formal legal technology incubator programs. Experienced legal professionals are expected to become technologically astute as part of their response to the ‘more for less' challenge, while those on track to enter the legal services industry are encouraged to broaden their skill sets beyond a traditional law degree. Predictive analytics applied to judicial decision-making raises interesting discussions around potential benefits to the general public, over-burdened judicial systems and legal professionals respectively. It is also associated with limitations and challenges, both around the manual input required (in the absence of automatic extraction and prediction) and around domain-specific application. While there is no ‘one size fits all' solution when considering predictive analytics across legal domains or different countries' legal systems, this dissertation provides an overview of Machine Learning techniques which could be applied in further research to start unlocking the benefits associated with predictive analytics on a greater (and hopefully local) scale.
- [Open Access] Neuro-evolution behavior transfer for collective behavior tasks (2018). Didi, Sabre Z; Nitschke, Geoff Stuart.
  The design of effective, robust and autonomous controllers for multi-agent and multi-robot systems is a long-standing problem in the fields of computational intelligence and robotics. Whilst nature-inspired problem-solving techniques such as reinforcement learning (RL) and evolutionary algorithms (EA) are often used to adapt controllers for such tasks, task complexity increases with the addition of more agents (or robots) in difficult environments, owing to issues such as the curse of dimensionality and the bootstrapping problem. Despite increasing attempts over the last decade to incorporate behavior (knowledge) transfer in machine learning, so that relevant behavior acquired in previous learning experiences can be used to boost task performance in complex tasks, using evolutionary algorithms to facilitate behavior transfer (especially multi-agent behavior transfer) has received little attention. It remains unclear how behavior transfer addresses issues such as the bootstrapping problem in complex multi-agent tasks (for example, RoboCup soccer). This thesis investigates and establishes the essential features of effective and efficient evolutionary search for augmenting behavior transfer, in order to boost the quality of evolved behaviors across increasingly complex tasks. Experimental results indicate that a hybrid of objective-based search and behavioral diversity maintenance in evolutionary controller design, coupled with behavior transfer, yields evolved behaviors of significantly higher quality across increasingly complex multi-agent tasks. The evolutionary controller design method thus addresses the bootstrapping problem for the given range of multi-agent tasks, whilst comparable controller design methods perform poorly.
- [Open Access] Self-adapting simulated artificial societies (2023). Gower-Winter, Brandon; Nitschke, Geoff Stuart.
  Agent-Based Models (ABM) are computational models in which autonomous agents interact with, and adapt to, the environments they occupy. They are used in fields ranging from Economics to Ecology. More recently, ABM are being used in Computational Archaeology to help explain the complex social phenomena that gave rise to ancient societies all over the world. Despite their potential, ABM are limited by the fact that their agents are rarely adaptive, even though adaptability is often touted as one of Agent-Based Modelling's greatest strengths. In this work we remedy this by investigating whether Machine Learning (ML) algorithms can serve as adaptive mechanisms for Agent-Based Models simulating complex social phenomena. We do this by comparing ML agents, developed using Reinforcement Learning and two Evolutionary Algorithms as adaptive mechanisms, to the rule-based agents typically found in contemporary literature. To achieve this, we create NeoCOOP, an Agent-Based Model designed to simulate the complex social phenomena that arise from resource-sharing agents in ancient societies. Through scenario experimentation, we examined the adaptive capacity of our four agent types by measuring their ability to maintain both population and resource levels in a virtual re-creation of Ancient Egypt during the Predynastic Period. Our results indicate that our ML agents (Utility and IE) perform better than, or on par with, even complex rule-based agents (Traditional and RBAdaptive). The IE agent type ranked first and was the most adaptive; the Utility and RBAdaptive agents jointly ranked second, and the Traditional agent ranked last. Overall, the findings of this work clearly show that adaptive agents are more suited to modelling the dynamics of complex environments than their rule-based counterparts.
  More specifically, our results demonstrate that ML algorithms are particularly well suited as these adaptive mechanisms: they not only allowed our agents to maintain high population and resource levels, but also facilitated the emergence of additional phenomena such as resource-acquisition strategy specialization. It is our hope that the findings presented in this work push the state of the art such that future research endeavours use truly adaptive agents in their complex archaeological ABM.
- [Open Access] Spectral analysis of neutral evolution (2017). Shorten, David; Nitschke, Geoff Stuart; Eiben, Agoston.
  It has been argued that much of evolution takes place in the absence of fitness gradients. Such periods of evolution can be analysed by examining the mutational network formed by sequences of equal fitness, that is, the neutral network. It has been demonstrated that, in large populations under a high mutation rate, the population distribution over the neutral network and the average mutational robustness are given by the principal eigenvector and eigenvalue, respectively, of the network's adjacency matrix. However, little progress has been made towards understanding the manner in which the topology of the neutral network influences the resulting population distribution and robustness. In this work, we build on recent results from spectral graph theory and utilize numerical methods to enhance our understanding of how populations distribute themselves over neutral networks. We demonstrate that, in the presence of certain topological features, the population will undergo an exploration catastrophe and become confined to a small portion of the network. We further derive approximations, in terms of mutational biases, for the population distribution and average robustness in networks with a homogeneous structure. The applicability of these results is explored, first, through a detailed review of the literature in both evolutionary computing and biology concerning the structure of neutral networks. This is extended by studying the actual and predicted population distributions over the neutral networks of H1N1 and H3N2 influenza haemagglutinin during the seasons between 2005 and 2016. It is shown that, in some instances, these populations experience an exploration catastrophe.
These results provide insight into the behaviour of populations on neutral networks, demonstrating that neutrality does not necessarily lead to an exploration of genotype/phenotype space or an associated increase in population diversity. Moreover, they provide a plausible explanation for conflicting results concerning the relationship between robustness and evolvability.
- [Open Access] The impact of behavioural diversity in the evolution of multi-agent systems robust to dynamic environments (2023). Hallauer, Scott; Nitschke, Geoff Stuart.
  Behavioural diversity has been shown to be beneficial in biological social systems, such as insect colonies and human societies, as well as in artificial systems such as large-scale swarm robotics applications. Evolutionary swarm robotics is a popular experimental platform for demonstrating the emergence of various social phenomena and collective behaviours, including behavioural diversity and specialisation. However, from an automated design perspective, the evolutionary conditions necessary to synthesise optimal collective behaviours that function across increasingly complex environments remain unclear. Thus, we introduce a comparative study of behavioural diversity maintenance methods (based on the MAP-Elites algorithm) versus methods without behavioural diversity mechanisms (based on the steady-state genetic algorithm), as a means to evolve suitable degrees of behavioural diversity over increasingly difficult collective behaviour tasks. For this purpose, a collective sheep-dog herding task is simulated, which requires the evolved robots (dogs) to capture a dispersed flock of agents (sheep) in a target zone. Different methods for evolving both homogeneous and heterogeneous swarms are investigated, including a novel approach for optimising swarm allocations of pre-evolved, behaviourally diverse controllers. In support of previous work, experimental results demonstrate that behavioural diversity can be generated without specific speciation mechanisms or geographical isolation in the task environment. Furthermore, we exhibit significantly improved task performance for heterogeneous swarms generated by our novel allocation-evolution approach, when compared with separate homogeneous swarms using identical controllers.
The introduction of this multi-step method for evolving swarm-controller allocations represents the major contribution of this work.
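  The MAP-Elites insertion rule at the heart of the diversity-maintenance methods above can be sketched in Go: candidates are binned by a behaviour descriptor, and each grid cell keeps only its fittest occupant. The one-dimensional descriptor grid here is a simplifying assumption, not the descriptor space used in the study.

```go
package main

import "fmt"

// Elite stores the best genome found so far for one cell of the
// behaviour-descriptor grid.
type Elite struct {
	Genome  []float64
	Fitness float64
	Filled  bool
}

// cellIndex discretises a behaviour descriptor in [0,1] into one of
// bins cells.
func cellIndex(descriptor float64, bins int) int {
	i := int(descriptor * float64(bins))
	if i >= bins {
		i = bins - 1
	}
	if i < 0 {
		i = 0
	}
	return i
}

// insert applies the MAP-Elites rule: a candidate occupies a cell only
// if the cell is empty or the candidate is fitter than the occupant.
func insert(grid []Elite, descriptor, fitness float64, genome []float64) bool {
	i := cellIndex(descriptor, len(grid))
	if !grid[i].Filled || fitness > grid[i].Fitness {
		grid[i] = Elite{Genome: genome, Fitness: fitness, Filled: true}
		return true
	}
	return false
}

func main() {
	grid := make([]Elite, 10) // 10 behaviour bins
	fmt.Println(insert(grid, 0.25, 1.0, []float64{0.1})) // empty cell: kept
	fmt.Println(insert(grid, 0.26, 0.5, []float64{0.2})) // same cell, weaker: rejected
	fmt.Println(insert(grid, 0.26, 2.0, []float64{0.3})) // same cell, fitter: kept
}
```

  Because competition happens only within a cell, behaviourally distinct controllers survive even when globally unfit, which is how the archive ends up holding the diverse controllers that the allocation-evolution step can then combine into heterogeneous swarms.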
- [Open Access] The performance of coevolutionary topologies in developing competitive tree manipulation strategies for symbolic regression (2020). Ombura, Martin; Nitschke, Geoff Stuart.
  Computer bugs and tests are antagonistic elements of the software development process, with the former attempting to corrupt a program and the latter aiming to identify and fix the introduced faults. The automation of bug identification and repair through automated software testing is an area of research that has seen success only in niche areas of software development; it has failed to progress into general areas of computing due to the complexity and diversity of programming languages, codebases and developer coding practices. Unlike traditional engineering fields such as mechanical or civil engineering, where project specifications are carefully outlined and built towards, software engineering suffers from a lack of the global standardization required to “build from a spec”. In this study we investigate a coevolutionary, spec-based approach to dynamically damage and repair mathematical programs (functions). We opt for mathematical functions instead of software due to their functional similarities and their simpler syntax and semantics. We utilize symbolic regression (SR) as a framework to analyze the error maximized by bugs and minimized by tests. We adopt a hybrid evolutionary algorithm (EA) that combines the tree-based phenotypic structure of genetic programming (GP) with the list-based chromosome of a genetic algorithm (GA), which permits the embedding of mathematical tree manipulation (MTM) strategies as well as adequate selection mechanisms for search. Bugs use the MTM strategies in their chromosome to manipulate the input program (IP) with the aim of maximizing the error, while tests adopt a set of their own MTM strategies to repair the damaged program, using a spec generated from the IP to guide the repair process.
  Both adversarial agents are investigated in four common coevolutionary topologies: Hall of Fame (HoF), K-Random Tournaments (KRT), Round Robin (RR) and Single Elimination Tournament (SET). We ran 1556 simulations, each generating a random polynomial that the bugs and tests had to contend over in all four topologies. We observed that KRT with a low k value of 5 performs best from a computational and fitness standpoint for all bugs and tests. Bugs were dominant in nearly all topologies for all polynomial complexities, whereas tests struggled in the HoF, RR and SET topologies as the input programs became more complex. The competitive landscape, however, was quite chaotic, with the best individuals lasting a maximum of 14 generations out of 300 and the average top individual lasting only one generation. This made it nearly impossible to predict when the best individuals would be born, as the coevolutionary landscape changed rapidly and non-deterministically. The kinds of MTM strategies selected by both bugs and tests depended on the complexity of the input programs. For input programs with negative polynomials, the best bugs opted to delete the program entirely and build a completely new tree, whereas the best tests were unable to select viable specialized strategies to repair such programs. For programs with large polynomial degrees, bugs opted for strategies that added nodes to their underlying GP tree, in the hope of damaging the input program further. Tests, on the other hand, implemented strategies to carefully reduce the complexity of the polynomial. Tests, however, frequently overcompensated when attempting to counter fit bugs, leading to mediocre solutions.