Browsing by Author "Nitschke, Geoff Stuart"

Now showing 1 - 13 of 13
  • Accelerated cooperative co-evolution on multi-core architectures (Open Access)
    (2018) Moyo, Edmore; Kuttel, Michelle; Nitschke, Geoff Stuart
    The Cooperative Co-Evolution (CC) model has been used in Evolutionary Computation (EC) to optimize the training of artificial neural networks (ANNs). This architecture has proven to be a useful extension to domains such as Neuro-Evolution (NE), the training of ANNs using concepts of natural evolution. However, the demand for real-time systems and the ability to solve more complex tasks has prompted a further need to optimize these CC methods. CC methods consist of a number of phases, but the evaluation phase remains the most compute-intensive, taking as long as weeks to complete for some complex tasks. This study uses NE as a case study: we design a parallel CC processing framework and implement optimized serial and parallel versions using the Go programming language. Go is a multi-core programming language with first-class constructs, channels and goroutines, that make it well suited to parallel programming. Our study focuses on Enforced Subpopulations (ESP) for single-agent systems and Multi-Agent ESP for multi-agent systems. We evaluate the parallel versions on the benchmark tasks of double pole balancing and prey-capture, for single- and multi-agent systems respectively, at increasing levels of task complexity. We observe a maximum speed-up of 20x for the parallel Multi-Agent ESP implementation over our single-core optimized version in the prey-capture task, and a maximum speed-up of 16x for ESP in the harder version of the double pole balancing task. We also observe linear speed-ups for the difficult versions of the tasks over a certain range of cores, indicating that the Go implementations are efficient and that the parallel speed-ups are better for more complex tasks. We find that in complex tasks, Cooperative Co-Evolution Neuro-Evolution (CCNE) methods are amenable to multi-core acceleration, which provides a basis for the study of even more complex CC methods in a wider range of domains.
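    The evaluation-phase fan-out this abstract describes is implemented in Go with goroutines and channels; the same pattern can be sketched in Python with a thread pool. This is a minimal analogue, not the thesis code: the sphere function below is a hypothetical stand-in for the expensive pole-balancing or prey-capture simulation.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def fitness(genome):
        # Placeholder evaluation: the sphere function stands in for the
        # (far more expensive) simulation used in the actual study.
        return -sum(g * g for g in genome)

    def evaluate_serial(population):
        return [fitness(g) for g in population]

    def evaluate_parallel(population, workers=4):
        # Fan the evaluation phase out over a worker pool; results come
        # back in population order, so the two versions agree exactly.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(fitness, population))

    population = [[0.5, -1.0], [2.0, 0.0], [0.1, 0.1]]
    assert evaluate_parallel(population) == evaluate_serial(population)
    ```

    Only the evaluation phase is parallelised here, mirroring the study's observation that it dominates the runtime of the CC loop.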
  • APIC: A method for automated pattern identification and classification (Open Access)
    (2017) Goss, Ryan Gavin; Nitschke, Geoff Stuart
    Machine Learning (ML) is a transformative technology at the forefront of many modern research endeavours. The technology is generating a tremendous amount of attention from researchers and practitioners, providing new approaches to solving complex classification and regression tasks. While concepts such as Deep Learning have existed for many years, the computational power for realising the utility of these algorithms in real-world applications has only recently become available. This dissertation investigated the efficacy of a novel, general method for deploying ML in a variety of complex tasks, in which feature selection, data-set labelling, model definition and training processes were determined automatically. Models were developed in an iterative fashion and evaluated using both training and validation data sets. The proposed method was evaluated using three distinct case studies, describing complex classification tasks that often require significant input from human experts. The results demonstrate that the proposed method compares favourably with, and often outperforms, less general methods designed specifically for each task. The method optimised feature selection, data-set annotation, model design and training, producing less complex yet comparably accurate classifiers with a lower dependency on computational power and human expert intervention. In chapter 4, the proposed method demonstrated improved efficacy over comparable systems, automatically identifying and classifying complex application protocols traversing IP networks. In chapter 5, the proposed method was able to discriminate between normal and anomalous traffic, maintaining accuracy in excess of 99%, while reducing false alarms to a mere 0.08%. Finally, in chapter 6, the proposed method discovered more optimal classifiers than those implemented by comparable methods, with classification scores rivalling those achieved by state-of-the-art systems.
This research concluded that it is possible to develop a fully automated, general method that exhibits efficacy in a wide variety of complex classification tasks with minimal expert intervention. The method and the various artefacts produced in each case study of this dissertation are thus significant contributions to the field of ML.
  • Automated Ligand Design in Simulated Molecular Docking - Optimising ligand binding affinity through the application of deep Q-learning to docking simulations (Open Access)
    (2022) Maccallum, Robert; Nitschke, Geoff Stuart
    The drug discovery process broadly follows the sequence of high-throughput screening, optimisation, synthesis, testing, and finally, clinical trials. We investigate methods for accelerating this process with machine learning algorithms that can automatically design novel ligands for biological targets. Recent work has demonstrated the viability of deep reinforcement learning, generative adversarial networks and auto-encoders. Here, we extend state-of-the-art deep reinforcement learning molecular modification algorithms and, through the integration of molecular docking simulations, apply them to automatically design novel antagonists for the adenosine triphosphate binding site of Plasmodium falciparum phosphatidylinositol 4-kinase, an enzyme essential to the malaria parasite's development within an infected host.
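    The deep Q-learning at the core of this approach reduces, in tabular form, to the standard one-step update. A minimal sketch follows; the state names, action, reward and hyper-parameters are illustrative assumptions, and the thesis replaces the table with a deep network scoring molecular modifications against docking results.

    ```python
    def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
        # One tabular Q-learning step: move Q(s, a) toward the bootstrapped
        # target r + gamma * max_a' Q(s', a').
        best_next = max(q[next_state].values())
        q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

    # Toy example: two 'molecule' states, one modification action; the
    # reward stands in for an improvement in simulated binding affinity.
    q = {"mol": {"add_group": 0.0}, "mol2": {"add_group": 0.0}}
    q_update(q, "mol", "add_group", reward=1.0, next_state="mol2")
    ```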
  • Design problem optimization with multi-objective evolutionary algorithms (Open Access)
    (2025) Toma, Farzana Haque; Nitschke, Geoff Stuart
    Complex design challenges involve conflicting objectives and require robust optimization techniques. They commonly arise in engineering, building design, robotics, drug design, and energy systems, among others, where balancing competing criteria is essential. Sunshade optimization is also a complex design problem as it has many conflicting objectives. Sunshades significantly influence a building's thermal performance, daylight quality, occupant comfort, and energy usage. However, traditional sunshade designs typically focus on a limited set of objectives, often ignoring broader considerations such as cost efficiency and outside-view obstruction. This thesis addresses that gap by implementing and comparing two advanced multi-objective evolutionary algorithms—Multi-Objective Covariance Matrix Adaptation Evolution Strategy (MOCMA-ES) and the Non-Dominated Sorting Genetic Algorithm II (NSGA-II)—to optimize sunshades across five key objectives: thermal comfort, energy consumption, Useful Daylight Illuminance (UDI), cost, and outside-view obstruction. A single-room office model was used as a test bed, with parameterized sunshades simulated through Honeybee, EnergyPlus, and Radiance. Experiments were conducted in four distinct climate zones—Cape Town (moderate), Nairobi (hot), Colombo (hot-humid), and Oslo (cold)—to ensure broad applicability. Both algorithms consistently outperformed traditional, manually designed sunshades in reducing thermal discomfort and energy usage while also improving UDI. Gains in cost and view preservation were more modest, primarily because minimal overhang sunshades can already be inexpensive and unobtrusive. Statistical tests indicated no systematic performance advantage of one algorithm over the other; NSGA-II tended to produce larger Pareto fronts, whereas MOCMA-ES explored a broader range of objective values.
The main contribution of this research is the use of two advanced multi-objective evolutionary algorithms to optimize sunshade designs based on five key objectives, tested in four climate zones spanning both the northern and southern hemispheres, demonstrating clear advantages over traditional, manually designed sunshades in achieving a balanced trade-off among competing performance criteria.
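    At the heart of both NSGA-II and MOCMA-ES is Pareto dominance over the objectives. A minimal sketch of the dominance test and first-front extraction follows, using minimisation and hypothetical two-objective (energy, view-obstruction) values rather than data from the thesis.

    ```python
    def dominates(a, b):
        # a dominates b if a is no worse in every objective (minimisation)
        # and strictly better in at least one.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points):
        # Keep only the non-dominated designs (the first Pareto front).
        return [p for p in points
                if not any(dominates(q, p) for q in points if q != p)]

    # Hypothetical (energy, view-obstruction) pairs for candidate sunshades.
    designs = [(3.0, 0.2), (2.0, 0.5), (1.0, 0.9), (2.5, 0.6)]
    # (2.5, 0.6) is dominated by (2.0, 0.5); the rest trade off the two goals.
    assert pareto_front(designs) == [(3.0, 0.2), (2.0, 0.5), (1.0, 0.9)]
    ```

    NSGA-II repeatedly peels off such fronts to rank a population; the "larger Pareto fronts" noted above are larger sets surviving this filter.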
  • Evolution of sun-shades outside building facades (Open Access)
    (2023) Coetzee, Leon; Nitschke, Geoff Stuart
    The research objective behind this study is to compare ‘traditional' architectural sun-shades with evolved sun-shades to determine which best blocks direct sunlight from entering a window. Two geographical locations are tested, along with two façade conditions for each. The sun path on the summer solstice provides the projected sun rays, measured every fifteen seconds. The sun-shades are made up of points in 3D space that form a ‘point cloud'. The points can be connected to form a surface and, from there, a geometric form. An Evolutionary Strategy, using self-adaptation, evolves the points within the point cloud to generate the sun-shade. Fitness for each point is determined by the number of sun rays the point can block from striking the window surface; furthermore, the point may not obstruct the view from the window under certain conditions. The mean fitness for ten ‘traditional' architectural sun-shade solutions, represented as point clouds, is compared to the mean fitness of the evolved sun-shade point cloud. This study provides two contributions to this field: first, it provides a method to measure the fitness of ‘traditional' sun-shade solutions and compares them with evolved solutions; second, it provides a form for the solution. Architecturally, the form the evolved sun-shade takes becomes interesting. Finally, some possible improvements and modifications are discussed.
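    The self-adaptive Evolutionary Strategy mentioned above carries a mutable step size with each individual. A minimal sketch of the log-normal self-adaptation rule follows; the 3D point, the learning rate tau and the RNG seed are illustrative assumptions, not parameters from the study.

    ```python
    import math
    import random

    def mutate(point, sigma, tau=0.5, rng=None):
        # Self-adaptation: mutate the strategy parameter sigma first
        # (log-normal rule), then use the new sigma to perturb the point.
        rng = rng or random.Random(42)
        new_sigma = sigma * math.exp(tau * rng.gauss(0, 1))
        child = [x + new_sigma * rng.gauss(0, 1) for x in point]
        return child, new_sigma

    # One point of a hypothetical sun-shade point cloud.
    child, new_sigma = mutate([0.0, 1.0, 2.0], sigma=0.3)
    ```

    Because sigma evolves alongside the solution, step sizes shrink or grow as selection favours them, without a hand-tuned schedule.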
  • Exploring the impact of novelty and objective-directed evolution in company with MAP-Elites and HyperNEAT (Open Access)
    (2025) Breytenbach, Jeremy; Nitschke, Geoff Stuart
    Collective robotics refers to the field of robotics that focuses on the coordination and collaboration of multiple agents to perform a task or solve a problem. The ability to automatically design controllers for such agents in a collective system is an attractive proposition. In this thesis we investigate the impact on performance of combining MAP-Elites with HyperNEAT while varying the evolutionary search directive between an objective search, a non-objective search, and a hybrid approach. Objective search refers to evolutionary algorithms that explicitly optimize a predetermined performance metric, whereas non-objective search refers to evolutionary approaches that primarily focus on exploration and diversity within the search space. HyperNEAT is an evolutionary method that makes use of indirect encoding to evolve agents. Whereas in typical evolutionary methods only the fittest agents survive to future generations, the inclusion of MAP-Elites allows not only the fittest agents but also those that demonstrate unique behaviour (the elites) to survive. MAP-Elites is referred to as an illumination algorithm because, by retaining these elite agents in the population, we expect to increase the chances of exploring, and thus illuminating, novel yet potentially high-performing regions of the search space. To evaluate these methods, we use Keep-away, a simulated collective robotics task within the RoboCup football framework, as a case study. In Keep-away, a team of "keeper" robots attempts to maintain possession of the football while opposing "taker" robots try to intercept it. For this study, we produced controllers for the keeper agents. This research report sheds light on how the combination of these methods affects the agents' performance and their ability to explore the behaviour search space.
The insights gained from this study will be valuable for researchers working to understand the value and applicability of combining illumination algorithms such as MAP-Elites with objective and non-objective search for improving performance in Keep-away and similar tasks.
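    MAP-Elites' illumination behaviour comes from a simple archive rule: one elite per behaviour-descriptor cell, replaced only when a fitter arrival lands in the same cell. A minimal sketch follows; the descriptors, fitness values and genome labels are hypothetical.

    ```python
    def map_elites_insert(archive, descriptor, fitness, genome):
        # Keep one elite per behaviour cell; replace only if fitter.
        cell = archive.get(descriptor)
        if cell is None or fitness > cell[0]:
            archive[descriptor] = (fitness, genome)

    archive = {}
    map_elites_insert(archive, (0, 1), 0.4, "a")
    map_elites_insert(archive, (0, 1), 0.9, "b")   # better: replaces "a"
    map_elites_insert(archive, (0, 1), 0.5, "c")   # worse: ignored
    map_elites_insert(archive, (2, 3), 0.1, "d")   # new cell: kept regardless
    assert archive[(0, 1)] == (0.9, "b") and archive[(2, 3)] == (0.1, "d")
    ```

    Because a low-fitness genome with a novel descriptor still claims its own cell, diverse behaviours survive alongside the overall fittest, which is exactly the mechanism contrasted against purely objective search above.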
  • Gaining Perspective with an Evolutionary Cognitive Architecture for Intelligent Agents (Open Access)
    (2022) Jones, David Griffin; Nitschke, Geoff Stuart
    This thesis targets the boundary surrounding the creation of strong AI using AutoML (Automatic Machine Learning) through the development of a general cognitive architecture called Brain Evolver. To do this, the notion of what intelligence is in the context of machines, and how it can practically be applied to physical intelligent agents, is explored. Core components that a potentially strong AI system must possess are identified and outlined: basic task completion, exploration, scalability, noise reduction, generalization, memory, and credit assignment. A wide set of tests targeting these components is used to assess the general capabilities of Brain Evolver, alongside higher-level tests that abstractly simulate space-rover mission tasks. The notion of perspective, and how it pertains to solving problems using appropriate levels of generalisation and historical information without explicitly storing all memory, is also a subtle focus. Brain Evolver was developed using hypothetical reasoning from the literature reviewed and uses a modular design. All modules are implemented with evolutionary approaches and include Deep Neural Evolution, Relevance Estimation and Value Calibration of Evolutionary Algorithm Parameters, Meta Learning Shared Hierarchies, Attention, Spiking Neural Networks, and Guided Epsilon Exploration (a novel method). The relevance of these components in different combinations is analysed in the varying contexts of each test environment in order to gain insights and contribute to the body of evolutionary research targeted towards general problem solvers. The predictions made regarding the effect each module would have on each type of task proved unreliable, and the program struggled with efficiency. However, Brain Evolver was still able to adequately solve all but one of the test environments in a completely autonomous way.
  • Machine learning in predictive analytics on judicial decision-making (Open Access)
    (2021) Pienaar, Celia; Nitschke, Geoff Stuart
    Legal professionals globally are under pressure to provide ‘more for less' – not an easy challenge in the era of big data, increasingly complex regulatory and legislative frameworks, and volatile financial markets. Although largely limited to information retrieval and extraction, Machine Learning applications targeted at the legal domain have to some extent become mainstream. The startup market is rife with legal technology providers, and many major law firms encourage research and development through formal legal technology incubator programs. Experienced legal professionals are expected to become technologically astute as part of their response to the ‘more for less' challenge, while those on track to enter the legal services industry are encouraged to broaden their skill sets beyond a traditional law degree. Predictive analytics applied to judicial decision-making raises interesting discussions around potential benefits to the general public, over-burdened judicial systems and legal professionals respectively. It is also associated with limitations and challenges, such as the manual input required (in the absence of automatic extraction and prediction) and domain-specific application. While there is no ‘one size fits all' solution when considering predictive analytics across legal domains or different countries' legal systems, this dissertation aims to provide an overview of Machine Learning techniques which could be applied in further research, to start unlocking the benefits associated with predictive analytics on a greater (and hopefully local) scale.
  • Neuro-evolution behavior transfer for collective behavior tasks (Open Access)
    (2018) Didi, Sabre Z; Nitschke, Geoff Stuart
    The design of effective, robust and autonomous controllers for multi-agent and multi-robot systems is a long-standing problem in the fields of computational intelligence and robotics. Whilst nature-inspired problem-solving techniques such as reinforcement learning (RL) and evolutionary algorithms (EA) are often used to adapt controllers for solving such tasks, the complexity of such tasks increases with the addition of more agents (or robots) in difficult environments. This is due to specific issues related to task complexity, such as the curse of dimensionality and the bootstrapping problem. Despite increasing attempts over the last decade to incorporate behavior (knowledge) transfer in machine learning, so that relevant behavior acquired in previous learning experiences can be used to boost task performance in complex tasks, using evolutionary algorithms to facilitate behavior transfer (especially multi-agent behavior transfer) has received little attention. It remains unclear how behavior transfer addresses issues such as the bootstrapping problem in complex multi-agent tasks (for example, RoboCup soccer). This thesis investigates the essential features constituting effective and efficient evolutionary search to augment behavior transfer for boosting the quality of evolved behaviors across increasingly complex tasks. Experimental results indicate that a hybrid of objective-based search and behavioral diversity maintenance in evolutionary controller design, coupled with behavior transfer, yields evolved behaviors of significantly higher quality across increasingly complex multi-agent tasks. The evolutionary controller design method thus addresses the bootstrapping problem for the given range of multi-agent tasks, whilst comparative controller design methods yield scant performance results.
  • Self-adapting simulated artificial societies (Open Access)
    (2023) Gower-Winter, Brandon; Nitschke, Geoff Stuart
    Agent-Based Models (ABM) are computational models that use autonomous agents to interact with, and adapt to, the environments they occupy. They are used in fields ranging from Economics to Ecology. More recently, ABM are being used in Computational Archaeology to aid in explaining the complex social phenomena that gave rise to ancient societies all over the world. Despite their potential, ABM are limited by the fact that their agents are rarely adaptive, despite adaptability often being touted as one of Agent-Based Modelling's greatest strengths. In this work we remedy this by investigating whether Machine Learning (ML) algorithms can be used as adaptive mechanisms for Agent-Based Models simulating complex social phenomena. We do this by comparing ML agents, developed using Reinforcement Learning and two Evolutionary Algorithms as adaptive mechanisms, to the rule-based agents typically found in contemporary literature. To achieve this, we create NeoCOOP, an Agent-Based Model designed to simulate the complex social phenomena that arise from resource-sharing agents in ancient societies. Through scenario experimentation, we examined the adaptive capacity of our four agent types by measuring their ability to maintain both population and resource levels in a virtual re-creation of Ancient Egypt during the Predynastic Period. Our results indicate that our ML agents (Utility and IE) perform better than, or on par with, even complex rule-based agents (Traditional and RBAdaptive). The IE agent type ranked first and was the most adaptive agent type. The Utility and RBAdaptive agents jointly ranked second and the Traditional agent ranked last. Overall, the findings of this work clearly show that adaptive agents are more suited to modelling the dynamics of complex environments than their rule-based counterparts.
More specifically, our results demonstrate that ML algorithms are particularly well suited as these adaptive mechanisms: they not only allowed our agents to maintain high population and resource levels, but also facilitated the emergence of additional phenomena such as resource-acquisition strategy specialization. It is our hope that the findings presented in this work push the state of the art such that future research endeavours use truly adaptive agents in their complex Archaeological ABM.
  • Spectral analysis of neutral evolution (Open Access)
    (2017) Shorten, David; Nitschke, Geoff Stuart; Eiben, Agoston
    It has been argued that much of evolution takes place in the absence of fitness gradients. Such periods of evolution can be analysed by examining the mutational network formed by sequences of equal fitness, that is, the neutral network. It has been demonstrated that, in large populations under a high mutation rate, the population distribution over the neutral network and average mutational robustness are given by the principal eigenvector and eigenvalue, respectively, of the network's adjacency matrix. However, little progress has been made towards understanding the manner in which the topology of the neutral network influences the resulting population distribution and robustness. In this work, we build on recent results from spectral graph theory and utilize numerical methods to enhance our understanding of how populations distribute themselves over neutral networks. We demonstrate that, in the presence of certain topological features, the population will undergo an exploration catastrophe and become confined to a small portion of the network. We further derive approximations, in terms of mutational biases, for the population distribution and average robustness in networks with a homogeneous structure. The applicability of these results is explored, first, by a detailed review of the literature in both evolutionary computing and biology concerning the structure of neutral networks. This is extended by studying the actual and predicted population distribution over the neutral networks of H1N1 and H3N2 influenza haemagglutinin during seasons between 2005 and 2016. It is shown that, in some instances, these populations experience an exploration catastrophe. These results provide insight into the behaviour of populations on neutral networks, demonstrating that neutrality does not necessarily lead to an exploration of genotype/phenotype space or an associated increase in population diversity.
Moreover, they provide a plausible explanation for conflicting results concerning the relationship between robustness and evolvability.
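    The eigenvector/eigenvalue pair described above can be computed by power iteration on the adjacency matrix. A minimal sketch follows, run on a hypothetical four-genotype neutral network (a triangle plus a pendant node), not on one of the influenza networks studied.

    ```python
    def principal_eig(adj, steps=500):
        # Power iteration on a non-negative adjacency matrix: v converges
        # to the principal eigenvector (the stationary population
        # distribution over the neutral network), and sum(A v) with v
        # normalised to sum 1 converges to the principal eigenvalue
        # (the average mutational robustness).
        n = len(adj)
        v = [1.0 / n] * n
        for _ in range(steps):
            w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
            total = sum(w)
            v = [x / total for x in w]
        lam = sum(sum(adj[i][j] * v[j] for j in range(n)) for i in range(n))
        return lam, v

    # Hypothetical network: genotypes 0-1-2 mutually connected, 3 attached to 0.
    adj = [[0, 1, 1, 1],
           [1, 0, 1, 0],
           [1, 1, 0, 0],
           [1, 0, 0, 0]]
    eigenvalue, distribution = principal_eig(adj)
    ```

    The best-connected genotype (node 0) ends up carrying the largest share of the population, illustrating how topology alone can concentrate a population on part of the network.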
  • The impact of behavioural diversity in the evolution of multi-agent systems robust to dynamic environments (Open Access)
    (2023) Hallauer, Scott; Nitschke, Geoff Stuart
    Behavioural diversity has been shown to be beneficial in biological social systems, such as insect colonies and human societies, as well as artificial systems such as large-scale swarm robotics applications. Evolutionary swarm robotics is a popular experimental platform for demonstrating the emergence of various social phenomena and collective behaviour, including behavioural diversity and specialisation. However, from an automated design perspective, the evolutionary conditions necessary to synthesise optimal collective behaviours that function across increasingly complex environments remain unclear. Thus, we present a comparative study of behavioural diversity maintenance methods (based on the MAP-Elites algorithm) versus methods without behavioural diversity mechanisms (based on the steady-state genetic algorithm), as a means to evolve suitable degrees of behavioural diversity over increasingly difficult collective behaviour tasks. For this purpose, a collective sheep-dog herding task is simulated, which requires the evolved robots (dogs) to capture a dispersed flock of agents (sheep) in a target zone. Different methods for evolving both homogeneous and heterogeneous swarms are investigated, including a novel approach for optimising swarm allocations of pre-evolved, behaviourally diverse controllers. In support of previous work, experimental results demonstrate that behavioural diversity can be generated without specific speciation mechanisms or geographical isolation in the task environment. Furthermore, we exhibit significantly improved task performance for heterogeneous swarms generated by our novel allocation evolution approach, when compared with separate homogeneous swarms using identical controllers. The introduction of this multi-step method for evolving swarm-controller allocations represents the major contribution of this work.
  • The performance of coevolutionary topologies in developing competitive tree manipulation strategies for symbolic regression (Open Access)
    (2020) Ombura, Martin; Nitschke, Geoff Stuart
    Computer bugs and tests are antagonistic elements of the software development process, with the former attempting to corrupt a program and the latter aiming to identify and fix the introduced faults. The automation of bug identification and repair through automated software testing is an area of research that has only seen success in niche areas of software development, but has failed to progress into general areas of computing due to the complexity and diversity of programming languages, codebases and developer coding practices. Unlike traditional engineering fields such as mechanical or civil engineering, where project specifications are carefully outlined and built towards, software engineering suffers from a lack of the global standardization required to “build from a spec”. In this study we investigate a coevolutionary, spec-based approach to dynamically damage and repair mathematical programs (functions). We opt for mathematical functions instead of software due to their functional similarities and simpler syntax and semantics. We utilize symbolic regression (SR) as a framework to analyze the error maximized by bugs and minimized by tests. We adopt a hybrid evolutionary algorithm (EA) that combines the tree-based phenotypic structure of genetic programming (GP) with the list-based chromosome of a genetic algorithm (GA), which permits the embedding of mathematical tree manipulation (MTM) strategies, as well as adequate selection mechanisms for search. Bugs utilize the MTM strategies in their chromosome to manipulate the input program (IP) with the aim of maximizing the error, while tests adopt a set of their own MTM strategies to repair the damaged program, using a spec generated from the IP to guide the repair process. Both adversarial agents are investigated in four common coevolutionary topologies: Hall of Fame (HoF), K-Random Tournaments (KRT), Round Robin (RR) and Single Elimination Tournament (SET).
We ran 1556 simulations, each generating a random polynomial that the bugs and tests would contend over in all four topologies. We observed that KRT with a low k value of 5 performs best from a computational and fitness standpoint for all bugs and tests. Bugs were dominant in nearly all topologies for all polynomial complexities, whereas tests struggled in the HoF, RR and SET topologies as the input programs became more complex. The competitive landscape, however, was quite chaotic, with the best individuals lasting a maximum of 14 generations out of 300 and the average top individuals lasting only 1 generation. This made predicting when the best individuals would be born nearly impossible, as the coevolutionary landscape changed rapidly and non-deterministically. The kinds of MTM strategies selected by both bugs and tests depended on the complexity of the input programs. For input programs that had negative polynomials, the best bugs opted to delete the program entirely and build a completely new tree, whereas the best tests were unable to select viable specialized strategies to repair such programs. For programs with large polynomial degrees, bugs opted for strategies that added nodes to their underlying GP tree, in the hope of damaging the input program further. Tests, on the other hand, implemented strategies to carefully reduce the complexity of the polynomial. Tests, however, frequently overcompensated when attempting to fix the fit bugs, leading to mediocre solutions.
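    In SR terms, the "spec" is the target polynomial and fitness is error against it, which bugs drive up and tests drive down. A minimal sketch of that error measure follows; the polynomial, the damaged variant and the sample points are hypothetical, not drawn from the study.

    ```python
    def sr_error(candidate, target, xs):
        # Mean squared error between a candidate program and the spec
        # polynomial: the quantity bugs maximise and tests minimise.
        return sum((candidate(x) - target(x)) ** 2 for x in xs) / len(xs)

    target = lambda x: x * x - 1    # hypothetical spec polynomial
    intact = lambda x: x * x - 1    # undamaged program matches the spec
    damaged = lambda x: x * x + 3   # a 'bug' shifted the constant term

    xs = [-2, -1, 0, 1, 2]
    assert sr_error(intact, target, xs) == 0.0
    assert sr_error(damaged, target, xs) == 16.0
    ```

    A test individual succeeds when its MTM strategies transform the damaged program back toward zero error against the spec.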

Contact us

Jill Claassen

Manager: Scholarly Communication & Publishing

Email: openuct@uct.ac.za

+27 (0)21 650 1263
