Browsing by Author "Meyer, Thomas"

Now showing 1 - 11 of 11
  • Adoption of ICT4D frameworks to support screening for depression in Nigerian universities (Open Access)
    (2018) Ojeme, Blessing Onuwa; Meyer, Thomas; Mbogho, Audrey
    Health is fundamental to development, and access to healthcare is a major health and development issue, particularly in developing countries where preventable diseases and premature deaths still inflict a high toll. In Nigeria, for instance, under-financing and the inefficient allocation of limited medical resources have led to quantitative and qualitative deficiencies in depression identification, and to growing gaps in facility and equipment upkeep. The focus of the present study is Nigerian university students, who are at higher risk of clinical depression than other populations. Besides a high crime rate, acute unemployment, terrorism, extreme poverty and serial outbreaks of disease, which are everyday life situations that trigger depression for a large proportion of the Nigerian population, Nigerian university students face the additional problems of poor living and academic conditions. These include constant accommodation problems and overcrowded lecture halls caused by a growing student population, recurrent disruptions of the academic calendar, heavy cigarette smoking and high levels of alcohol consumption. Effective prevention of medical conditions and access to healthcare resources are important factors that affect people's welfare and quality of life. Regular assessment for depression has been suggested as the first important step towards its early detection and prevention. Investigations revealed that, besides the peculiar shortage of mental health professionals in Nigeria, the near absence of modern diagnostic facilities has made the management of this potentially detrimental problem impossible. Given this national health problem, and that it would take some time before resources, especially human resources, can be mustered, several bodies have called for other viable means that take cognisance of the difficulties of accessing mental healthcare to be sought. This study is an attempt at exploring opportunities to increase flexibility in depression prevention and detection processes. The study investigated the effectiveness of computer-based methodologies, derived from machine learning and human-computer interaction techniques, for guiding the depression identification process in Nigerian universities. Probabilistic Bayesian networks were used to construct models from real depression datasets comprising 1798 data instances, collected from the mental health unit of the University of Benin Teaching Hospital (UBTH) and a primary care centre in Nigeria. The models achieved high performance on standard metrics, including 94.3% accuracy, 94.4% precision, 0.943 F-measure, 0.150 RMSE, 0.923 R and 92.2% ROC. The findings from the information gain and mutual information show high correlation between "depression" and "alcohol or other drug consumption", and between "depression" and "family support and availability of accommodation", but low correlation between "depression" and "cigarette smoking". The results also show high correlation between "depression" and a synergistic combination of "impaired function and alcohol and other drug consumption". Following the User-Centered Design approach, a desktop-based screening tool was developed for use by university academic staff, as a first step, for regular screening of staff and students for depression and, where necessary, scheduling appointments with the appropriate mental health authority for further diagnosis.
    Though the interesting results from the heuristic evaluations illuminate the challenges involved, they demonstrate the significance and relevance of end-user factors in the process of designing a computer-aided screening intervention, especially with respect to acceptance of the system for use in a non-clinical environment. The findings presented in this doctoral study provide compelling evidence of the huge potential that the collaboration of machine learning and usability techniques has for complementing available resources in the management of depression among university populations in Nigeria. It is hoped that, given the persistent challenges of depression, the findings will contribute to the ongoing global research encouraging the adoption of ICT4D frameworks for the prevention of more serious cases by empowering other populations for an early, first-line depression screening.
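    The abstract does not include the models themselves, but the overall pipeline it describes (learn a Bayesian classifier from screening records, then report accuracy, precision and F-measure) can be illustrated with a minimal Python sketch. The naive Bayes structure, the four binary indicators and the synthetic labels below are stand-in assumptions, not the thesis's actual network structure or dataset.

      # Illustrative sketch only: a naive Bayes classifier (the simplest Bayesian
      # network structure) trained on made-up screening answers, so the reported
      # metric types can be reproduced end to end.
      import numpy as np
      from sklearn.naive_bayes import BernoulliNB
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score, precision_score, f1_score

      rng = np.random.default_rng(0)

      # Hypothetical binary screening indicators (not the thesis questionnaire):
      # alcohol/drug use, impaired daily function, family support, accommodation.
      n = 1798
      X = rng.integers(0, 2, size=(n, 4))
      # Synthetic label loosely tied to the first two indicators, plus noise.
      y = ((X[:, 0] + X[:, 1] >= 2) ^ (rng.random(n) < 0.1)).astype(int)

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
      model = BernoulliNB().fit(X_train, y_train)
      pred = model.predict(X_test)

      print("accuracy :", accuracy_score(y_test, pred))
      print("precision:", precision_score(y_test, pred))
      print("F-measure:", f1_score(y_test, pred))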
  • An analysis of cybersecurity culture in an organisation managing Critical Infrastructure (Open Access)
    (2021) Parbhunath, Abraham; Meyer, Thomas; Leenen, Louise
    The 4th industrial revolution (4IR) is transforming the way businesses operate, making them more efficient and data-driven while also expanding the threat landscape brought on by the convergence of technologies, increasingly so for organisations managing critical infrastructure. Environments that traditionally operated entirely independently of networks and the internet are now connecting in ways that expose critical infrastructure to a new level of cyber-risk that must now be managed. Due to the stable nature of technologies and knowledge in traditional industrial environments, there is a misalignment of skills to emerging technology trends. Globally, cyber-crime attacks are on the rise, with Cisco reporting in 2018 that 31% of all respondents had seen a cyber-attack in their operational environment [1]. With up to 67% of breaches reported in the Willis Towers report attributed to employee negligence [2], the importance of cybersecurity culture is no longer in question in organisations managing critical infrastructure. Developing an understanding of the drivers for behaviours, attitudes and beliefs related to cybersecurity, and aligning these to an organisation's risk appetite and tolerance, is crucial to managing cyber-risk. There is a very divergent understanding of cyber-risk in the engineering environment. This study investigates employee perceptions, attitudes and values associated with cybersecurity and how these potentially affect their behaviour and, ultimately, the risk to the plant or organisation. Most traditional culture questionnaires focus on information security, with observations focussing more on social engineering, email hygiene and physical controls. This cybersecurity culture study was conducted to gain insight into people's beliefs, attitudes and behaviours related to cybersecurity, encompassing people, process and technology, with a focus on the operational technology environment in Eskom. Both technical (engineering and IT) and non-technical (business support) staff completed the questionnaire. The questionnaire was organised into four sections dealing with cybersecurity culture as it relates to individuals, processes and technology, leadership, and the organisation at large. The results of the analysis revealed that collaboration, information sharing, reporting of vulnerabilities, high dependence on and trust in technology, leadership commitment, vigilance, compliance, unclear processes and a lack of understanding around cybersecurity all contribute to the current levels of cybersecurity culture. Insights from this study will generate recommendations that will form part of a cybersecurity culture transformation journey.
  • An overview of KLM-style defeasible entailment (Open Access)
    (2020) Kaliski, Adam; Meyer, Thomas
    The use of formal logic to solve problems in artificial intelligence has a long history in the field. Information is represented in a formal language, which facilitates algorithmic reasoning about some domain knowledge. Traditionally, the algorithms used for these reasoning services are monotonic, meaning that adding knowledge never causes the retraction of an inference. A consequence is that if the knowledge in question contains examples that are exceptions to stated rules, then the entire knowledge base may become unsatisfiable. If the knowledge accurately represents the domain, such a result is undesirable. One solution is nonmonotonic reasoning, which encompasses patterns of defeasible or "common sense" reasoning that may retract conclusions upon the addition of new information to a knowledge base. One of the most prominent frameworks for nonmonotonic reasoning is the one defined by Kraus, Lehmann, and Magidor (KLM). The KLM framework has very desirable features both for the theoretical study of nonmonotonic reasoning and for implementation in AI applications. However, the current state of the KLM framework spans numerous papers over two decades of research. This poses a challenge for new researchers trying to understand the problems currently being studied, as well as to understand the framework well enough to extend it or apply it. This dissertation compiles the theoretical work done in this framework to provide a single point of reference for anyone wishing to understand the KLM framework, as well as how to define a defeasible entailment relation, using the homogenised terminology and notation now typical of the field. First, the propositional logic used as the base language is defined. Then, paralleling the way the framework was historically built up, a preferential semantics over that language is described, before the language itself is modified with a defeasible connective and a nonmonotonic entailment relation over the resulting language is introduced. Finally, recent extensions to this framework defining various classes of defeasible entailment are described. By the end of this dissertation, the reader should have a well-rounded understanding of the KLM framework, from classical logic to defeasible logic.
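    As a purely illustrative aid (an assumption-laden sketch, not material from the dissertation), the base-rank construction that underlies KLM-style rational closure can be brute-forced for a tiny propositional example; the atoms, conditionals and helper names below are invented for the example.

      # Minimal sketch of the base-rank construction behind rational closure,
      # brute-forced over propositional valuations for the classic "birds fly"
      # example: penguin conditionals end up more exceptional than the bird one.
      from itertools import product

      ATOMS = ["bird", "penguin", "flies"]

      def worlds():
          for values in product([False, True], repeat=len(ATOMS)):
              yield dict(zip(ATOMS, values))

      # Each defeasible conditional "if A then typically B" is a pair of formulas,
      # represented here as functions from a valuation to a truth value.
      conditionals = {
          "birds typically fly":          (lambda w: w["bird"],    lambda w: w["flies"]),
          "penguins typically are birds": (lambda w: w["penguin"], lambda w: w["bird"]),
          "penguins typically don't fly": (lambda w: w["penguin"], lambda w: not w["flies"]),
      }

      def exceptional(antecedent, conds):
          # An antecedent is exceptional w.r.t. a set of conditionals if it is false
          # in every valuation satisfying all their material counterparts (A -> B).
          return all(not antecedent(w) for w in worlds()
                     if all((not a(w)) or b(w) for a, b in conds.values()))

      # Base ranks: rank 0 holds the non-exceptional conditionals; exceptional ones
      # are pushed to higher ranks, which is how penguins override the general rule.
      ranks, remaining, level = {}, dict(conditionals), 0
      while remaining:
          lower = {n: c for n, c in remaining.items() if exceptional(c[0], remaining)}
          if len(lower) == len(remaining):
              break  # whatever is left would get an infinite rank
          ranks.update({n: level for n in remaining if n not in lower})
          remaining, level = lower, level + 1

      print(ranks)
      # {'birds typically fly': 0,
      #  'penguins typically are birds': 1, "penguins typically don't fly": 1}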
  • Defeasible justification for the KLM Framework (Open Access)
    (2023) Wang, Shun; Meyer, Thomas; Moodley, Deshendran
    Knowledge Representation (KR) and Reasoning are essential aspects of Artificial Intelligence (AI) as they allow AI systems to conduct logical reasoning. Most classical logics, such as Propositional Logic (PL), are monotonic, which means that adding new knowledge to a knowledge base cannot cause the retraction of a previously drawn conclusion. These classical logics cannot easily handle exceptions to typical scenarios. Defeasible reasoning is a type of non-monotonic reasoning, which allows the notion of “defeasible implication”. The Kraus, Lehmann, and Magidor (KLM) Framework is an extension of PL that can perform defeasible reasoning. The results of defeasible reasoning using the KLM Framework are often challenging to understand. Therefore, one needs a framework to justify conclusions drawn from defeasible reasoning. We propose a theoretical framework for defeasible justification using the KLM Framework and a software tool that implements the framework. The theoretical framework is based on an existing theoretical framework for Description Logic (DL) which we translate to PL. The defeasible justification algorithm uses the statement ranking required by the KLM-style form of defeasible entailment, known as rational closure. Classical justifications are computed based on materialised formulas (classical counterparts of defeasible formulas). The resulting classical justifications are converted to defeasible justifications based on the input knowledge base. We provide a software tool with a graphical user interface (GUI) that implements the algorithm. Given a defeasible knowledge base and a query, such that the knowledge base defeasibly entails the query, the program produces a set of justifications for the defeasible entailment. We use a set of representative examples to evaluate the defeasible justification algorithm and argue that its results conform to intuition. The same examples are used to confirm the correctness of the algorithm implementation.
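    To make the notion of a classical justification over materialised formulas concrete, here is a small brute-force sketch under invented atoms and statements; the actual tool works on defeasible knowledge bases and uses the rational-closure ranking described above rather than this naive enumeration.

      # Illustrative sketch (assumptions, not the thesis implementation): a
      # classical justification for "KB entails query" is a minimal subset of the
      # KB that still entails the query. For a handful of propositional atoms this
      # can be found by brute force over subsets and truth assignments.
      from itertools import combinations, product

      ATOMS = ["bird", "penguin", "robin", "wings"]

      def entails(formulas, query):
          # Classical entailment: every valuation satisfying all formulas satisfies the query.
          for values in product([False, True], repeat=len(ATOMS)):
              w = dict(zip(ATOMS, values))
              if all(f(w) for f in formulas) and not query(w):
                  return False
          return True

      # Materialised counterparts of defeasible statements (A |~ B becomes A -> B).
      kb = {
          "bird -> wings":   lambda w: (not w["bird"]) or w["wings"],
          "penguin -> bird": lambda w: (not w["penguin"]) or w["bird"],
          "robin -> bird":   lambda w: (not w["robin"]) or w["bird"],
      }
      query = lambda w: (not w["penguin"]) or w["wings"]   # penguin -> wings

      def justifications(kb, query):
          # Enumerate subsets by increasing size; keep only subsets that entail the
          # query and contain no previously found (hence smaller) justification.
          found = []
          for size in range(len(kb) + 1):
              for names in combinations(kb, size):
                  if entails([kb[n] for n in names], query) \
                          and not any(set(j) <= set(names) for j in found):
                      found.append(names)
          return found

      print(justifications(kb, query))   # [('bird -> wings', 'penguin -> bird')]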
  • The Bayesian Description Logic BALC (Open Access)
    (2018) Botha, Leonard; Meyer, Thomas; Peñaloza, Rafael
    Description Logics (DLs) that support uncertainty are not as well studied as their crisp alternatives. This limits their application in many real-world domains, which often require reasoning about uncertain or contradictory information. In this thesis we present the Bayesian Description Logic BALC, which takes existing work on Bayesian Description Logics and applies it to the classical Description Logic ALC. We define five reasoning problems for BALC: two versions of concept satisfiability (called total and partial respectively), knowledge base consistency, three subsumption problems (positive subsumption, p-subsumption, exact subsumption), instance checking, and the most likely context problem. Consistency, satisfiability, and instance checking have not previously been studied in the context of contextual Bayesian DLs, and as such this is new work. We then go on to provide algorithms that solve all of these reasoning problems, with the exception of the most likely context problem. We found that all reasoning problems in BALC are in the same complexity class as their classical variants, provided that the size of the Bayesian Network is included in the size of the knowledge base. That is, all reasoning problems mentioned above (excluding most likely context) are exponential in the size of the knowledge base and the size of the Bayesian Network.
  • Enriching deontic logic with typicality (Open Access)
    (University of Cape Town, 2020) Chingoma, Julian; Meyer, Thomas
    Legal reasoning is a method applied by legal practitioners to make legal decisions. For a given scenario, legal reasoning requires not only the facts of the scenario but also the legal rules to be enforced within it. Formal logic has long been used for reasoning tasks in many domains. Deontic logic is a logic that is often used to formalise legal scenarios, with its built-in notions of obligation, permission and prohibition. Within the legal domain, it is important to recognise that there are many exceptions and conflicting obligations. This motivates the enrichment of deontic logic not only with the notion of defeasibility, which allows for reasoning about exceptions, but also with a stronger notion of typicality that is based on defeasibility. KLM-style defeasible reasoning, introduced by Kraus, Lehmann and Magidor (KLM), is a logic system that employs defeasibility, while Propositional Typicality Logic (PTL) is a logic that serves the same role for the stronger notion of typicality. Deontic paradoxes are often used to examine deontic logic systems, as the scenarios arising from the paradoxes' structures produce undesirable results when desirable deontic properties are applied to them, despite the scenarios themselves seeming intuitive. This dissertation shows that KLM-style defeasible reasoning and PTL are both effective when applied to the analysis of the deontic paradoxes. We first present the background material, comprising propositional logic, which forms the foundation for the other logic systems, as well as KLM-style defeasible reasoning, deontic logic and PTL. We outline the paradoxes, along with their issues, within the presentation of deontic logic. We then show that for each of the two logic systems we can intuitively translate the paradoxes, satisfy many of the desirable deontic properties and produce reasonable solutions to the issues arising from the paradoxes.
  • Evaluation of clustering techniques for generating household energy consumption patterns in a developing country (Open Access)
    (2019) Toussaint, Wiebke; Moodley, Deshen; Meyer, Thomas
    This work compares and evaluates clustering techniques for generating representative daily load profiles that are characteristic of residential energy consumers in South Africa. The input data captures two decades of metered household consumption, covering 14 945 household years and 3 295 848 daily load patterns of a population with high variability across temporal, geographic, social and economic dimensions. Different algorithms, normalisation and pre-binning techniques are evaluated to determine the best clustering structure. The study shows that normalisation is essential for producing good clusters. Specifically, unit norm produces more usable and more expressive clusters than the zero-one scaler, which is the most common method of normalisation used in the domain. While pre-binning improves clustering results for the dataset, the choice of pre-binning method does not significantly impact the quality of clusters produced. Data representation, and especially the inclusion or removal of zero-valued profiles, is an important consideration in relation to the pre-binning approach selected. As in several previous studies, the k-means algorithm produces the best results. Introducing a qualitative evaluation framework facilitated the evaluation process and helped identify a top clustering structure that is significantly more usable than those that would have been selected based on quantitative metrics alone. The approach demonstrates how explicitly defined qualitative evaluation measures can aid in selecting a clustering structure that is more likely to have real-world application. To our knowledge, this is the first work that uses cluster analysis to generate customer archetypes from representative daily load profiles in a highly variable, developing-country context.
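    A hypothetical, minimal version of the kind of pipeline the study compares might look as follows: normalise each daily load profile to unit norm, cluster the profiles with k-means, and inspect the resulting clusters. The synthetic 24-hour profiles, the choice of l2 unit norm and k = 2 are illustrative assumptions, not the study's data or configuration.

      # Sketch only: fake daily profiles (rows = household-days, columns = 24
      # hourly readings) with a morning-peak shape and an evening-peak shape.
      import numpy as np
      from sklearn.preprocessing import normalize
      from sklearn.cluster import KMeans
      from sklearn.metrics import silhouette_score

      rng = np.random.default_rng(1)
      hours = np.arange(24)
      morning = np.exp(-0.5 * ((hours - 7) / 2.0) ** 2)
      evening = np.exp(-0.5 * ((hours - 19) / 2.0) ** 2)
      profiles = np.vstack([
          5 * morning + rng.normal(0, 0.2, (500, 24)),
          8 * evening + rng.normal(0, 0.2, (500, 24)),
      ]).clip(min=0)

      X = normalize(profiles)                    # unit-norm each profile (shape, not magnitude)
      kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

      print("cluster sizes      :", np.bincount(kmeans.labels_))
      print("silhouette score   :", round(silhouette_score(X, kmeans.labels_), 3))
      print("centroid peak hours:", kmeans.cluster_centers_.argmax(axis=1))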
  • Explanation for defeasible entailment (Open Access)
    (2020) Chama, Victoria; Meyer, Thomas
    Explanation facilities are an essential part of tools for knowledge representation and reasoning systems. Knowledge representation and reasoning systems allow users to capture information about the world and reason about it. They are useful for understanding entailments, which allow users to derive implicit knowledge that can be made explicit through inferences. Additionally, explanations assist users in debugging and repairing knowledge bases when conflicts arise. Understanding the conclusions drawn from logic-based systems is complex and requires expert knowledge, especially when defeasible knowledge bases are taken into account, for both expert and general users. A defeasible knowledge base represents statements that can be retracted because they refer to information in which there are exceptions to stated rules. That is, any defeasible statement is one that may be withdrawn upon learning of an exception. Explanations for classical logics, such as description logics, which are well-known formalisms for reasoning about information in a given domain, are provided through the notion of justifications. Simply providing or listing the statements that are responsible for an entailment is enough to justify it in the classical case. However, in the defeasible case, where entailed statements can be retracted, this is not adequate, because the way in which entailment is performed is more complicated than in the classical case. In this dissertation, we combine explanations with a particular approach to dealing with defeasible reasoning. We provide an algorithm to compute justification-based explanations for defeasible knowledge bases. It is shown that, in order to accurately derive justifications for defeasible knowledge bases, we need to establish the point at which conflicts arise by using an algorithm that computes a ranking of the defeasible statements. This means that only a portion of the knowledge is considered, because the statements that cause conflicts are discarded. The final algorithm consists of two parts: the first part establishes the point at which the conflicts occur, and the second part uses the information obtained from the first part to compute justifications for defeasible knowledge bases.
  • KLM-Style Defeasible Reasoning for Datalog (Open Access)
    (2022) Paterson-Jones, Guy; Meyer, Thomas; Casini, Giovanni
    In many problem domains, particularly those related to mathematics and philosophy, classical logic has enjoyed great success as a model of valid reasoning and discourse. For real-world reasoning tasks, however, an agent typically has only partial knowledge of its domain, and at most a statistical understanding of relationships between properties. In this context, classical inference is considered overly restrictive, and many systems for non-monotonic reasoning have been proposed in the literature to deal with these tasks. A notable example is the KLM framework, which describes an agent's defeasible knowledge qualitatively in terms of conditionals of the form "if A, then typically B". The goal of this research project is to investigate KLM-style semantics for defeasible reasoning over Datalog knowledge bases. Datalog is a declarative logic programming language designed for querying large deductive databases. Syntactically, it can be viewed as a computationally feasible fragment of first-order logic, so this continues a recent line of work in which the KLM framework is lifted to more expressive languages.
  • (Multilingual) Knowledge representation for epistemological access (Open Access)
    (2018) Antia, Mary-Jane; Meyer, Thomas
    In South Africa, the inability of many students to acquire knowledge and to successfully demonstrate acquired knowledge in assessments has, in part, been attributed to their low levels of proficiency in the language of learning and teaching (which is also the language of textbooks). This dissertation investigates the possibility of using knowledge representation to enhance epistemological access for learners, specifically by enhancing the understanding of school science by learners with limited proficiency in academic English. In the dissertation, sample texts from a life science textbook were modelled into entities and relations using the conceptual graphs formalism. While the labels for the entities were in English, the links between the entities were provided in both English and in the variety of Afrikaans called Kaaps. A knowledge-based application, using a Jupyter notebook and a graph database, was subsequently developed on the basis of the modelled texts. To test the impact of this graph-based resource, an experimental study was designed involving grade 10 learners at a Cape Town high school. One group was exposed to the graphically modelled content and another group was limited to the text content only. The null hypothesis was formulated as follows: there will be no difference between the performance scores of learners exposed to the knowledge modelled in the application and the scores of learners exposed only to the knowledge in text form without the model. On six of the seven questions, the experimental group performed better than the control group. This difference in performance was further tested using inferential statistics, which showed that the results were statistically significant. Given that the experimental group performed better than the control group, the null hypothesis was rejected. During interviews, participants' subjective experiences indicated that the graphically modelled knowledge allowed for a better understanding of the text. Although findings from larger studies are clearly required, the current study indicates that the implementation of graph-based knowledge systems is a promising means of intervening to enhance the understanding of English-language science textbooks by learners who may not be proficient in English.
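    Purely as an illustration of the entity-relation modelling described above, a conceptual-graph-style structure with bilingual relation labels could be sketched as follows. The biology content and the Afrikaans/Kaaps glosses are invented placeholders, not the study's modelled texts or labels.

      # Toy sketch: entities joined by labelled relations, with each relation label
      # stored in English and in an illustrative Afrikaans/Kaaps gloss, mirroring
      # the "English entity labels, bilingual links" design described above.
      from dataclasses import dataclass

      @dataclass
      class Relation:
          source: str
          target: str
          label_en: str
          label_kaaps: str

      graph = [
          Relation("cell", "nucleus",   "has a",    "het 'n"),
          Relation("cell", "cytoplasm", "has",      "het"),
          Relation("nucleus", "DNA",    "contains", "bevat"),
      ]

      def describe(graph, language="en"):
          # Render each edge as a simple statement, keeping entity labels in English
          # while switching the link label to the chosen language.
          for r in graph:
              label = r.label_en if language == "en" else r.label_kaaps
              print(f"{r.source} --[{label}]--> {r.target}")

      describe(graph, language="kaaps")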
  • Robust and cheating-resilient power auctioning on Resource Constrained Smart Micro-Grids (Open Access)
    (2018) Marufu, Mufudzi Anesu Chapman; Meyer, Thomas
    The principle of Continuous Double Auctioning (CDA) is known to provide an efficient way of matching supply and demand among distributed, selfish participants with limited information. However, the literature indicates that the classic CDA algorithms developed for grid-like applications are centralised and insensitive to processing resource capacity, which hinders their application on resource constrained smart micro-grids (RCSMG). A RCSMG loosely describes a micro-grid with distributed generators and demand controlled by selfish participants who have limited information, limited power storage capacity and low literacy, and who communicate over an unreliable infrastructure burdened by limited bandwidth and devices with low computational power. In this thesis, we design and evaluate a CDA algorithm for power allocation in a RCSMG. Specifically, we offer the following contributions towards power auctioning on RCSMGs. First, we extend the original CDA scheme to enable decentralised auctioning. We do this by integrating a token-based, mutual-exclusion (MUTEX) distributive primitive, which ensures the CDA operates at a reasonably efficient time and message complexity of O(N) and O(logN) respectively, per critical section invocation (auction market execution). Our CDA algorithm scales better and avoids the single point of failure problem associated with centralised CDAs (which could be used to adversarially provoke a breakdown of the grid marketing mechanism). In addition, the decentralised approach in our algorithm can help eliminate privacy and security concerns associated with centralised CDAs. Second, to handle CDA performance issues due to malfunctioning devices on an unreliable network (such as a lossy network), we extend our proposed CDA scheme to ensure robustness to failure. Using node redundancy, we modify the MUTEX protocol supporting our CDA algorithm to handle fail-stop and some Byzantine-type faults of sites. This yields a time complexity of O(N), where N is the number of cluster-head nodes, and a message complexity of O(logN + W), where W is the number of check-pointing messages. These results indicate that it is possible to add fault tolerance to a decentralised CDA, which guarantees continued participation in the auction while retaining reasonable performance overheads. In addition, we propose a decentralised consumption scheduling scheme that complements the auctioning scheme in guaranteeing successful power allocation within the RCSMG. Third, since grid participants are self-interested, we must consider the issue of power theft that is provoked when participants cheat. We propose threat models centred on cheating attacks aimed at foiling the extended CDA scheme. More specifically, we focus on the Victim Strategy Downgrade, Collusion by Dynamic Strategy Change, Profiling with Market Prediction, and Strategy Manipulation cheating attacks, which are carried out by internal adversaries (auction participants). Internal adversaries are participants who want to gain more benefits but have no interest in provoking a breakdown of the grid. However, their behaviour is dangerous because it could result in a breakdown of the grid. Fourth, to mitigate these cheating attacks, we propose an exception handling (EH) scheme, in which sentinel agents use allocative efficiency and message overheads to detect and mitigate forms of cheating. Sentinel agents are tasked with monitoring trading agents to detect cheating and reprimand the misbehaving participant.
    The overall message complexity expected under light demand is O(n log N). The detection and resolution algorithm is expected to run in linear time, O(M). In summary, the main aim of our study is achieved by designing a resilient and cheating-free CDA algorithm that is scalable and performs well on resource constrained micro-grids. With the growing popularity of the CDA and its resource allocation applications, specifically to low-resourced micro-grids, this thesis highlights further avenues for future research. First, we intend to extend the decentralised CDA algorithm to allow participants' mobile phones to connect (and reconnect) at different shared smart meters. Such mobility should guarantee the desired CDA properties, reliability and adequate security. Second, we seek to develop a simulation of the decentralised CDA based on the formal proofs presented in this thesis. Such a simulation platform can be used for future studies that involve decentralised CDAs. Third, we seek to find an optimal and efficient way in which the decentralised CDA and the scheduling algorithm can be integrated and deployed in a low-resourced smart micro-grid. Such an integration is important for system developers interested in exploiting the benefits of the two schemes while maintaining system efficiency. Fourth, we aim to improve the cheating detection and mitigation mechanism by developing an intrusion tolerance protocol. Such a scheme will allow continued auctioning in the presence of cheating attacks while incurring low performance overheads, for applicability in a RCSMG.
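    For readers unfamiliar with continuous double auctioning, the core matching rule that the thesis decentralises can be sketched as follows. The class, prices and participants are invented for illustration, and the token-based MUTEX layer, fault tolerance and cheating detection from the thesis are not modelled here.

      # Compact, assumption-based sketch of a CDA order book: bids and asks arrive
      # one at a time, and a trade clears whenever the best bid meets or exceeds
      # the best ask (cleared here at the standing ask price, one unit per order).
      import heapq

      class CDA:
          def __init__(self):
              self.bids = []    # max-heap of (-price, buyer)  -> best bid first
              self.asks = []    # min-heap of (price, seller)  -> best ask first
              self.trades = []

          def submit(self, participant, side, price):
              if side == "buy":
                  heapq.heappush(self.bids, (-price, participant))
              else:
                  heapq.heappush(self.asks, (price, participant))
              self._match()

          def _match(self):
              # Clear crossing orders while the best bid covers the best ask.
              while self.bids and self.asks and -self.bids[0][0] >= self.asks[0][0]:
                  _bid, buyer = heapq.heappop(self.bids)
                  ask_price, seller = heapq.heappop(self.asks)
                  self.trades.append((buyer, seller, ask_price))

      market = CDA()
      market.submit("household A", "sell", 1.20)   # offers one unit at 1.20
      market.submit("household B", "buy", 1.00)    # no cross yet
      market.submit("household C", "buy", 1.50)    # crosses the standing ask
      print(market.trades)                         # [('household C', 'household A', 1.2)]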