Browsing by Subject "Computer science"
Now showing 1 - 8 of 8
- Item (Open Access): The design and implementation of an image processing system (1978). Miketinac, Jeanmary; Mitchell, H. Three of the branches of image processing are touched upon herein: image capture, enhancement and display. Image capture here means 'the making available of an image for further processes to be carried out on it'. This is accomplished by a set of general-purpose, interrelated I/O and file-handling routines, for convenience called the NUCLEUS, together with some tape I/O application tasks. The 'further processes' are carried out by the application routines, which are task-specific: each performs one operation only. Obtaining an image in the required form may therefore necessitate the use of several application tasks. Chapter 2 contains an introduction to digital images and the architecture of the system, the VICAR system being used as a guideline. Chapters 3 to 5 contain more detailed descriptions of the NUCLEUS and the application tasks, with the programming specifications relegated to the Appendices.
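  Since each application task performs exactly one operation, producing an image in a required form amounts to running several such tasks in sequence over a shared image representation. The sketch below is only a toy illustration of that chaining idea; the task names and the list-of-lists image representation are invented and bear no relation to the actual NUCLEUS routines.

  ```python
  # Toy illustration of chaining single-purpose image tasks, each performing
  # exactly one operation on a shared image representation (here a 2-D list).
  def invert(image, max_value=255):
      return [[max_value - px for px in row] for row in image]

  def threshold(image, level=128):
      return [[255 if px >= level else 0 for px in row] for row in image]

  def run_tasks(image, tasks):
      """Apply a sequence of one-operation tasks to obtain the required form."""
      for task in tasks:
          image = task(image)
      return image

  # Usage: invert a small image, then threshold the result.
  print(run_tasks([[10, 200], [90, 30]], [invert, threshold]))
  ```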
- Item (Open Access): A fast procedure for generating random numbers by a modification of the Marsaglia-Maclaren method (1974). Ioannou, Ioannis Elias; Brundrit, Geoff. Marsaglia and Maclaren combined two linear congruential generators to produce a pseudo-random number sequence uniformly distributed in the range [0, 2³⁵]. Their method is a considerable improvement over the primitive linear congruential method, at the cost of greater generation time. In this thesis, a simple modification of the Marsaglia-Maclaren method is presented which alleviates the increased generation time and gives a slight further increase in randomness. The modified generator is tested extensively using a variety of statistical tests and simulation problems.
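  The combination of two congruential generators referred to here is commonly realised as a shuffle table: one generator fills a table of candidate values and the second selects which entry to emit next. The sketch below shows that basic scheme; the table size, seeds and multipliers are illustrative assumptions, and it does not reproduce the thesis's specific modification.

  ```python
  # Hedged sketch of the Marsaglia-Maclaren combination method: generator A
  # fills a table, generator B picks which entry to emit. The constants are
  # illustrative parameters for a 35-bit linear congruential generator.
  TABLE_SIZE = 128
  M = 2 ** 35  # modulus, giving values in [0, 2**35)

  def lcg(seed, a, c):
      """Simple linear congruential generator yielding integers modulo M."""
      x = seed
      while True:
          x = (a * x + c) % M
          yield x

  def combined_generator(seed_a=12345, seed_b=54321):
      """Combine two LCGs via a shuffle table to improve apparent randomness."""
      gen_a = lcg(seed_a, a=5 ** 15, c=1)      # fills and refills the table
      gen_b = lcg(seed_b, a=3 ** 18, c=7)      # selects the table index
      table = [next(gen_a) for _ in range(TABLE_SIZE)]
      while True:
          j = next(gen_b) * TABLE_SIZE // M    # scale selector into [0, TABLE_SIZE)
          out = table[j]
          table[j] = next(gen_a)               # replace the used entry
          yield out

  # Usage: draw a few combined pseudo-random numbers.
  rng = combined_generator()
  print([next(rng) for _ in range(5)])
  ```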
- Item (Open Access): FLOW: a programming environment using diagrams (1984). Dooley, Jeffrey Walter Michael; Schach, S R. A graphical language is developed as a generalization of the structured flowcharts proposed by Nassi and Shneiderman. This language can be used in the specification of procedures, procedure interfaces and data structures. A software production support environment is then developed using this language, capable of producing systems in FORTRAN IV, COBOL and Pascal. The environment integrates new and existing tools and facilitates and encourages the design, coding and testing of well-structured systems using the methodology of stepwise refinement. A central component of the environment is a software production data base which holds the programme source as well as control information pertaining to the state of development of the system and the interfaces of the various programme modules being created within the system. A helpful syntax-directed editor for the graphical language is used to update the data base. Programme specifications are extracted from the data base by a number of post-processors which produce target code for the required high-level languages as well as system documentation. Some of the practical experience gained over a three-year period is described, and suggestions for the extension of the current environment and topics for future research are presented.
- Item (Open Access): The implementation of a front end processor for a subset of ADA (1983). Epstein, Jacqueline; MacGregor, Ken. ADA is a high-level programming language sponsored by the United States Department of Defense, primarily for use in real-time systems. It has all the structures present in modern algorithmic languages, with additional features for tasking. This thesis discusses the University of Cape Town implementation of a front end processor for a subset of ADA. A compiler generator package was used to construct a syntax checker for the ADA language, and a subset of this was extended through the semantic analysis phase to finally produce the intermediate code, DIANA. DIANA is the standard intermediate code for all ADA programs, and a representation for transfer between systems has been defined. DIANA is intended to function as an interface between the front and back ends of ADA compilers, and as an intermediate form which can be used by tools designed for ADA.
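  As a rough, hypothetical illustration of the front-end stages mentioned in the abstract (syntax checking, semantic analysis, and emission of an attributed intermediate form), the sketch below processes a single toy declaration. It is greatly simplified and does not reflect the real structure of DIANA or the compiler generator package used in the thesis.

  ```python
  # Hypothetical, greatly simplified front-end pipeline: syntax check, then
  # semantic analysis, then an attributed intermediate node (loosely in the
  # spirit of DIANA, not its actual structure).
  import re

  KNOWN_TYPES = {"INTEGER", "BOOLEAN"}          # toy symbol table of types

  def parse_declaration(source):
      """Syntax phase: accept declarations of the form 'name : TYPE;'."""
      m = re.fullmatch(r"\s*(\w+)\s*:\s*(\w+)\s*;\s*", source)
      if m is None:
          raise SyntaxError(f"not a declaration: {source!r}")
      return {"kind": "object_declaration", "name": m.group(1), "type_mark": m.group(2)}

  def analyse(node):
      """Semantic phase: attach attributes (here, just a resolved type)."""
      type_name = node["type_mark"].upper()
      if type_name not in KNOWN_TYPES:
          raise NameError(f"unknown type {node['type_mark']!r}")
      node["attributes"] = {"resolved_type": type_name}
      return node

  # Usage: the front end turns concrete syntax into an attributed node.
  print(analyse(parse_declaration("count : Integer;")))
  ```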
- Item (Open Access): Implementation of a structured PL/I subset compiler (1974). Goldberg, Colin Barry; MacGregor, Ken. The thesis describes the design and implementation of a PL/I subset compiler which produces a hypothetical stack code as output. The compiler was based on a Pascal compiler developed by N. Wirth and U. Ammann of the Eidgenössische Technische Hochschule, Zurich, and was itself written in the Pascal language. PL/I programs using the compiler can now be compiled and executed (interpretively) on the UNIVAC 1106 computer at U.C.T. The compiler was designed mainly as a teaching system. Its lesson is that structured programming is a powerful technique which facilitated its design and implementation.
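  The 'hypothetical stack code' is executed interpretively; a machine of that kind is essentially a loop over simple instructions acting on an operand stack, as in the minimal sketch below. The opcodes and encoding are invented for illustration and are not the code actually produced by the compiler.

  ```python
  # Minimal sketch of an interpreter for a hypothetical stack code: each
  # instruction either pushes a literal or pops operands, applies an
  # operation, and pushes the result. The opcodes here are invented.
  def run(program):
      stack = []
      for op, *args in program:
          if op == "PUSH":
              stack.append(args[0])
          elif op == "ADD":
              b, a = stack.pop(), stack.pop()
              stack.append(a + b)
          elif op == "MUL":
              b, a = stack.pop(), stack.pop()
              stack.append(a * b)
          elif op == "PRINT":
              print(stack.pop())
          else:
              raise ValueError(f"unknown opcode {op!r}")

  # Usage: evaluate (2 + 3) * 4 and print the result.
  run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",), ("PRINT",)])
  ```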
- Item (Open Access): Measuring the efficiency of software development in a data processing environment (1982). Van der Poel, Klaas Govert; Schach, S R. The development of software for data processing systems has, during the last 25 years, grown into a large industry. Thus the efficiency of the software development process is of major importance. It is indicative of the level of understanding of this activity that no generally accepted measure of the efficiency of software development currently exists. The purpose of this study is to derive such a measure from a set of principles, to determine criteria for the acceptability of this measure, to test it according to the criteria set, and to describe the efficiencies obtained in a number of software projects. The definition of data processing software is based on the concepts of Management Information Systems. Flows, files and processes are identified as the main structural elements of such systems. A model of the software development life cycle describes these elements in detail and identifies the main resources required. A review of the literature shows that lines of code per programmer man-month is commonly proposed as a measure of the efficiency of software development, but this measure is generally found to be inaccurate. Defining efficiency as the ratio of the prescribed results of a process to the total resources absorbed, a number of desirable properties of a practical measure of the efficiency of software development are then put forward. Based on these properties, a specific model is proposed which consists of the sum of flows, files and processes, divided by total project costs. Various other models are also considered. Validity and reliability are identified as the most important criteria for the acceptability of the proposed measure. Its reliability is tested in a separate experiment and found to be adequate. A field survey is set up to collect data to test its validity. The survey design chosen is a purposive sample of twenty software development projects. The main result of the survey is that the proposed model of efficiency is found to be valid. Other models investigated are less attractive. Efficiencies achieved in the twenty projects included in the sample are found to differ substantially from one another. Apart from achieving its specific objectives, the study also provides a perspective on some of the problems of software development. Several subjects for related research are identified.
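  Read literally, the proposed measure is a simple ratio, and the snippet below computes it for a hypothetical project purely to make the arithmetic concrete. The counts and cost figures are invented, and any weighting or normalisation present in the actual model is omitted.

  ```python
  # Hypothetical illustration of the proposed measure:
  # efficiency = (flows + files + processes) / total project cost.
  def development_efficiency(flows, files, processes, total_cost):
      """Delivered structural elements per unit of project cost."""
      return (flows + files + processes) / total_cost

  # Usage with invented figures: 40 flows, 15 files, 25 processes,
  # at a total project cost of 200 (in some fixed monetary unit).
  print(development_efficiency(flows=40, files=15, processes=25, total_cost=200))  # 0.4
  ```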
- Item (Open Access): The semantic database model as a basis for an automated database design tool (1983). Berman, Sonia; MacGregor, Ken. The automatic database design system is a design aid for network database creation. It obtains a requirements specification from a user and generates a prototype database. This database is compatible with the Data Definition Language of DMS 1100, the database system on the Univac 1108 at the University of Cape Town. The user interface has been constructed in such a way that a computer-naive user can submit a description of his organisation to the system. Thus it constitutes a powerful database design tool, which should greatly alleviate the designer's tasks of communicating with users, and of creating an initial database definition. The requirements are formulated using the semantic database model, and semantic information in this model is incorporated into the database as integrity constraints. A relation scheme is also generated from the specification. As a result of this research, insight has been gained into the advantages and shortcomings of the semantic database model, and some principles for 'good' data models and database design methodologies have emerged.
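  To make the idea of generating a schema from a higher-level specification concrete, the sketch below maps a toy entity-and-relationship description to relation schemes with simple key and reference constraints. It is a hypothetical illustration only; it uses neither the semantic database model's actual constructs nor the Data Definition Language of DMS 1100.

  ```python
  # Hypothetical sketch: derive relation schemes (with key and reference
  # constraints) from a toy entity/relationship description.
  spec = {
      "entities": {
          "DEPARTMENT": ["dept_no", "name"],
          "EMPLOYEE": ["emp_no", "name", "salary"],
      },
      # (child entity, parent entity) pairs: each child references its parent.
      "relationships": [("EMPLOYEE", "DEPARTMENT")],
  }

  def generate_schema(spec):
      schemes = []
      for entity, attrs in spec["entities"].items():
          schemes.append({"relation": entity, "attributes": list(attrs), "key": attrs[0]})
      for child, parent in spec["relationships"]:
          parent_key = spec["entities"][parent][0]
          for scheme in schemes:
              if scheme["relation"] == child:
                  scheme["attributes"].append(parent_key)        # propagate the parent key
                  scheme.setdefault("references", []).append((parent_key, parent))
      return schemes

  # Usage: print the generated relation schemes and their constraints.
  for s in generate_schema(spec):
      print(s)
  ```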
- Item (Open Access): The impact of behavioural diversity in the evolution of multi-agent systems robust to dynamic environments (2023). Hallauer, Scott; Nitschke, Geoff Stuart. Behavioural diversity has been shown to be beneficial in biological social systems, such as insect colonies and human societies, as well as in artificial systems such as large-scale swarm robotics applications. Evolutionary swarm robotics is a popular experimental platform for demonstrating the emergence of various social phenomena and collective behaviour, including behavioural diversity and specialisation. However, from an automated design perspective, the evolutionary conditions necessary to synthesise optimal collective behaviours that function across increasingly complex environments remain unclear. Thus, we introduce a comparative study of behavioural diversity maintenance methods (based on the MAP-Elites algorithm) versus those without behavioural diversity mechanisms (based on the steady-state genetic algorithm), as a means to evolve suitable degrees of behavioural diversity over increasingly difficult collective behaviour tasks. For this purpose, a collective sheep-dog herding task is simulated which requires the evolved robots (dogs) to capture a dispersed flock of agents (sheep) in a target zone. Different methods for evolving both homogeneous and heterogeneous swarms are investigated, including a novel approach for optimising swarm allocations of pre-evolved, behaviourally diverse controllers. In support of previous work, experimental results demonstrate that behavioural diversity can be generated without specific speciation mechanisms or geographical isolation in the task environment. Furthermore, we exhibit significantly improved task performance for heterogeneous swarms generated by our novel allocation evolution approach, when compared with separate homogeneous swarms using identical controllers. The introduction of this multi-step method for evolving swarm-controller allocations represents the major contribution of this work.
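  MAP-Elites, on which the diversity-maintenance methods above are based, keeps an archive holding the best solution found so far for each cell of a discretised behaviour space. The sketch below shows the core loop on an abstract problem; the behaviour descriptor, fitness function and parameters are placeholders rather than the thesis's herding task setup.

  ```python
  # Minimal MAP-Elites sketch: maintain, for each cell of a discretised
  # behaviour space, the highest-fitness solution seen so far.
  import random

  GRID = 10          # cells per behaviour dimension
  ITERATIONS = 5000

  def evaluate(genome):
      """Placeholder: return (fitness, 2-D behaviour descriptor in [0,1)^2)."""
      fitness = -sum((g - 0.5) ** 2 for g in genome)
      behaviour = (genome[0] % 1.0, genome[1] % 1.0)
      return fitness, behaviour

  def mutate(genome, sigma=0.1):
      return [g + random.gauss(0.0, sigma) for g in genome]

  archive = {}  # maps a behaviour cell (i, j) to (fitness, genome)
  for _ in range(ITERATIONS):
      if archive and random.random() < 0.9:
          _, parent = random.choice(list(archive.values()))   # select an elite
          child = mutate(parent)
      else:
          child = [random.random() for _ in range(4)]          # random bootstrap
      fitness, behaviour = evaluate(child)
      cell = tuple(min(int(b * GRID), GRID - 1) for b in behaviour)
      if cell not in archive or fitness > archive[cell][0]:
          archive[cell] = (fitness, child)                     # keep the new elite

  print(f"filled {len(archive)} of {GRID * GRID} behaviour cells")
  ```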