Browsing by Department "Department of Electrical Engineering"
Now showing 1 - 20 of 1127
- ItemOpen Access 3-Phase gate-turn-off thyristor inverter (1986) Kleyn, D A. The requirements of a standard 3-phase Induction Motor driven by a Voltage Source Inverter (VSI) are studied. A full 3-phase Variable Speed Drive (VSD) and its controller have been designed, constructed and tested. Gate Turn-Off Thyristors (GTOs) are used as the main switching elements in the Inverter stage of the Drive. The drive requirements of GTOs are studied in detail.
- ItemOpen Access A 3-phase Z-source inverter driven by a novel hybrid switching algorithm (2007) Malengret, Jean-Claude; Braae, Martin. A 3-phase Z-source inverter has been researched, designed, simulated, built and tested. The purpose of the inverter is to deliver 3-phase 400 VAC from a DC supply that can vary over a range of 20 to 70 Vdc. This is done with a Z-source inverter topology, which is a single-conversion method with no additional DC-to-DC boost converter. A novel DSP control algorithm allows the inverter to: run Space Vector Pulse Width Modulation (SV-PWM) for maximum DC bus voltage utilisation while boosting the DC bus during zero space vector states using shoot-through; transition seamlessly between modulation control and combined modulation/shoot-through control; and optimise efficiency and DC bus utilisation using Hybrid Space Vector Boost Pulse Width Modulation (HSVB PWM), which is unique to this dissertation. Such a system is particularly suited to fuel cell and especially wind turbine applications, where the DC bus voltage varies over a wide range and would otherwise require a DC-to-DC buck/boost converter to regulate the DC bus and maintain a steady 3-phase sinusoidal output. A further application is a general-purpose 3-phase inverter capable of operating on different standard DC bus voltages (e.g. 24, 36, 48 VDC). The benefits of a Z-source topology for these purposes are a reduction in high-power semiconductor components (e.g. power MOSFETs), a reduction in switching losses, and inherent shoot-through protection. Furthermore, the inverter is more robust in the sense that it is not vulnerable to spurious shoot-through, which could be disastrous in a traditional voltage-fed inverter.
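As a rough illustration of the boost requirement implied by the 20-70 Vdc input range (not taken from the dissertation), the sketch below uses the commonly cited Z-source relations, boost factor B = 1/(1 - 2*D0) for shoot-through duty D0 and peak phase voltage M*B*Vdc/2 for modulation index M, to estimate the boost needed at several bus voltages. The exact constants depend on the modulation convention, and the trade-off between M and D0 is only noted in a comment.

```python
# Rough numeric sketch, not the dissertation's algorithm. Assumes the commonly
# cited Z-source relations: boost factor B = 1/(1 - 2*D0) and peak phase voltage
# v_ph = M * B * Vdc / 2. In practice M and D0 trade off (e.g. M <= 1 - D0 for
# simple-boost control), so the low-voltage end is much harder than shown here.
import math

def required_boost(vdc, vll_rms=400.0, m=1.0):
    """Boost factor B needed for a 400 V line-to-line output at modulation index m."""
    v_ph_peak = vll_rms * math.sqrt(2.0 / 3.0)      # ~327 V peak line-to-neutral
    return 2.0 * v_ph_peak / (m * vdc)

for vdc in (20.0, 48.0, 70.0):
    b = required_boost(vdc)
    d0 = max(0.0, (1.0 - 1.0 / b) / 2.0)            # invert B = 1/(1 - 2*D0)
    print(f"Vdc = {vdc:4.0f} V -> required boost B = {b:5.2f}, shoot-through duty D0 = {d0:4.2f}")
```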
- ItemOpen Access 3D intrawall imaging using backprojection for synthetic aperture radar (SAR) (2024) Dass, Reevelen; Paine, Stephen. The Council for Scientific and Industrial Research (CSIR) has evolving synthetic aperture radar (SAR) capabilities in the C-band and the L-band. Currently, these capabilities are used to generate aerial landscape images; however, to explore the feasibility of using this technology in different environments, an experimental SAR system, referred to as the wall scanner, has been developed. The purpose of the wall scanner is to image the interior of a wall, revealing details of substructures inside it such as conduits and piping. This is done by moving the antenna system across the wall surface and forming SAR images using backprojection. The radar used two types of antennas: a log-periodic dipole array (LPDA) antenna and a horn antenna. The horn antenna performed well in the experiments, producing images with minimal artefacts. In contrast, the LPDA antenna did not perform as well, so its characteristics were investigated. The investigation revealed that the antenna did not function across the full frequency range specified by its manufacturer. This produced artefacts in the image; however, some of the effects of these artefacts were minimised by a series of preprocessing techniques. A variety of preprocessing techniques were used to improve image quality: in addition to compensating for the properties of the LPDA antenna, windowing and different methods of background subtraction were applied. It was difficult to compensate for the antenna issues in preprocessing; however, windowing and background subtraction had a significant effect on the images that were produced. Two postprocessing techniques were used: gradient-descent optimisation based on image contrast, and polarimetry. The developed gradient-descent optimiser was able to adjust automatically for the system group delay based on the contrast of the image. Polarimetric post-processing revealed that the horizontally transmitted, horizontally received (HH) and vertically transmitted, vertically received (VV) polarisations were effective in creating images in this environment; however, cross-polarisation in the form of horizontally transmitted, vertically received (HV) was not effective. The wall-scanning environment that was measured consisted of both drywall and a brick wall, split into three experiments. The experiments used different materials placed in front of a wall, behind the wall at a distance, and directly behind the wall. The wall scanner successfully created images for the three drywall experiments; however, the desired results for the brick wall were not achieved. For drywall, substructures placed directly behind the wall were more difficult to see because they were masked by the wall and its sidelobes. The materials scanned were a copper pipe, a PVC pipe, a wooden beam, and a highly reflective calibration target. The calibration target and the copper pipe performed well in all three experiments. The wooden beam did not perform as well, especially when placed directly behind the wall; however, it was still visible in all experiments. The PVC pipe performed the worst: it was only faintly visible in the experiments and was not visible when placed directly behind the wall.
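For readers unfamiliar with backprojection, the following minimal sketch shows the core of the algorithm the abstract refers to: every image pixel accumulates the range-compressed return from every antenna position, phase-corrected for the two-way path. It is not the wall-scanner code; the array names, centre frequency and nearest-bin lookup are illustrative assumptions.

```python
# Minimal time-domain backprojection sketch (illustrative, not the CSIR system).
# rp[k, :]   : range-compressed profile for antenna position k
# pos[k]     : (x, z) coordinates of antenna position k on the wall surface
# r_axis     : uniform range axis for the profiles
import numpy as np

def backproject(rp, pos, r_axis, grid_x, grid_z, fc=2.4e9, c=3e8):
    """Return a complex SAR image on the (x, z) pixel grid."""
    xx, zz = np.meshgrid(grid_x, grid_z)                  # pixel coordinates
    image = np.zeros(xx.shape, dtype=complex)
    for k, (ax, az) in enumerate(pos):                    # each antenna position
        r = np.hypot(xx - ax, zz - az)                    # range to every pixel
        idx = np.clip(np.searchsorted(r_axis, r), 0, len(r_axis) - 1)
        image += rp[k, idx] * np.exp(1j * 4 * np.pi * fc * r / c)  # two-way phase
    return image
```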
- ItemOpen Access 3D model reconstruction using photoconsistency (2007) Joubert, Kirk Michael; Nicolls, Fred; De Jager, Gerhard. Model reconstruction using photoconsistency refers to a method that creates a photohull, an approximate computer model, using multiple calibrated camera views of an object. The term photoconsistency refers to the concept that is used to calculate the photohull from the camera views. A computer model surface is considered photoconsistent if the appearance of that surface agrees with the appearance of the surface of the real-world object from all camera viewpoints. This thesis presents the work done in implementing some concepts and approaches described in the literature.
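A toy version of the photoconsistency test described above, for a single voxel: project the voxel into every calibrated view and keep it only if the sampled colours agree. The projection-matrix representation, the variance threshold and the neglect of occlusion are simplifying assumptions, not the thesis implementation.

```python
# Toy photoconsistency check for one voxel (illustrative only).
# cameras: list of 3x4 projection matrices; images: matching RGB or grey images.
import numpy as np

def photoconsistent(voxel_xyz, cameras, images, threshold=20.0):
    samples = []
    for P, img in zip(cameras, images):
        x = P @ np.append(voxel_xyz, 1.0)        # project homogeneous 3D point
        u, v = int(round(x[0] / x[2])), int(round(x[1] / x[2]))
        h, w = img.shape[:2]
        if 0 <= v < h and 0 <= u < w:            # ignore views where it falls off-image
            samples.append(np.atleast_1d(img[v, u]).astype(float))
    if len(samples) < 2:
        return True                               # cannot falsify with fewer than two views
    return np.std(np.stack(samples), axis=0).max() <= threshold
```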
- ItemOpen Access 3D reconstruction and camera calibration from 2D images (2000) Henrichsen, Arne; De Jager, Gerhard. A 3D reconstruction technique from stereo images is presented that needs minimal intervention from the user. The reconstruction problem consists of three steps, each of which is equivalent to the estimation of a specific geometry group. The first step is the estimation of the epipolar geometry that exists between the stereo image pair, a process involving feature matching in both images. The second step estimates the affine geometry, a process of finding a special plane in projective space by means of vanishing points. Camera calibration forms part of the third step in obtaining the metric geometry, from which it is possible to obtain a 3D model of the scene. The advantage of this system is that the stereo images do not need to be calibrated in order to obtain a reconstruction. Results for both the camera calibration and reconstruction are presented to verify that it is possible to obtain a 3D model directly from features in the images.
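The first step of this pipeline, estimating the epipolar geometry from feature matches, could today be sketched with OpenCV as below. This is a modern stand-in rather than the method used in the 2000 thesis; the detector choice (ORB), the matcher and the RANSAC thresholds are assumptions.

```python
# Fundamental-matrix estimation sketch using OpenCV (illustrative stand-in).
import cv2
import numpy as np

def estimate_fundamental(img1, img2):
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    keep = mask.ravel() == 1                     # RANSAC inlier matches
    return F, pts1[keep], pts2[keep]
```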
- ItemOpen Access A 500kHz-5MHz CW stepped frequency borehole tomographic imaging system (2001) Isaacson, Adam Rhett; Inggs, Michael. This dissertation involves a study of cross-borehole tomography. The mathematical and physical models of the Radon Transform are reviewed, and the entire cross-borehole tomographic process is simulated based on these models. The system specifications for the final design are based on the results from the simulation. The final design is then built and tested. Phase yields better-quality image reconstruction than amplitude, and hence a coherent system is a good choice. The system is frequency-to-frequency coherent for the entire transmit frequency range, which satisfies the main aim of this dissertation.
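As a pointer to the projection-reconstruction model being simulated, the snippet below runs a textbook Radon transform and its inverse with scikit-image. The dissertation's crosshole (borehole-to-borehole) geometry and stepped-frequency CW hardware differ from this parallel-beam example; it only illustrates the concept.

```python
# Textbook Radon forward/inverse demonstration (illustrative; not the crosshole
# geometry used in the dissertation).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.25)            # small test slowness model
theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(image, theta=theta)                    # simulated projections
reconstruction = iradon(sinogram, theta=theta)          # filtered backprojection
print("RMS reconstruction error:", np.sqrt(np.mean((reconstruction - image) ** 2)))
```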
- ItemOpen Access 9-Phase inverter-driven motor (1983) Hoffman, Keith Paul. The behaviour of a 9-phase squirrel-cage induction motor is studied when it is excited with unmodulated and chopper-modulated quasi-square phase voltages. A 9-phase bridge inverter, which produces the quasi-square waveforms, and its digital controller have been constructed and tested. A theoretical analysis is included, which shows the influence of phase current harmonics upon output torque.
- ItemOpen Access A comparison of three class separability measures (2004) Mthembu, N S; Greene, J R. Measures of class separability can provide valuable insights into data, and suggest promising classification algorithms and approaches in data mining. We compare three simple class separability measures used in supervised machine learning. Their relative effectiveness is evaluated through their functional relationships and through random projections of the data onto R^2 for visualization. We conclude that the simple direct class separability measure of a dataset is an easier and more informative measure of separability than the class scatter matrices approach, and that it correlates well with Thornton's separability index.
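Two of the kinds of measure being compared can be sketched as follows: a Thornton-style nearest-neighbour separability index and a scatter-matrix criterion trace(Sw^-1 Sb). These are generic textbook formulations used for illustration; the paper's exact definitions may differ in detail.

```python
# Illustrative class-separability measures (generic formulations, not the paper's code).
import numpy as np
from scipy.spatial.distance import cdist

def thornton_index(X, y):
    """Fraction of points whose nearest neighbour shares their class label."""
    y = np.asarray(y)
    d = cdist(X, X)
    np.fill_diagonal(d, np.inf)                 # exclude each point itself
    return np.mean(y[np.argmin(d, axis=1)] == y)

def scatter_measure(X, y):
    """trace(Sw^-1 Sb) computed from within- and between-class scatter matrices."""
    y = np.asarray(y)
    mean_all = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    Sb = np.zeros_like(Sw)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
    return np.trace(np.linalg.pinv(Sw) @ Sb)
```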
- ItemOpen Access A computerised electrical installation design tool (1986) Healy, Richard B; MaCLaren, S C
- ItemOpen Access A contact-implicit direct trajectory optimization scheme for the study of legged maneuverability (2022) Shield, Stacey; Patel, Amir. For legged robots to move safely in unpredictable environments, they need to be manoeuvrable, but transient motions such as acceleration, deceleration and turning have been the subject of little research compared to constant-speed gait. They are difficult to study for two reasons: firstly, the way they are executed is highly sensitive to factors such as morphology and traction, and secondly, they can potentially be dangerous, especially when executed rapidly or from high speeds. These challenges make manoeuvrability an ideal topic for study by simulation, as this allows all variables to be precisely controlled and puts no human, animal or robotic subjects at risk. Trajectory optimization is a promising method for simulating these manoeuvres, because it allows complete motion trajectories to be generated when neither the input actuation nor the output motion is known. Furthermore, it produces solutions that optimize a given objective, such as minimizing the distance required to stop, or the effort exerted by the actuators throughout the motion. It has consequently become a popular technique for high-level motion planning in robotics, and for studying locomotion in biomechanics. In this dissertation, we present a novel approach to studying motion with trajectory optimization, by viewing it more as "trajectory generation": a means of generating large quantities of synthetic data that can illuminate the differences between successful and unsuccessful motion strategies when studied in aggregate. One distinctive feature of this approach is the focus on whole-body models, which capture the specific morphology of the subject, rather than the highly simplified "template" models that are typically used. Another is the use of "contact-implicit" methods, which allow an appropriate footfall sequence to be discovered, rather than requiring that it be defined upfront. Although contact-implicit methods are not novel, they are not widely used, as they are computationally demanding, and unnecessary when studying comparatively predictable constant-speed locomotion. The second section of this dissertation describes innovations in the formulation of these trajectory optimization problems as nonlinear programming problems (NLPs). This "direct" approach allows the problems to be solved by general-purpose, open-source algorithms, making it accessible to scientists without the specialized applied mathematics knowledge required to solve NLPs. The design of the NLP has a significant impact on the accuracy of the result, the quality of the solution (with respect to the final value of the objective function), and the time required to solve the problem.
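To make the "direct" formulation concrete, here is a deliberately tiny transcription of a trajectory optimization problem as an NLP: a double integrator that must come to rest at a target with minimum actuator effort, with the dynamics imposed as equality constraints. The whole-body, contact-implicit problems in the dissertation add complementarity constraints for the contact forces and use large-scale solvers; this SciPy sketch shows only the structural skeleton.

```python
# Minimal direct-transcription sketch: double integrator, no contact (illustrative).
import numpy as np
from scipy.optimize import minimize

N, dt = 30, 0.1                                  # knot points and time step

def unpack(z):
    x = z[:2 * N].reshape(N, 2)                  # state (position, velocity) at each knot
    u = z[2 * N:]                                # force at each knot
    return x, u

def objective(z):
    _, u = unpack(z)
    return dt * np.sum(u ** 2)                   # minimise actuator effort

def dynamics_defects(z):
    x, u = unpack(z)                             # forward-Euler collocation defects
    return (x[1:] - x[:-1] - dt * np.column_stack([x[:-1, 1], u[:-1]])).ravel()

constraints = [
    {"type": "eq", "fun": dynamics_defects},
    {"type": "eq", "fun": lambda z: unpack(z)[0][0] - [0.0, 0.0]},   # start at rest at x = 0
    {"type": "eq", "fun": lambda z: unpack(z)[0][-1] - [1.0, 0.0]},  # stop at x = 1
]
sol = minimize(objective, np.zeros(3 * N), constraints=constraints, method="SLSQP")
print("converged:", sol.success, " effort:", sol.fun)
```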
- ItemOpen Access A fluid loop actuator for active spacecraft attitude control - A Parametric Sizing Model and the Design, Verification, Validation and Test with a Prototype on an Air Bearing (2019) Martens, Bas; Martinez, Peter. Active spacecraft attitude control using a pumped fluid as the inertial mass has potential advantages over reaction wheels, including high torque, lower power consumption, reduced jitter and prolonged lifetime. Previous work addressed conceptual and mission-specific control aspects, and one fluid loop has flown on a demonstration mission. In this dissertation, a parametric sizing model is developed that can optimize a fluid loop for any mission, based on pump capabilities and customer requirements. The model can be applied to circular, square and helical fluid loops, and includes the power consumption due to viscous friction. A configurable prototype was developed to verify the model, as well as a spherical air bearing to verify the rotational aspects of the various fluid loop configurations. The model was applied to various hypothetical missions. In conclusion, the fluid loop has the fundamental potential to replace reaction wheels in a wide variety of satellites above approximately 20 kg, if mass is carefully optimized and efforts are made to develop a suitable pump. This is considered worthwhile, as the actuator comes with many potential advantages.
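The basic physics behind a fluid-loop actuator can be sketched with first-order relations: for a circular loop the circulating fluid's angular momentum is H = (rho*A*2*pi*R)*v*R, and the available torque is its time derivative. The numbers and the circular-loop-only assumption below are illustrative; the dissertation's parametric model additionally covers square and helical loops, viscous losses and pump limits.

```python
# First-order fluid-loop relations (illustrative only, not the dissertation's model).
import math

def loop_angular_momentum(R, A, rho, v):
    """Angular momentum of fluid of density rho in a circular loop of radius R,
    tube cross-section A, circulating at flow speed v."""
    mass = rho * A * 2.0 * math.pi * R           # total fluid mass in the loop
    return mass * v * R                          # H = m * v * R  [N m s]

def loop_torque(R, A, rho, dv_dt):
    """Reaction torque from accelerating the flow: tau = dH/dt."""
    return (rho * A * 2.0 * math.pi * R) * R * dv_dt

# Example: 15 cm radius loop, 8 mm bore, water, flow accelerated at 0.5 m/s^2.
R, A, rho = 0.15, math.pi * 0.004 ** 2, 1000.0
print("H at 1 m/s :", loop_angular_momentum(R, A, rho, 1.0), "N m s")
print("torque     :", loop_torque(R, A, rho, 0.5), "N m")
```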
- ItemOpen Access A GPU based X-Engine for the MeerKAT Radio Telescope (University of Cape Town, 2020) Callanan, Gareth Mitchell; Winberg, Simon. The correlator is a key component of the digital backend of a modern radio telescope array. The 64-antenna MeerKAT telescope has an FX-architecture correlator consisting of 64 F-Engines and 256 X-Engines. These F- and X-Engines are all hosted on 128 custom-designed FPGA processing boards, known as SKARABs. One SKARAB X-Engine board hosts four logical X-Engines, ingests data at 27.2 Gbps over a 40 GbE connection, and correlates this data in real time. GPU technology has improved significantly since SKARAB was designed, and GPUs are now becoming viable alternatives to FPGAs in high-performance streaming applications. The objective of this dissertation is to investigate how to build a GPU drop-in replacement X-Engine for MeerKAT and to compare this implementation to a SKARAB X-Engine. This includes the construction and analysis of a prototype GPU X-Engine. The 40 GbE ingest, the GPU correlation algorithm and the software pipeline framework that links these two together were identified as the three main sub-systems to focus on. A number of different tools implementing these sub-systems were examined, with the most suitable ones chosen for the prototype. A prototype dual-socket system was built that could process the equivalent of two SKARABs' worth of X-Engine data. This prototype has two 40 GbE Mellanox NICs running the SPEAD2 library and a single Nvidia GeForce 1080 Ti GPU running the xGPU library. A custom pipeline framework built on top of the Intel Threading Building Blocks (TBB) library was designed to facilitate the flow of data between these sub-systems. The prototype system was compared to two SKARABs. For an equivalent amount of processing, the GPU X-Engine cost R143 000 while the two SKARABs cost R490 000. The power consumption of the GPU X-Engine was more than twice that of the SKARABs (400 W compared with 180 W), while requiring only half as much rack space. GPUs as X-Engines were found to be more suitable than FPGAs when cost and density are the main priorities; when power consumption is the priority, FPGAs should be used. When running eight logical X-Engines, 85% of the prototype's CPU cores were used while only 75% of the GPU's compute capacity was utilised. The main bottleneck on the GPU X-Engine was on the CPU side of the server. This report suggests that the next iteration of the system should offload some CPU-side processing to the GPU and double the number of 40 GbE ports, which could potentially double the system throughput. When considering methods to improve this system, an FPGA/GPU hybrid X-Engine concept was developed that would combine the power-saving advantage of FPGAs with the low cost-to-compute ratio of GPUs.
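The X-Engine's core operation, accumulating per-channel visibility matrices from antenna voltages, can be sketched in a few lines of NumPy. The real system streams packets with SPEAD2 and correlates on the GPU with xGPU; the array layout and sizes here are illustrative assumptions.

```python
# What an X-Engine computes, in NumPy form (illustrative; the real system uses xGPU).
import numpy as np

def correlate(voltages):
    """voltages: complex array [time, channel, antenna] -> visibilities [channel, ant, ant]."""
    n_time, n_chan, n_ant = voltages.shape
    vis = np.zeros((n_chan, n_ant, n_ant), dtype=complex)
    for t in range(n_time):
        for c in range(n_chan):
            v = voltages[t, c]
            vis[c] += np.outer(v, v.conj())      # accumulate the visibility matrix
    return vis

rng = np.random.default_rng(0)
dummy = rng.normal(size=(256, 4, 64)) + 1j * rng.normal(size=(256, 4, 64))
print(correlate(dummy).shape)                    # (4, 64, 64): one matrix per channel
```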
- ItemOpen Access A machine vision-based approach to measuring the size distribution of rocks on a conveyor belt (2004) Mkwelo, Simphiwe; de Jager, Gerhard. This work involves the development of a vision-based system for measuring the size distribution of rocks on a conveyor belt. The system has applications in the automatic control and optimization of milling machines, and in the selection of optimal blasting methods in the mining industry. Rock size is initially assumed to be the projected rock surface area, due to the constraint imposed by the 2D nature of images. This measurement is facilitated by locating connected rock-edge pixels. Rock edge detection is achieved using a watershed-based segmentation process. This process involves image pre-filtering with edge-preserving filters at various degrees of filtering. The output of each filtering stage is retained, and marker-driven watersheds are applied to each output, resulting in traces of detected rock boundaries. Watershed boundary selection is then applied to select the boundaries most likely to be rock edges, based on rock features. Finally, rock recognition using feature classification is applied to remove non-rock watershed boundaries. The projected rock area distribution of a test set is measured and compared to the corresponding projected areas of manually segmented images. The obtained distributions are found to be similar, with an RMS error of 2.37% on the test set. Finally, sieve data is collected in the form of actual rock size distributions, and a quantitative comparison between the actual and machine-measured distributions is performed. The overall quantitative result is that the two rock size distributions are significantly different. However, after incorporating a stereology-based correction, hypothesis tests on a 3 m belt-cut test set show that the obtained distributions are similar.
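A compact marker-driven watershed segmentation of the kind described can be sketched with scikit-image as below. The filter choices, the marker rule and the use of region pixel counts as projected area are illustrative stand-ins; the thesis pipeline adds edge-preserving pre-filtering at several strengths, watershed-boundary selection and rock/non-rock classification.

```python
# Marker-driven watershed sketch with scikit-image (illustrative stand-in).
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel, rank
from skimage.morphology import disk
from skimage.segmentation import watershed

def segment_rocks(gray):
    """gray: uint8 greyscale image of rocks on the belt -> (label image, region areas)."""
    denoised = rank.median(gray, disk(5))              # smoothing before edge detection
    gradient = sobel(denoised)                         # edges act as the watershed relief
    markers = rank.gradient(denoised, disk(5)) < 10    # flat regions seed the basins
    markers, _ = ndi.label(markers)
    labels = watershed(gradient, markers)              # one label per candidate rock
    areas = np.bincount(labels.ravel())[1:]            # projected area per region (pixels)
    return labels, areas
```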
- ItemOpen Access A microcomputer controller for a nylon spinning machine (1985) Kirk, Terence Enfield; Braae, Martin. This thesis shows how a new type of controller for a nylon spinning machine was developed from an initial specification. The controller is a component in a loosely coupled feedback system which reads two tachometer pulse trains and various plant interlocks, and produces two pulse trains which are used to control two solid-state variable-frequency, variable-voltage inverters and their AC motors. The specification calls for 24 controllers to be linked to a PDP 11/23 host computer which holds a library of operating parameters that can be downloaded into each control unit by an operator. After examining the requirements of the system, a microcomputer implementation was chosen as best meeting the needs of the project. Elsewhere in the plant several earlier attempts at using microcomputers as dedicated controllers had been made, with rather poor results. Consideration of the future requirements of the company showed that there was a clear role for these controllers and a clear need to define standards for their development and implementation, so a survey of the company's requirements was done, on the basis of which a standard was adopted. The thesis covers all system-related aspects of the project, from the initial selection of a microcomputer system and software development system to the design and implementation of the controller.
- ItemOpen Access A multi-user process interface system for a process control computer (1983) Sherlock, Barry Graham; Bradlow, H. This thesis describes a system to implement a distributed multi-user process interface to allow the PDP-11/23 computer in the Electrical Engineering department at UCT to be used for process control. The use of this system is to be shared between postgraduate students for research and undergraduates doing real-time control projects. The interface may be used concurrently by several users, and access is controlled in such a way as to prevent users' programs from interfering with one another. The process interface hardware used was a GEC Micro-Media system, which is a stand-alone process interface system communicating with a host (the PDP-11/23) via a serial line. Hardware to drive a 600-metre serial link at 9600 baud between the PDP-11/23 and the Media interface was designed and built. The software system on the host, written in RTL/2, holds all data from the interface in a resident common database and continually updates it. Access to the interface by applications programs is done indirectly by reading and writing to the database, for which purpose a library of user interface routines is provided. To allow future expansion and modification of the Media interface, software (also written in RTL/2) for an LSI-11 minicomputer interfaced to the Media bus was developed, which emulates the operation of the GEC proprietary Micro-Media software. A program to download this software into the LSI-11 was written. A suite of diagnostic programs enables testing of the system hardware and software at various levels. To ease testing, teaching, and applications programming, a general-purpose simulation package for the simulation of analogue systems was developed, as well as graphics routines for use with a Tektronix 4010 plotting terminal. A real-time computing project for a class of undergraduates was run in 1983. This project made extensive use of the system and demonstrated its viability.
- ItemOpen Access A novel method for power system stabilizer design (2003) Chen, Lian; Petroianu. Power system stability is defined as the condition of a power system that enables it to remain in a state of operating equilibrium under normal operating conditions and to regain an acceptable state of equilibrium after being subjected to a finite disturbance. In the evaluation of stability, the focus is on the behavior of the power system when subjected to both large and small disturbances. Large disturbances are caused by severe changes in the power system, e.g. a short-circuit on a transmission line, loss of a large generator or load, or loss of a tie-line between two systems. Small disturbances in the form of load changes take place continuously, requiring the system to adjust to the changing conditions. The system should be capable of operating satisfactorily under these conditions and of successfully supplying the maximum amount of load. This dissertation deals with the use of Power System Stabilizers (PSS) to damp electromechanical oscillations arising from small disturbances. In particular, it focuses on three issues associated with the damping of these oscillations: ensuring robustness of the PSS under changing operating conditions, maintaining or selecting the structure of the PSS, and coordinating multiple PSS to ensure global power system robustness. To address these issues, a new PSS design/tuning method has been developed. The method, called sub-optimal H∞ PSS design/tuning, is based on H∞ control theory. For its implementation, various standard optimization methods, such as Sequential Quadratic Programming (SQP), were investigated. However, power systems typically have multiple "modes" that make the optimization problem non-convex in nature. To overcome this non-convexity, the optimization algorithm embedded in the sub-optimal H∞ PSS design/tuning method is based on Population Based Incremental Learning (PBIL). The new method has a number of important features: it allows the designer to select the order and structure of the PSS; it can be applied to the full model of the power system, so there is no need to use a reduced-order model; it is based on H∞ control theory, with robustness as a key objective; it ensures adequate damping of the electromechanical oscillations of the power system; and it is suitable for optimizing existing PSS in a power system. The method improves the overall damping of the system and does not affect the observability of the system poles. To demonstrate the effectiveness of the sub-optimal H∞ PSS design/tuning method, a number of case studies are presented in the thesis. The method is also extended to allow for the coordinated tuning of multiple controllers, which lets the designer focus on the overall stability and robustness of the power system rather than only on the local stability of the system as viewed from the generators where the controllers are connected.
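A generic PBIL loop of the kind used to search the stabilizer parameter space is sketched below. The bit encoding, learning rate and the `damping_cost` surrogate objective are hypothetical placeholders; in the dissertation the objective is the H∞-based damping/robustness criterion evaluated on the power system model.

```python
# Generic Population Based Incremental Learning (PBIL) sketch (illustrative).
import numpy as np

def pbil(n_bits, cost, iters=200, pop=50, lr=0.1, seed=1):
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                     # probability of each bit being 1
    best, best_cost = None, np.inf
    for _ in range(iters):
        samples = (rng.random((pop, n_bits)) < p).astype(float)
        costs = np.array([cost(s) for s in samples])
        elite = samples[np.argmin(costs)]
        p = (1 - lr) * p + lr * elite            # pull the distribution toward the elite
        if costs.min() < best_cost:
            best, best_cost = elite, costs.min()
    return best, best_cost

def damping_cost(bits):
    """Hypothetical surrogate objective: decode four PSS gains and score them."""
    gains = bits.reshape(4, 8) @ (2.0 ** -np.arange(1, 9))   # 4 gains in (0, 1)
    return np.sum((gains - 0.3) ** 2)            # placeholder for the H-infinity criterion

print(pbil(32, damping_cost))
```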
- ItemOpen Access A review of the development of environmental impact assessment legislation in selected African countries (1994) Mubangizi, John Cantius. Legislation by its nature is a dynamic process. In order to keep abreast with changing circumstances and demands, new legislation often has to be introduced and old legislation amended. Recent global developments have led to widespread environmental awareness and the need for better methods of environmental protection. As a result there have been remarkable developments in the field of environmental law during the last two or three decades. Environmental Impact Assessment legislation is one area in which changes have been quite profound. This study takes into account such changes and it is for this reason that I have to point out that the information contained in this study is valid as at the end of August 1994.
- ItemOpen Access A simple method for visualizing labelled and unlabelled data in high-dimensional spaces (2004) Greene, J R. The low-dimensional visualisation of high-dimensional data is a valuable way of detecting structure (such as clusters, and the presence of outliers) in the data, and avoiding some of the pitfalls of blind data manipulation. Projection based on principal component analysis is widely employed and often useful, but it is a variance-preserving projection which takes no account of class labels, and may, for this reason, hide significant structure. Here we present a very simple method which appears to yield useful visualizations for many datasets. It is based on a random search for a linear transformation, and projection into a two-dimensional visual space, which maximises an objective measure of class separability in the visual space. The method, which can be thought of as a variant of projection pursuit with a novel interest measure, is demonstrated on datasets from the UCI Repository. Tentative interim results are also given for a proposed extension based on spectral clustering, for extending the method to unlabelled data.
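The core of the method, a random search over linear maps into the plane scored by class separability, can be sketched as follows. The Gaussian sampling of projection matrices and the scatter-ratio score are illustrative choices standing in for the paper's interest measure.

```python
# Random search for a 2-D projection that separates the classes (illustrative sketch).
import numpy as np
from scipy.spatial.distance import cdist

def separability(Y, labels):
    """Crude score: mean between-class-mean distance over mean within-class spread."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    means = np.array([Y[labels == c].mean(axis=0) for c in classes])
    within = np.mean([Y[labels == c].std(axis=0).sum() for c in classes])
    return cdist(means, means).mean() / (within + 1e-9)

def best_random_projection(X, labels, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    best_W, best_score = None, -np.inf
    for _ in range(trials):
        W = rng.normal(size=(X.shape[1], 2))     # random linear map to the plane
        score = separability(X @ W, labels)
        if score > best_score:
            best_W, best_score = W, score
    return best_W, best_score                    # project with X @ best_W for plotting
```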
- ItemOpen Access A spaceborne Synthetic Aperture Radar (SAR) processor design (1991) Kritzinger, Paul Johan; Inggs, M R