Browsing by Subject "electrical engineering"
- Open Access: A contact-implicit direct trajectory optimization scheme for the study of legged maneuverability (2022) Shield, Stacey; Patel, Amir. For legged robots to move safely in unpredictable environments, they need to be manoeuvrable, but transient motions such as acceleration, deceleration and turning have been the subject of little research compared to constant-speed gait. They are difficult to study for two reasons: firstly, the way they are executed is highly sensitive to factors such as morphology and traction, and secondly, they can potentially be dangerous, especially when executed rapidly or from high speeds. These challenges make such manoeuvres an ideal topic for study by simulation, as this allows all variables to be precisely controlled and puts no human, animal or robotic subjects at risk. Trajectory optimization is a promising method for simulating these manoeuvres, because it allows complete motion trajectories to be generated when neither the input actuation nor the output motion is known. Furthermore, it produces solutions that optimize a given objective, such as minimizing the distance required to stop or the effort exerted by the actuators throughout the motion. It has consequently become a popular technique for high-level motion planning in robotics and for studying locomotion in biomechanics. In this dissertation, we present a novel approach to studying motion with trajectory optimization, by viewing it more as "trajectory generation" – a means of generating large quantities of synthetic data that can illuminate the differences between successful and unsuccessful motion strategies when studied in aggregate. One distinctive feature of this approach is the focus on whole-body models, which capture the specific morphology of the subject, rather than the highly simplified "template" models that are typically used. Another is the use of "contact-implicit" methods, which allow an appropriate footfall sequence to be discovered rather than requiring that it be defined upfront. Although contact-implicit methods are not novel, they are not widely used, as they are computationally demanding and unnecessary when studying comparatively predictable constant-speed locomotion. The second section of this dissertation describes innovations in the formulation of these trajectory optimization problems as nonlinear programming problems (NLPs). This "direct" approach allows the problems to be solved by general-purpose, open-source algorithms, making the method accessible to scientists without the specialized applied mathematics knowledge otherwise required to solve NLPs. The design of the NLP has a significant impact on the accuracy of the result, the quality of the solution (with respect to the final value of the objective function), and the time required to solve the problem.
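The "direct" transcription described in this abstract (motion as decision variables, dynamics as equality constraints, a general-purpose solver doing the work) can be illustrated with a deliberately tiny sketch. This is not the dissertation's code or model: the double-integrator point mass, the 2 s horizon, the 20 collocation nodes and the choice of SciPy's SLSQP solver are all illustrative assumptions standing in for the whole-body, contact-implicit formulation the work actually uses.

```python
# Minimal direct trajectory optimization sketch: a double-integrator "robot"
# must travel 1 m and come to rest, minimizing actuator effort. The trajectory
# is transcribed into an NLP via trapezoidal collocation and handed to a
# general-purpose solver (SciPy's SLSQP).
import numpy as np
from scipy.optimize import minimize

N = 20          # number of collocation nodes (illustrative choice)
T = 2.0         # fixed final time [s]
h = T / (N - 1) # time step between nodes

def unpack(z):
    # decision vector holds position, velocity and control at every node
    x, v, u = z[:N], z[N:2*N], z[2*N:]
    return x, v, u

def effort(z):
    _, _, u = unpack(z)
    return h * np.sum(u**2)  # objective: integral of u^2 (rectangle rule)

def defects(z):
    x, v, u = unpack(z)
    # trapezoidal collocation: each state step must match the integrated dynamics
    dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])
    dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])
    return np.concatenate([dx, dv])

def boundary(z):
    x, v, _ = unpack(z)
    # start at rest at x=0, end at rest at x=1
    return np.array([x[0], v[0], x[-1] - 1.0, v[-1]])

z0 = np.zeros(3 * N)
z0[:N] = np.linspace(0.0, 1.0, N)  # straight-line initial guess
res = minimize(effort, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}])
x, v, u = unpack(res.x)
```

Because neither the control history nor the resulting motion is specified in advance, the solver discovers both at once, which is the property the abstract highlights; contact-implicit formulations add the footfall sequence to the set of discovered quantities.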
- Open Access: African perspective on integrated space and air traffic management (2019) Gairiseb, Alexander; Martinez, Peter. Space Traffic Management (STM) is an emerging area of interest in the space sector because States and private actors are collaborating on ways to manage the growing congestion in orbit and to mitigate the impact of space debris and space weather as part of the sustainable use and exploration of outer space. Further, the pace at which commercial space operations are mushrooming, and the potential for growth that the suborbital space flight market presents, have led to talks about integrating space and air traffic management through technological interfaces and harmonised regulatory regimes. However, the current global challenge is twofold: the lack of a legal framework, either in the existing space-related treaties or through the adoption of a new treaty, regulating STM in the way the other traffic regimes, namely aviation and maritime, are regulated; and the advancement in technology needed to seamlessly integrate STM and Air Traffic Management (ATM). Therefore, the proposed integration of space and air traffic management necessitates an analysis of African perspectives on consolidating the two traffic regimes, taking into account the fact that ATM in Africa is fragmented. Hence, this study analyses the legal aspects of integrating Space and Air Traffic Management from the African perspective.
- Open Access: An open source array antenna toolbox implementation (2021) Jaffer, Abubaker; Schonken, Francois. Around the early 1900s, the first transmission of radio waves by means of phased array antennas was demonstrated. Since then, the further development of phased array radars was largely driven from a military point of view, with operational phased array systems implemented as far back as the Second World War. In recent times, phased array antenna systems have become much more prevalent, not only in a military context but also increasingly in the commercial space, with array antenna implementations used in satellite communication systems and direction finding systems around the world. With this in mind, this dissertation presents the development, functionality and results of a phased array simulation toolbox developed in the open source programming language Julia. The main concepts demonstrated include various array antenna configurations (linear, planar, circular, cylindrical, spherical and conical), each of which is customisable in terms of beam steering, number of array elements and inter-element spacing, and in some cases supports array tapering. The aim behind the development of the array toolbox is to provide array antenna enthusiasts and students with a simple-to-use simulation package, enabling investigation of the effect of various array antenna parameters on the resulting antenna pattern.
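The beam-steering behaviour such a toolbox simulates rests on the array factor. The toolbox itself is written in Julia; the sketch below restates only the underlying uniform-linear-array mathematics in Python, and the function name, element count, spacing and steering angle are arbitrary illustrative choices, not the toolbox's API.

```python
# Array-factor sketch for a uniform linear array (ULA) with phase steering.
import numpy as np

def array_factor(n_elements, d_over_lambda, steer_deg, theta_deg):
    """Normalised array-factor magnitude of a ULA of isotropic elements.

    n_elements    : number of elements
    d_over_lambda : inter-element spacing in wavelengths
    steer_deg     : beam-steering angle from broadside [deg]
    theta_deg     : observation angles [deg]
    """
    theta = np.radians(np.asarray(theta_deg, dtype=float))
    steer = np.radians(steer_deg)
    k_d = 2 * np.pi * d_over_lambda
    n = np.arange(n_elements)[:, None]
    # a progressive phase shift across elements steers the main beam
    psi = k_d * (np.sin(theta)[None, :] - np.sin(steer))
    af = np.abs(np.sum(np.exp(1j * n * psi), axis=0)) / n_elements
    return af

theta = np.linspace(-90.0, 90.0, 721)
af = array_factor(8, 0.5, 30.0, theta)  # 8 elements, lambda/2 spacing, 30 deg steer
```

With half-wavelength spacing the main lobe lands at the commanded 30 degrees with no grating lobes in visible space, which is the kind of parameter study the toolbox is meant to make easy.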
- Open Access: Assessment of the potential for developing a micro-launcher industry in South Africa (2019) Campbell, Victoria; Martinez, Peter. Small satellites have dramatically lowered the barriers to participating in space activities for many emerging countries, including South Africa. The rapid uptake of this facet of space technology has spurred the development of several micro-launchers dedicated to lofting small satellites to low Earth orbit. However, the majority of these micro-launcher initiatives and of the spaceports in use are located in the northern hemisphere, and there are currently no operational spaceports in Africa. In this study the potential for developing a micro-launcher industry in South Africa is explored, building on the launch facilities established for the previous space programme of the 1980s and early 1990s, and on existing capabilities in present-day academic institutions and industry. Potential markets, financial requirements, technical feasibility, available infrastructure, and regulatory and policy aspects of such a venture are reviewed with respect to South Africa's current political situation and attitude towards space activities. Several possible options for establishing small satellite launch capabilities in South Africa are used as a framework to assess the feasibility of a micro-launcher industry. These range from a simple "ship and shoot" scenario with no indigenously developed technology to more complex cooperative arrangements which would, to varying degrees, require technology transfers and cooperation with potential international partners.
- Open Access: Design and analysis of high-speed electronics for the electro-optical payload of small satellites (2022) Naveed, Mohammad; Martinez, Peter. As the resolution of Earth observation satellites increases, the cameras on these satellites require more detectors to cover the required swath, and the image sensors have to operate at very high speed, with the sensor electronics requiring faster clock rates and larger bandwidth. The sensor data handler has to transfer a large amount of data to the spacecraft in real time, so the outcomes of signal integrity and power integrity analysis must be incorporated into the design. High-speed analysis is therefore an important consideration for high-resolution satellite cameras. This research presents the design and analysis of high-speed electronics for small Earth observation satellites. A methodology is defined for the design of high-speed electronics that involves both pre-layout and post-layout designs for signal and power integrity analysis. The work provides the pre-layout and post-layout signal integrity analysis of the high-speed electronics and interfaces, and validates the signal integrity performance of the module by comparing it with standard performance parameters. Similarly, a pre-layout and post-layout power integrity analysis of the high-speed electronics and interfaces, and of their effects on the power lines and power planes, is performed.
- Open Access: Design consideration, trade-off and performance analysis of a high-resolution optical telescope design for a satellite (2023) Raza, Asim; Martinez, Peter. High-resolution satellite imagery plays an increasingly important role in applications ranging from environmental protection, disaster response and precision farming to defence and security. The design of a high-resolution payload requires relevant technical expertise, expensive equipment and software. Different aspects of telescopes have been researched separately, but the process of translating system requirements into an actual optical design for a high-resolution payload is the unique and challenging focus of this dissertation. This research helps to illustrate a system-level approach to designing a complex system. Its scope is to identify optics requirements from system requirements, explore optical design concepts, and trade off those concepts against the system requirements of a high-resolution optical payload. The research follows the Space Mission Analysis and Design (SMAD) process to explore the design engineering of an optical payload from objectives and requirements through to detailed design. It also presents different optical layouts and their performance analysis in terms of modulation transfer function (MTF) and tolerances. It does not include the opto-mechanical design, thermal design, focal plane array assembly, manufacturing or detailed assembly, integration and testing (AIT) procedure.
- Open Access: Development of a multilevel converter topology for transformer-less connection of renewable energy systems (2023) Ajayi-Obe, Akinola Ayodeji; Khan, Azeem. The global need to reduce dependence on fossil fuels for electricity production has become an ongoing research theme in the last decade. Clean energy sources (such as wind energy and solar energy) have considerable potential to reduce reliance on fossil fuels and mitigate climate change, and wind energy in particular is becoming mainstream due to technological advancement and geographical availability. Various technologies therefore exist to maximize the inherent advantages of using wind energy conversion systems (WECSs) to generate electrical power. One important technology is the power electronics interface that enables the transfer and effective control of electrical power from the renewable energy source to the grid through the filter and isolation transformer. However, the transformer is bulky, generates losses, and is also very costly. The term "transformer-less connection" refers to eliminating the step-up transformer from the WECS, while the power conversion stage performs the conventional functions of a transformer. Existing power converter configurations for transformer-less connection of a WECS are based either on the generator-converter configuration or on a three-stage power converter configuration. These configurations consist of conventional multilevel converter topologies and two-stage power conversion between the generator-side converter topology and the high-order filter connected to the collection point of the wind power plant (WPP). Thus, the complexity and cost of these existing configurations are significant at higher voltage and power ratings. Therefore, a single-stage multilevel converter topology is proposed to simplify the power conversion stage of a transformer-less WECS.
Furthermore, the primary design challenges – such as multiple clamping devices, multiple dc-link capacitors, and series-connected power semiconductor devices – have been mitigated by the proposed converter topology. The proposed converter topology, known as the "tapped inductor quasi-Z-source nested neutral-point-clamped (NNPC) converter," has been analyzed and designed, and a prototype of the topology was developed for experimental verification. A field-programmable gate array (FPGA)-based modulation technique and a voltage balancing control technique for maintaining the clamping capacitor voltages were developed. Hence, the proposed converter topology presents a single-stage power conversion configuration. The efficiency of the proposed converter topology has been studied and compared to that of the intermediate and grid-side converter topology of a three-stage power converter configuration. A direct current (DC) component minimization technique to minimize the dc component generated by the proposed converter topology was investigated, developed, and verified experimentally. The proposed dc component minimization technique consists of sensing and measurement circuitry with a digital notch filter. This thesis presents a detailed and comprehensive overview of the existing power converter configurations developed for transformer-less WECS applications. Based on the developed comparative benchmark factor (CBF), the merits and demerits of each power converter configuration in terms of component counts and grid compliance have been presented. In terms of cost comparison, the three-stage power converter configuration is more cost-effective than the generator-converter configuration. Furthermore, the cost-benefit analysis of deploying transformer-less WECSs in a WPP is evaluated and compared with conventional WECSs in a WPP, based on power converter configurations and the collection system.
Overall, the total cost of the collection system of a WPP with transformer-less WECSs is about 23% less than the total cost of a WPP with conventional WECSs. The derivation and theoretical analysis of the proposed five-level tapped inductor quasi-Z-source NNPC converter topology have been presented, emphasizing its operating principles and steady-state analysis, and deriving equations to calculate its inductance and capacitance values. Furthermore, the FPGA implementation of the proposed converter topology was verified experimentally with a developed prototype of the topology. The efficiency of the proposed converter topology has been evaluated by varying the switching frequency and loads. The proposed converter topology is more efficient than the five-level DC-DC converter with a five-level diode-clamped converter (DCC) topology under the three-stage power converter configuration. Also, the cost analysis of the proposed converter topology and the conventional converter topology shows that it is more economical to deploy the proposed converter topology at the grid side of a transformer-less WECS.
- Open Access: Development of a power conditioner for a PMSG-based wind energy system integrated into a weak grid (2020) Khan, Akrama; Khan, Mohamed; Malengret, Michel. With the growing use of non-linear loads and their ever-changing nature, electricity networks continually experience power imbalance. These non-linear, asymmetrical loads draw distorted, unbalanced currents and voltages at the point of common coupling (PCC), which propagate into the distribution network. Power quality has therefore become an important issue, which has resulted in the development of numerous control strategies and other interventions to maintain the integrity of the electric network. Recent advancements in power electronics have provided new ways to optimize power systems by regulating the active power transfer. These developments create opportunities for renewable energy systems to harness energy and at the same time inject optimized currents into the network by means of distributed units. An emerging problem with most such units is that they are located far from the PCC and are usually designed for small linear loads. Moreover, the problem is exacerbated during overload conditions, when the voltage level drops below the allowed minimum due to the high network impedance that characterizes a weak grid. This thesis studies such scenarios, in which a permanent magnet synchronous generator (PMSG) based wind energy conversion system (WECS) is integrated into a weak AC grid. The system comprises a machine-side converter (MSC) and a grid-side converter (GSC), provides available ancillary services, and is envisaged to augment existing power quality conditioners such as STATCOM devices. To represent a weak grid, a Thevenin equivalent model of the electric network with unbalanced loads is considered. The main objective of this project is to transform the traditional converter topology into a versatile system that can perform as a power conditioner.
In particular, the system monitors a distribution line, senses changes in the load, detects faults and redistributes the currents to ensure maximized power transfer into the network. It is capable of independent injection of active and reactive currents within defined limits. Since the system is integrated into a weak grid, the perceived load is always considered to be unbalanced. Under this condition, if a fault occurs at one or two phases, unbalanced voltages are observed at the PCC. Two scenarios are created for the case study. Firstly, a no-fault case is considered with symmetrical voltages at the PCC. To ensure maximum power transfer into the network with least losses, a set of currents is injected according to the optimal current injection technique. Secondly, asymmetrical faults are considered at the PCC and currents are injected according to the coordinated sequence current injection technique. This technique defines a new current injection limit which not only improves the power transfer but also enhances the power factor. Furthermore, the peak magnitude of the three-phase currents is kept within the rated current limit. For both scenarios described above, the MSC regulates the DC link voltage so as to limit the active power coming from the generator according to the grid condition. The GSC performs two important functions: it implements small active/reactive power perturbations for impedance estimation, and once the impedances are determined, the magnitudes of the required currents are calculated and injected based on the proposed techniques. Validation of the analysis is done experimentally on a 3.3 kW PMSG connected to a programmable regenerative power supply which emulates a weak grid. The MSC and GSC utilized in this project are conventional two-level converters controlled by an FPGA-based controller.
- Open Access: Fusion of sensor information to measure the total energy of an aircraft and provide information about flight performance and local microclimate (2013) Johnson, Bruce; Verrinder, Robyn; Ginsberg, Samuel. The use of Unmanned Aerial Vehicles (UAVs) to locate thermal updraft currents is a relatively new topic. It was first proposed in 1998 by John Wharington, and several researchers have subsequently developed algorithms to search for and exploit thermals. However, few have physically implemented a system and performed field testing. The aim of this project was to develop a low-cost system, carried on a glider, to detect thermals effectively. The system was developed from the ground up and consisted of custom hardware and software developed specifically for aircraft. Data fusion was performed to estimate the attitude of the aircraft using a direction cosine matrix (DCM) based method. Altitude and airspeed data were fused by estimating potential and kinetic energy respectively, thus determining the aircraft's total energy. These data were then interpreted to locate thermal activity. The system comprised an Inertial Measurement Unit (IMU), airspeed sensor, barometric altitude sensor, Global Positioning System (GPS) receiver, temperature sensor, SD card and a real-time telemetry link. These features allowed the system to determine aircraft position, height, airspeed and air temperature in real time. A custom-designed radio controlled (RC) glider was constructed from composite materials, in addition to a second 3.6 m production glider used during flight testing. Sensor calibration was done in a wind tunnel with custom-designed apparatus that allowed a complete wing with its pitot tube to be tested in one operation. Flight testing was conducted in the field at several different locations over the course of six months. A total of 25 recorded flights were made during this period.
Both thermal soaring and ridge soaring were performed to test the system under varying weather conditions. A telemetry link was developed to transfer data in real time from the aircraft to a custom ground station. The recorded results were post-processed using MATLAB and showed that the system was able to detect thermal updrafts. The sensors used in the system were shown to provide acceptable performance once calibrated. Sensor noise proved to be problematic, and time was spent alleviating its effects. Results showed that the system was able to measure airspeed to within ±1 km/h. The standard deviation of the altitude estimate was determined to be 0.94 m, which was deemed satisfactory. The system was highly reliable and no faults occurred during operation. In conclusion, the project showed that inexpensive sensors and low-power microcontrollers can be used very effectively for detecting thermals.
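The energy fusion described in this abstract (barometric altitude giving potential energy, airspeed giving kinetic energy) can be sketched in a few lines. The function names and the sample flight values below are illustrative assumptions, not the project's implementation.

```python
# Total-energy sketch for thermal detection: fuse altitude and airspeed into
# a per-unit-mass total energy E/m = g*h + v^2/2, whose rate of change
# (beyond the glider's expected sink) indicates rising air.
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def specific_total_energy(altitude_m, airspeed_ms):
    """Total energy per unit mass, E/m = g*h + v^2/2, in J/kg."""
    return G * np.asarray(altitude_m) + 0.5 * np.asarray(airspeed_ms) ** 2

def energy_climb_rate(altitude_m, airspeed_ms, dt):
    """Equivalent climb rate, d(E/m)/dt divided by g, in m/s."""
    e = specific_total_energy(altitude_m, airspeed_ms)
    return np.gradient(e, dt) / G

# A pull-up trades speed for height: total energy stays constant, so the
# energy climb rate stays near zero and no thermal is (wrongly) flagged.
dt = 0.1
t = np.arange(0.0, 5.0, dt)
alt = 100.0 + 2.0 * t                        # gaining height at 2 m/s ...
spd = np.sqrt(15.0**2 - 2.0 * G * 2.0 * t)   # ... purely by slowing down
rate = energy_climb_rate(alt, spd, dt)
```

This is why total energy is the right variometer signal for soaring: a plain altitude rate would read 2 m/s during the pull-up above, while the energy climb rate correctly reads zero, so only genuinely rising air registers.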
- Open Access: Investigation of ground moving target indication techniques for a multi-channel synthetic aperture radar (2020) Mosito, Katlego Ernest; Abdul Gaffar, Mohammed Yunus; de Witt, Josias Jacobus. Synthetic Aperture Radar (SAR) is an imaging technique that creates two-dimensional images of the scattering objects in an illuminated ground scene. The objects in the scene may be truly stationary, e.g. buildings, or in motion relative to these stationary objects, e.g. cars on a highway. In SAR, the radar platform is moving during the imaging period, hence everything that the radar illuminates has motion relative to the radar platform. To specifically detect objects on the ground that are moving relative to stationary ground objects (often termed clutter), processing techniques called Ground Moving Target Indication (GMTI) techniques are required. This is especially so for targets whose relative velocities are lower than the stationary clutter's relative velocity to the radar platform (endo-clutter detection). This dissertation reviews five multichannel GMTI techniques from the literature, namely Displaced Phase Centre Antenna (DPCA), Along-Track Interferometry (ATI), Iterative Adaptive Approach (IAA), Space-Time Adaptive Processing (STAP) and Velocity SAR (VSAR), and assesses the performance of two selected techniques (ATI and DPCA) on simulated and measured radar data to compare them and identify their strengths and weaknesses. The radar data were measured with a C-band FMCW radar in a controlled environment with known parameters and cooperating targets. The performance of the techniques was assessed in terms of moving target detection within clutter and sensitivity to inaccuracies in the physical system setup. The DPCA technique exhibited some attractive characteristics over the ATI technique.
These included its robustness against false alarms in noise-dominated cells: ATI exhibited large phase residuals in such cells, due to the random nature of the phase there. Furthermore, DPCA did not appear to suffer from false alarms due to volumetric scattering of vegetation to the extent observed with ATI. Lastly, DPCA exhibited more robustness than ATI against temporal misalignment errors introduced between the measurement channels. These observations led to the conclusion that DPCA would be the practically better choice to implement for moving target detection. However, a double threshold approach, which used DPCA as a pre-processing step to ATI, proved superior to DPCA alone in terms of moving target indication within clutter and noise. This approach was verified through implementation on the measured radar data in this study.
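The clutter-cancellation idea behind DPCA can be shown with a toy one-dimensional illustration. This is not the dissertation's processing chain: the pulse count, Doppler value and amplitudes below are arbitrary, and real DPCA additionally requires precise channel alignment, which is exactly where the temporal misalignment sensitivity discussed above arises.

```python
# Toy DPCA illustration: two along-track channels observe the same scene one
# pulse apart. After alignment, stationary clutter is identical in both
# channels and subtracts to zero, while a moving target's Doppler phase has
# advanced between pulses and therefore leaves a residue.
import numpy as np

rng = np.random.default_rng(0)
n_pulses = 64

# Stationary clutter: identical returns in both (aligned) channels.
clutter = rng.normal(size=n_pulses) + 1j * rng.normal(size=n_pulses)

# Moving target: constant amplitude with a Doppler phase ramp.
doppler = 0.1  # cycles per pulse (illustrative)
target = 0.5 * np.exp(2j * np.pi * doppler * np.arange(n_pulses))

ch1 = clutter + target
# Channel 2 sees the scene one pulse later: clutter unchanged, target phase
# advanced by one pulse worth of Doppler.
ch2 = clutter + target * np.exp(2j * np.pi * doppler)

residue = ch2 - ch1  # clutter cancels exactly; only the mover survives
```

The residue magnitude is |a|·2·sin(pi·f_d) for a target of amplitude a and Doppler f_d, which also shows DPCA's known blind speeds: a mover whose Doppler is an integer number of cycles per pulse cancels along with the clutter.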
- Open Access: Investigation of stability for composite operational amplifiers (1987) Schneider, Carl; Reineck, K. M. In the study of a paper by Campbell and Stephenson [1] on the possibilities of employing composite operational amplifiers to extend the high-frequency performance of conventional RC active filters, it became evident that theoretical predictions of stability and experimental results did not agree. Other publications concerned with composite operational amplifiers merely presented circuits where it was implied that these would work under most conditions. However, when one set out to build such amplifiers and investigated their behaviour in frequency filters, it emerged that severe stability problems beset such studies. The work presented here was initiated by the fact that, hitherto, the problem of stability had not received the attention it warranted. The experimental results obtained by Campbell and Stephenson made use of the Composite Two Operational Amplifier (C2OA) of Mikhael and Nessim [2]; for this reason, the investigations presented here also made use of this amplifier. Theoretical studies by Campbell and Stephenson showed significant deviations from the experimental results, something which obviously required further investigation. By using the Nyquist diagram stability analysis technique to determine the stability of the open-loop system, it became possible to investigate the effect of the higher-order terms of operational amplifier models. In fact, using Millman's [37] single-, double-, and triple-pole models of the operational amplifier, the results obtained came close to those obtained by experiment. However, the procedure involves lengthy mathematical manipulations, and it was therefore decided to apply a standard stability evaluation technique of sufficient accuracy: the relatively simple Routh criterion, with which stability could be very conveniently established.
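The Routh criterion mentioned in this abstract reduces the stability question to the signs of the first column of a simple tabular computation on the closed-loop characteristic polynomial. The sketch below is a generic illustration of that table, not code from the thesis; it handles only the regular case (no zero pivots in the first column) and assumes a positive leading coefficient.

```python
# Routh criterion sketch: a polynomial has all roots in the left half-plane
# iff the first column of its Routh array has no sign changes.
import numpy as np

def routh_first_column(coeffs):
    """First column of the Routh array for a polynomial given by its
    coefficients in descending powers of s (regular case only)."""
    c = np.asarray(coeffs, dtype=float)
    n = len(c)
    rows = [c[0::2].copy(), c[1::2].copy()]
    if len(rows[1]) < len(rows[0]):
        rows[1] = np.append(rows[1], 0.0)  # pad second row with a zero
    for _ in range(n - 2):
        a, b = rows[-2], rows[-1]
        new = np.zeros_like(a)
        # standard cross-multiplication rule for each Routh-array entry
        for j in range(len(a) - 1):
            new[j] = (b[0] * a[j + 1] - a[0] * b[j + 1]) / b[0]
        rows.append(new)
    return np.array([r[0] for r in rows[:n]])

def is_stable(coeffs):
    """True iff all first-column entries are positive (leading coeff > 0)."""
    return bool(np.all(routh_first_column(coeffs) > 0))
```

For example, s^3 + 2s^2 + 3s + 1 gives the first column [1, 2, 2.5, 1] (stable), while s^3 + s^2 + s + 2 produces a sign change (unstable); applied to a composite amplifier's characteristic polynomial, the same test flags the instabilities the Nyquist analysis revealed, without the lengthy manipulations.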
- Open Access: Joint radio resource management in heterogeneous wireless networks (2008) Suleiman, Kamil Hussien. The complementary nature of wireless access technologies has resulted in the concept of integrating overlaid wireless access networks to create a robust and ubiquitous system called a heterogeneous wireless network. In such a system, any mobile user running any application can connect to any of the available access technologies. The envisaged heterogeneous wireless networks will create a more efficient and cost-effective system for service providers and better services for network users. However, the integration of different access technologies is a considerable technological challenge, and designing a good joint radio resource management (JRRM) scheme for a heterogeneous wireless network is part of that challenge. The design goal of JRRM is to optimise the trade-off between resource utilisation, quality of service provisioning, fairness, and simplicity of design, amongst others. The diversity of wireless access networks and the resulting complexity of interworking different access networks leave many issues of JRRM open. The fact that many analytical and simulation models of heterogeneous wireless networks are not based on realistic assumptions often limits the practicality of research outputs. In this work, we design JRRM taking into consideration the different features of the component access technologies of a heterogeneous wireless environment. We also consider important realistic features of access networks, such as the asymmetry of their overlap and the variability of network resources with traffic conditions. We use the network simulator ns-2 to validate our design.
- Open Access: Long-term reactivity transient analysis in an extended loss of all AC power at the Koeberg nuclear plant (2022) Foster, Neil; Aschman, David. This study investigates, through neutronics and thermodynamic modelling of Koeberg Nuclear Power Station, a 900 MW pressurized water reactor (PWR), whether extended loss of all AC power (ELAP) scenarios will result in an uncontrolled return to criticality. The purpose is to determine whether recriticality can be avoided without performing plant modifications or significantly adjusting the existing procedures to depressurize the primary system. The concern is that many currently proposed solutions introduce the risk of further complications, such as a loss of coolant accident (LOCA). The Fukushima Daiichi accident in 2011 was caused by a tsunami that resulted in an extended loss of all AC power at the power station, causing a major nuclear accident and highlighting the importance of introducing measures to deal with ELAP scenarios over and above the short-term AC power loss scenarios previously catered for. The immediate concern in an ELAP is to ensure sufficient removal of heat from the primary system. This is achieved by providing adequate secondary-side feed-water and steam-evacuation capability from the steam generators, and by maintaining an adequate volume of borated water in the primary system, to prevent overheating of the reactor core. However, in the ELAP scenario a long-term reactivity concern develops, as additional reactivity is introduced into the core by the cooling of the moderator (during the course of recovery actions), which is compounded by the decay of neutron poisons in the core. The xenon isotope Xe-135 is a significant neutron poison which accumulates in the fuel of an operating nuclear reactor. Its presence in the fuel of a recently-tripped reactor initially helps to maintain subcriticality, but it eventually decays away.
This depletion starts at about 8 hours after a reactor trip, gradually adding more reactivity to the core. If borated water cannot be injected into the primary system to compensate, the reactor could return to criticality in an uncontrolled manner. Although this would be self-limiting (due to re-heating of the moderator and fuel), and not catastrophic in itself, the undesired generation of additional nuclear power would, by consuming already limited supplies of cooling water, decrease the time available for recovery before the fuel in the reactor core overheats and melts. Utilities have made proposals to maintain sub-criticality in an ELAP scenario which do not require AC power; they involve introducing boron into the primary system. They are often costly, and some require the operator to open the primary system relief valves, introducing the additional risk of a relief valve failing to close, leading to a loss of coolant accident and a core melt. The primary aim of the study was to determine whether, in an ELAP scenario, it is necessary to provide additional boron injection (over and above the existing accumulator inventory) to maintain reactor sub-criticality. This was achieved using neutronics and thermodynamic modelling of Koeberg Nuclear Power Station. Another area of focus was to confirm that the existing cooldown strategies to mitigate ELAP events are sufficient to maintain sub-criticality. After modelling and assessing the ELAP scenario over four different stages of the fuel cycle, it was concluded that, with best-estimate assumptions, the selected reactivity acceptance criteria were met. However, with assumptions that envelope the majority of fuel-cycle variances and code uncertainty (i.e. greater than 97.5% of cases), meeting the acceptance criteria for the latter part of the fuel cycle could not be demonstrated. Some potential solutions to ensure long-term sub-criticality are proposed.
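The post-trip xenon behaviour underlying this reactivity concern follows from the standard iodine-xenon decay chain. The sketch below is illustrative physics, not the plant model used in the study: after shutdown the flux is zero, so Xe-135 is fed only by decay of the remaining iodine-135 inventory and removed only by its own decay, and the normalised initial inventories chosen here are assumptions.

```python
# Post-trip Xe-135 transient: Bateman solution of the I-135 -> Xe-135 chain
# at zero flux. Xenon first builds up (adding negative reactivity), peaks,
# then decays away (the delayed positive reactivity addition).
import numpy as np

LAMBDA_I = np.log(2) / 6.57    # I-135 decay constant [1/h] (6.57 h half-life)
LAMBDA_XE = np.log(2) / 9.14   # Xe-135 decay constant [1/h] (9.14 h half-life)

def xenon_after_trip(t_hours, i0, xe0):
    """Xe-135 inventory t hours after trip, from initial inventories i0, xe0."""
    t = np.asarray(t_hours, dtype=float)
    feed = i0 * LAMBDA_I / (LAMBDA_I - LAMBDA_XE)
    return xe0 * np.exp(-LAMBDA_XE * t) + feed * (
        np.exp(-LAMBDA_XE * t) - np.exp(-LAMBDA_I * t))

# Assumed inventories with a large iodine reservoir, as in a high-flux core.
t = np.linspace(0.0, 72.0, 721)
xe = xenon_after_trip(t, i0=3.0, xe0=1.0)
t_peak = t[np.argmax(xe)]
```

With these illustrative inventories the xenon peak falls roughly eight hours after the trip, after which the inventory depletes and reactivity is steadily returned to the core, matching the timeline described in the abstract.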
- ItemOpen AccessMeasurements and finite element modelling of transformer flux with dc and power frequency current(2019) Chisepo, Hilary Kudzai; Gaunt, Charles T; Folly, Komla AGeomagnetically induced currents (GICs) caused by solar storms or other sources of dc excitation in the presence of ac energization can disturb the normal operation of power transformers. If large enough, they cause half-cycle saturation of a power transformer's core, which can lead to overheating due to excessive stray flux. Finite element method (FEM) modelling software is of considerable use in transformer engineering as it is able to solve electromagnetic fields in transformers. For many problems, typically involving only specific parts of a transformer, fairly accurate solutions can be reached quickly. Modelling the effects of GIC or leakage currents from dc systems, however, is more complex because dc components are superimposed on ac in transformers with nonlinear electrical core steel parameters. At the beginning of the investigation, FEM models of different bench-scale laboratory transformers and a 40 MVA three-phase three-limb power transformer were investigated, but the results did not sufficiently represent the measurement data due to the application of widely used modelling assumptions regarding the transformer joints. Following the preliminary analyses, practical measurements and FEM simulations were carried out using three industrially made model single-phase four-limb transformers (1p4L) without tanks. These test transformers resemble a real power transformer because they have high-quality grain-oriented electrical core steel and parallel winding assemblies. Practical laboratory measurements recorded during ac testing were used to calibrate 2D FEM models by adding "equivalent air gaps" at the joints. The implementation of this joint detail helped to overcome the shortcomings of the preliminary FEM simulation.
Analyses of the electrical and magnetic responses of the FEM models under simultaneous ac and dc excitation then followed. A refined 3D FEM simulation with more detailed modelling of the core joints of the 1p4L model transformers agreed more closely with the practical measurements of ac-only no-load conditions. Further, the depiction of stray flux leaving the transformer's saturated core under simultaneous ac and dc excitation showed that the approach better reproduced what was measured on the physical model. Saturation inductance (Lsat) is an important parameter for input into mid- to low-frequency lumped parameter transformer models that are used in electromagnetic transients software such as PSCAD/EMTDC, but it is not easily measured and is seldom provided by manufacturers. Some Lsat measurements on the 1p4L test transformers are presented in this thesis, along with some 3D FEM analyses. The measurements and FEM analyses investigated "air core inductance", which represents a transformer without a core, and "terminal saturation inductance", which represents deep saturation due to dc excitation. An important finding of this thesis is that "terminal saturation inductance" is the more useful of the two for topological transformer models investigating realistic GIC excitation. Further to this, a new composite depiction of half-cycle saturation with multi-parametric relationships, supported by measurement and simulation, is presented. The main contribution of this thesis is that it more accurately gives the electrical response and the distribution of the leakage flux under conditions such as those caused by GIC or other sources of dc excitation, through the inclusion of joint details in the FEM models calibrated against physical models. This calibration can aid transformer modelling and design in industry for mitigation of the effects of GICs, contributing to improved transformer survival during significant geomagnetic disturbances.
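The half-cycle saturation mechanism at the heart of this work can be sketched numerically: a dc flux offset superimposed on the ac flux swing drives one half-cycle of the core deep into saturation, producing large asymmetrical magnetizing current peaks. The saturation curve below is a generic high-order polynomial stand-in, not a fitted Koeberg or 1p4L characteristic:

```python
import math

def magnetizing_current(flux):
    """Toy single-valued saturation curve: a linear region plus a steep
    odd high-order term standing in for core saturation (illustrative
    only, not measured core steel data)."""
    return flux + 0.2 * flux ** 11

def peak_currents(flux_dc, flux_ac=1.0, samples=1000):
    """Peak positive and negative magnetizing current over one ac cycle
    when a dc flux offset is superimposed on the ac flux swing."""
    currents = [magnetizing_current(flux_dc + flux_ac * math.sin(2 * math.pi * n / samples))
                for n in range(samples)]
    return max(currents), min(currents)

# Without dc bias the current waveform is symmetrical; a modest dc
# offset produces half-cycle saturation with strongly asymmetrical peaks.
pos_no_dc, neg_no_dc = peak_currents(0.0)
pos_dc, neg_dc = peak_currents(0.2)
```

Even a dc offset of a fifth of the ac flux amplitude more than doubles the positive current peak here, which is why relatively small GICs can cause the excessive stray flux and heating the thesis investigates.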
- ItemOpen AccessModeration of high-energy fast neutrons in beryllium from a tokamak fusion reactor and heat transfer to the cooling water system(2020) Ellis, Benjamin; Leadbeater, Thomas; Hutton, TanyaThis study presents a modelling demonstration of the moderation in beryllium of 14.1 MeV primary neutrons emitted from a D-T fusion nuclear reaction. The energy deposited by neutron-beryllium interactions produces heat in the blanket of a fusion tokamak. A review of the literature and data available for neutron-beryllium interactions is provided to support the Monte Carlo (MC) simulation of a simplified model of the ITER first wall and blanket. The energy deposited in regions of the model, computed using FLUKA, is used to calculate a polynomial heat flux profile through the model. One-dimensional conductive heat transfer through the model is performed, and the cooling capacity of the coolant channels via convective heat transfer is explored.
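The conduction-plus-convection step described above can be sketched for a single slab: integrate a polynomial volumetric heat source to get the face-by-face heat flux, apply a convective boundary at the coolant wall, then integrate Fourier's law through the slab. All the numbers below (thickness, conductivity, source strength, film coefficient, coolant temperature) are placeholders, not ITER or FLUKA values:

```python
# Illustrative 1-D steady conduction through a slab blanket segment with
# a polynomial volumetric heat source q(x), cooled by convection at x=0
# and insulated at the far face x=L.
L = 0.05        # slab thickness (m) - placeholder
K = 150.0       # thermal conductivity, roughly that of beryllium (W/m/K)
H = 20_000.0    # convective film coefficient at the coolant wall (W/m^2/K)
T_COOL = 70.0   # bulk coolant temperature (deg C) - placeholder
Q0 = 5e6        # peak volumetric heating at the plasma side (W/m^3)
N = 500         # grid cells

dx = L / N
# Polynomial heating profile, rising toward the plasma-facing side x=L
# (a stand-in for the FLUKA-derived deposition profile).
q = [Q0 * ((i + 0.5) * dx / L) ** 2 for i in range(N)]

# Heat flux at each cell face from an energy balance: all heat generated
# beyond a face must be conducted back toward the coolant at x=0.
flux = [sum(q[i:]) * dx for i in range(N + 1)]  # flux[N] == 0 (insulated)

# Wall temperature from the convective boundary h*(T_wall - T_cool) = q'',
# then integrate Fourier's law dT/dx = flux/K inward from x=0.
T = [T_COOL + flux[0] / H]
for i in range(N):
    T.append(T[-1] + flux[i] * dx / K)
```

The total flux into the coolant equals the integrated source (Q0·L/3 for this quadratic profile), and the hottest point sits at the insulated plasma-facing face, as expected.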
- ItemOpen AccessMulti-wavelength observations of sprites over southern Africa(2020) Nnadih, Stanislaus Ogechukwu; Martinez, Peter; Kosch, MichaelSprites are short-lived, optical upper atmospheric lightning-induced phenomena that occur above an active thunderstorm, at an altitude range of 40-85 km. They are often described as electrical discharges in the mesosphere, following mostly large positive cloud-to-ground lightning discharges. Since their discovery in the late 1980s, sprites have been observed extensively on other continents, but in Africa there is little or no active sprite-related research. Despite the numerous observations of sprites to date, there is no conclusive study that has reviewed the electron energies and the strength of the electric field within sprites as a function of altitude. This thesis presents the first ground-based observations of sprites in southern Africa. These observations were conducted at the South African Astronomical Observatory, Sutherland, South Africa, during the austral summers of 2015/2016, 2016/2017 and 2017/2018, as well as at a site close to the South African Square Kilometer Array, Carnarvon, in 2017/2018. Sprites were observed using multiple cameras that were filtered at specific wavelengths. In 5 out of 65 nights of observations, 113 video frames containing one or more sprites were recorded, comprising different morphologies (Carrot-single (10%), Carrot/Column (10%), Carrot-groups (37%), Column-groups (12%), Tree-like (4%), Unclassified (23%), Jelly-fish (3%)). These events were between 429 and 890 km away from the observer. The error in this distance estimate was ±5% of the distance. During these observations, the cloud-top temperatures of the storms that initiated these events were about -58 degrees Celsius. Sprite events observed at specific wavelengths suggest that the first positive band of N2 dominates at the upper altitudes (around 65 km).
By using the Maxwell-Boltzmann energy distribution function in a collisional plasma, the average characteristic electron energies and the strength of the electric fields in sprites were estimated as 5.5 eV and 150 V/m respectively, which are comparable to those inferred from space-based observations. The charge moment change of the lightning strokes associated with the observed events agreed with the threshold for dielectric breakdown of the mesosphere and correlated well with the observed sprite brightness. The study also showed that the average detection rate for sprites in southern Africa was 0.14 sprites/minute and that carrot-shaped sprites are usually accompanied by a larger charge moment change than columniform sprites.
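To give a feel for the 5.5 eV figure, the textbook Maxwell-Boltzmann relation between mean kinetic energy and effective temperature, ⟨E⟩ = (3/2)kT, can be inverted. This is a standard relation used here only as an illustration; it is not a reproduction of the thesis's spectral analysis procedure:

```python
# Map a Maxwellian population's mean kinetic energy to an effective
# electron temperature via <E> = (3/2) k T (illustrative only).
K_BOLTZMANN_EV = 8.617333e-5  # Boltzmann constant in eV/K

def electron_temperature(mean_energy_ev):
    """Effective temperature (K) of a Maxwellian electron population
    whose mean kinetic energy is mean_energy_ev (in eV)."""
    return mean_energy_ev / (1.5 * K_BOLTZMANN_EV)

t_sprite = electron_temperature(5.5)  # the 5.5 eV estimate from the study
```

The 5.5 eV characteristic energy corresponds to an effective electron temperature of roughly 4 × 10^4 K, far above the neutral mesospheric temperature, as expected for electrons accelerated by the sprite electric field.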
- ItemOpen AccessNeural networks in control engineering(1994) Trossbach, W; Braae, M.The purpose of this thesis is to investigate the viability of integrating neural networks into control structures. These networks are an attempt to create artificial intelligent systems with the ability to learn and remember. They mathematically model the biological structure of the brain and consist of a large number of simple interconnected processing units emulating brain cells. Due to the highly parallel and consequently computationally expensive nature of these networks, intensive research in this field has only become feasible with the availability of powerful personal computers in recent years. Consequently, attempts at exploiting the attractive learning and nonlinear optimization characteristics of neural networks have been made in most fields of science and engineering, including process control. The control structures suggested in the literature for the inclusion of neural networks in control applications can be divided into four major classes. The first class includes approaches in which the network forms part of an adaptive mechanism which modulates the structure or parameters of the controller. In the second class the network forms part of the control loop and replaces the conventional control block, thus leading to a pure neural network control law. The third class consists of topologies in which neural networks are used to produce models of the system which are then utilized in the control structure, whilst the fourth category includes suggestions which are specific to the problem or system structure and not suitable for a generic neural network-based approach to control problems. Although several of these approaches show promising results, only model based structures are evaluated in this thesis.
This is due to the fact that many of the topologies in other classes require system estimation to produce the desired network output during training, whereas the training data for network models is obtained directly by sampling the system input(s) and output(s). Furthermore, many suggested structures lack the mathematical motivation to consider them for a general structure, whilst the neural network model topologies form natural extensions of their linear model based origins. Since it is impractical and often impossible to collect sufficient training data prior to implementing the neural network based control structure, the network models have to be suited to on-line training during operation. This limits the choice of network topologies for models to those that can be trained on a sample-by-sample basis (pattern learning) and furthermore are capable of learning even when the variation in training data is relatively slow, as is the case for most controlled dynamic systems. A study of feedforward topologies (one of the main classes of networks) shows that the multilayer perceptron network with its backpropagation training is well suited to model nonlinear mappings but fails to learn and generalize when subjected to slowly varying training data. This is due to the global input interpretation of this structure, in which any input affects all hidden nodes such that no effective partitioning of the input space can be achieved. This problem is overcome in a less flexible feedforward structure, known as the regular Gaussian network. In this network, the response of each hidden node is limited to a sphere around its center and these centers are fixed in a uniform distribution over the entire input space. Each input to such a network is therefore interpreted locally and only affects nodes with their centers in close proximity.
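The locality property of the regular Gaussian network described above can be sketched in a few lines: Gaussian hidden nodes with fixed, uniformly spaced centers respond appreciably only to inputs near their centers, unlike the multilayer perceptron's global response. The grid spacing and width below are illustrative choices, not values from the thesis:

```python
import math

# Fixed, uniformly distributed centers over a 1-D input space [0, 1],
# as in the regular Gaussian network. The width SIGMA is illustrative.
CENTERS = [i / 10 for i in range(11)]
SIGMA = 0.08

def activations(x):
    """Gaussian response of every hidden node to input x; only nodes
    whose centers lie near x respond appreciably, giving the local
    input interpretation (and hence input-space partitioning) that the
    multilayer perceptron lacks."""
    return [math.exp(-((x - c) ** 2) / (2 * SIGMA ** 2)) for c in CENTERS]

acts = activations(0.3)
```

An input at 0.3 drives the node centered there to full activation while nodes far away stay effectively silent, so on-line training with slowly varying data only adjusts weights in the active region instead of disturbing the whole mapping.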
A deficiency common to all feedforward networks, when considered as models for dynamic systems, is their inability to conserve previous outputs and states for future predictions. Since this absence of dynamic capability requires the user to identify the order of the system prior to training and is therefore not entirely self-learning, more advanced network topologies are investigated. The most versatile of these structures, known as a fully recurrent network, re-uses the previous state of each of its nodes for subsequent outputs. However, despite its superior modelling capability, the tests performed using the Williams and Zipser training algorithm show that such structures often fail to converge and require excessive computing power and time when increased in size. Despite its rigid structure and lack of dynamic capability, the regular Gaussian network produces the most reliable and robust models and was therefore selected for the evaluations in this study. To overcome the network initialization problem found when using a pure neural network model, a combination structure in which the network operates in parallel with a mathematical model is suggested. This approach allows the controller to be implemented without any prior network training and initially relies purely on the mathematical model, much like conventional approaches. The network portion is then trained during on-line operation in order to improve the model. Once trained, the enhanced model can be used to improve the system response, since model exactness plays an important role in the control action achievable with model based structures. The applicability of control structures based on neural network models is evaluated by comparing the performance of two network approaches to that of a linear structure, using a simulation of a nonlinear tank system.
The first network controller is developed from the internal model control (IMC) structure, which includes a forward and inverse model of the system to be controlled. Both models can be replaced by a combination of mathematical and neural topologies, the network portion of which is trained on-line to compensate for the discrepancies between the linear model and the nonlinear system. Since the network has no dynamic capacity, former system outputs are used as inputs to the forward and inverse model. Due to this direct feedback, the trained structure can be tuned to perform within limits not achievable using a conventional linear system. As mentioned previously, the IMC structure uses both forward and inverse models. Since the control law requires that these models are exact inverses, an iterative inversion algorithm has to be used to improve the values produced by the inverse combination model. Due to deadtimes and right-half-plane zeroes, many systems are furthermore not directly invertible. Whilst such unstable elements can be removed from mathematical models, the inverse network is trained directly from the forward model and cannot be compensated. These problems could be overcome by a control structure for which only a forward model is required. The neural predictive controller (NPC) presents such a topology. Based on the optimal control philosophy, this structure uses a model to predict several future outputs. The errors between these and the desired output are then collected to form the cost function, which may also include other factors such as the magnitude of the change in input. The input value that optimally fulfils all the objectives used to formulate the cost function can then be found by locating its minimum. Since the model in this structure includes a neural network, the optimization cannot be formulated in a closed mathematical form and has to be performed using a numerical method.
For the NPC topology, as for the neural network IMC structure, former system outputs are fed back to the model, and again the trained network approach produces results not achievable with a linear model. Due to the single network approach, the NPC topology furthermore overcomes the limitations described for the neural network IMC structure and can be extended to include multivariable systems. This study shows that the nonlinear modelling capability of neural networks can be exploited to produce learning control structures with improved responses for nonlinear systems. Many of the difficulties described are due to the computational burden of these networks and associated algorithms. These are likely to become less significant due to the rapid development in computer technology and advances in neural network hardware. Although neural network based control structures are unlikely to replace the well understood linear topologies, which are adequate for the majority of applications, they might present a practical alternative where (due to nonlinearity or modelling errors) the conventional controller cannot achieve the required control action.
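The NPC idea — predict several future outputs with a model, collect the tracking errors and an input-change penalty into a cost, and numerically pick the minimizing input — can be sketched on a nonlinear tank like the one used in the thesis's evaluation. For clarity, a known analytic tank model stands in for the trained network model, the minimization is a simple grid search rather than the thesis's numerical method, and all constants are illustrative:

```python
import math

DT = 0.1       # Euler integration step
A_OUT = 1.0    # outflow coefficient of the tank
HORIZON = 5    # number of predicted future outputs
LAMBDA = 0.01  # weight on the magnitude of the change in input
U_GRID = [i * 0.25 for i in range(21)]  # candidate inputs 0..5

def tank_step(h, u):
    """One Euler step of the nonlinear tank level: dh/dt = u - a*sqrt(h)."""
    return max(h + DT * (u - A_OUT * math.sqrt(h)), 0.0)

def cost(h, u, u_prev, setpoint):
    """Sum of squared predicted tracking errors over the horizon
    (holding the candidate input constant) plus an input-change penalty."""
    c = LAMBDA * (u - u_prev) ** 2
    for _ in range(HORIZON):
        h = tank_step(h, u)
        c += (h - setpoint) ** 2
    return c

def npc_control(h, u_prev, setpoint):
    """Receding-horizon step: apply the input with minimum predicted cost."""
    return min(U_GRID, key=lambda u: cost(h, u, u_prev, setpoint))

# Closed loop: regulate the level from h=1 toward the setpoint h=4.
h, u = 1.0, 0.0
for _ in range(300):
    u = npc_control(h, u, 4.0)
    h = tank_step(h, u)
```

At each sample only the first input of the minimizing sequence is applied and the optimization repeats, which is the receding-horizon behaviour the NPC shares with other optimal-control-based predictive schemes.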
- ItemOpen AccessPerformance comparison of reflector and AESA-based digital beamforming for small satellite spaceborne SAR(2019) Gema, Kevin; Inggs, Michael; Gaffar, Mohammed Yunus AbdulSpaceborne Synthetic Aperture Radar (SAR) sensors play an increasingly important role in Earth observation in the fields of science, geomatics, defence, and commercial products and services. The user community's requirements for large, high temporal and spatial resolution swaths have driven the need for low-cost, high-performance systems. The increasing availability of commercial launch vehicles will bolster the manufacturing and industrialisation of a smaller class of sensor. This work deals with the performance comparison between a small satellite class planar array and reflector antenna system. Here the focus lies on digital beamforming techniques for operation in wide-swath, high-resolution stripmap mode. For this, the sensor sensitivity and ambiguity suppression performance in range and azimuth are derived. The Jupyter notebook environment with code in the Python language served as a convenient mechanism for modelling and verifying different performance aspects. These performance metrics are simulated and verified against existing systems. The limitations that the spherical Earth geometry imposes on the transmitter timing and the imaged scene are derived. This, together with the SAR platform orbital characteristics, leads to the establishment of antenna design constraints. A planar array and reflector system are modelled with common design specifications and compared for a sea ice monitoring scenario. The use of digital beamforming techniques together with a high gain reflector antenna surface provided evidence that a reflector antenna would serve as a feasible alternative to planar arrays for spaceborne SAR missions.
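The transmitter-timing constraint mentioned above comes down to pulse timing: all echoes from the imaged swath must arrive between transmit events, which bounds the slant-range extent by the pulse repetition frequency (PRF). The sketch below uses simplified flat-geometry timing with illustrative numbers, whereas the thesis additionally accounts for spherical Earth geometry:

```python
C = 299_792_458.0  # speed of light (m/s)

def unambiguous_range(prf_hz):
    """Maximum slant-range extent of one echo window: all echoes from a
    pulse must return before the next pulse is transmitted."""
    return C / (2.0 * prf_hz)

def swath_fits(prf_hz, near_range_m, far_range_m):
    """Check that the near and far edges of the imaged swath fall inside
    the same inter-pulse echo window (simplified timing; no account of
    spherical Earth geometry or nadir-echo blanking)."""
    window = unambiguous_range(prf_hz)
    return (far_range_m - near_range_m) < window and \
           (near_range_m // window) == (far_range_m // window)

# Illustrative low-Earth-orbit numbers: a 3 kHz PRF gives roughly a
# 50 km unambiguous slant-range window.
window_3khz = unambiguous_range(3000.0)
```

This is the basic trade that couples PRF, swath width, and range ambiguity suppression: raising the PRF helps azimuth ambiguities but shrinks the usable echo window, which is what motivates the digital beamforming techniques compared in the thesis.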
- ItemOpen AccessPerformance of narrow band internet of things (NBIoT) networks(2019) Bhebhe, Mbongeni; Winberg, Simon; Zaidi, YaseenNarrow Band Internet of Things (NBIoT) is a Low Power Wide Area Network (LPWAN) technology that has been standardised by 3GPP in Release 13 to work in cellular networks [15]. The main characteristics of NBIoT are its extended coverage compared to other cellular technologies such as LTE, and its high capacity due to its narrow channel bandwidth of 180 kHz, which also supports the possibility of these devices having a long battery life of up to 10 years, as well as low device complexity - all of which result in low device costs [2]. NBIoT can be deployed in one of three different options, namely: a) standalone, b) in-band and c) guard band deployment mode. These characteristics of NBIoT make it very useful in the IoT industry, allowing the technology to be used in a wide range of applications, such as health, smart cities, farming, wireless sensor networks and many more [1] [25]. NBIoT can be used to realise the maximum possible spectral efficiency, thereby increasing the capacity of the network. Penetration of NBIoT in the market has dominated other LPWANs like Sigfox and LoRa, with NBIoT having a technology share of close to 50 percent [31]. This study is aimed at exploring the deployment options of NBIoT and determining how network operators can realise the greatest value for their investment by efficiently utilising their allocated spectrum. The main target is to derive the best parameter combination for deployment of the NBIoT network with acceptable error rates in both the uplink and the downlink. Different characteristics of NBIoT were discussed in this study, and the performance of the various approaches investigated to determine their efficiency in relation to the needs of the IoT industry. The error rates of NBIoT, when used in an existing LTE network, were the main focus of this study.
Software simulations were used to compare the different parameter settings to see which options provide the best efficiency and cost trade-offs for structuring an NBIoT network. The results of the tests done in this study showed that the error rates are lower for standalone deployment mode than for in-band mode, mainly due to there being less interference in standalone mode than in in-band mode. The results also show that data transmitted with a smaller Transport Block Size (TBS) in the Down Link (DL) has fewer errors than data transmitted in larger blocks. The results further show that the error rate decreases as the number of subframe repetitions increases in the downlink, mainly due to the redundancy of sending the same data multiple times. In the uplink, however, the results show that the error rates are comparable when the signal quality is poor.
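The benefit of subframe repetition reported above can be demonstrated on a toy channel: repeating a symbol R times over additive white Gaussian noise and averaging the received copies raises the effective SNR, so more repetitions give fewer errors. This is a generic BPSK Monte Carlo sketch, not a model of the NBIoT physical layer:

```python
import random

# Toy demonstration of repetition gain: BPSK (+1/-1) over AWGN with
# R-fold repetition and simple averaging ("combining") at the receiver.
random.seed(1)

def bit_error_rate(repetitions, n_bits=5000, noise_sigma=1.0):
    """Empirical bit error rate for BPSK with repetition combining."""
    errors = 0
    for _ in range(n_bits):
        symbol = random.choice((-1.0, 1.0))
        # Average the R noisy copies; noise variance drops by a factor R.
        received = sum(symbol + random.gauss(0.0, noise_sigma)
                       for _ in range(repetitions)) / repetitions
        if (received > 0) != (symbol > 0):
            errors += 1
    return errors / n_bits

ber_1 = bit_error_rate(1)
ber_4 = bit_error_rate(4)
```

Four repetitions quarter the effective noise power (a 6 dB gain), cutting the error rate substantially at this SNR — the same redundancy mechanism behind the downlink result in the study.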
- ItemOpen AccessPower consumption and costing of crop sensing systems for monitoring common Western Cape crops growth(2022) Damilare, Dunmoye Isaac; Winberg, Simon; Awodele, KehindeIn recent years, agricultural practices have been influenced to an ever-increasing extent by Industry 4.0 trends. While there is much fear of what this type of industrialization implies, these fears are often misplaced: full automation, where robots take over, is just one form of industrialization and is not likely to happen on a wide scale in farming contexts anytime soon. However, methods such as accurate and large-scale sensing, tracking of production, and applying data science to problems of crop and livestock health are certain to become increasingly widespread. This project investigates the power consumption and costs of a farm sensing system designed around these new approaches to production. It proposes a cost-effective solution for monitoring and controlling agricultural production, and provides a comprehensive analysis of the contextual complications, the mechanisms needed to realize the system, and its cost and anticipated power consumption, in order to deliver an advisory system for farmers. The power consumed by the temperature sensor, pH sensor, and soil moisture sensor during different stages of growth was investigated. Among the considered crops, spinach growth monitoring consumed the least amount of power. The highest amount of power was consumed during garlic growth monitoring. Considering the time of crop growth, spinach took just two months to mature and required less monitoring. The cost of monitoring spinach was $34.38, $1.13, and $0.44 using nickel-cadmium batteries, solar (PV), and the electric grid, respectively. The overall cost and power consumed increased with the period from germination to maturity. The highest power was consumed by garlic, which took up to six months.
The highest energy was consumed by the carrot's pH sensor, followed by the onion's pH sensor. This reflects how important it is that carrot and onion soil pH be kept at 5.0-6.0 and 5.5-6.5 respectively throughout the growing process. Hence the soil pH is most prominent in carrots, onions, and fresh green pepper. The soil moisture is slightly more prominent than the temperature in garlic, onions, spinach, and carrots, while the temperature is slightly more prominent than soil moisture in sweet corn and fresh green chili pepper. Given the investigated power consumed by sensors monitoring crop growth, and the costs associated with the energy sources considered, assistive technologies can be provided to support farmers in their existing practices.
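The costing approach described above reduces to a simple energy calculation: a sensor's energy over the monitoring period is its average power times the time from planting to maturity, and the cost depends on the energy source's effective price per kWh. All powers, durations, and tariffs below are placeholders for illustration, not the figures measured in the study:

```python
# Back-of-envelope sketch of the crop-monitoring cost calculation.
# Tariffs are hypothetical effective costs of delivered energy ($/kWh);
# battery energy is far more expensive per kWh than grid energy, which
# is why the three sources rank as they do in the study.
TARIFF_PER_KWH = {
    "nickel_cadmium": 60.0,
    "solar_pv": 2.0,
    "grid": 0.15,
}

# Hypothetical average sensor powers in watts.
SENSOR_POWER_W = {"temperature": 0.05, "ph": 0.12, "soil_moisture": 0.08}

def monitoring_cost(months, source):
    """Cost ($) of powering all three sensors continuously for the
    crop's growing period from the given energy source."""
    hours = months * 30 * 24
    energy_kwh = sum(SENSOR_POWER_W.values()) * hours / 1000.0
    return energy_kwh * TARIFF_PER_KWH[source]

spinach_grid = monitoring_cost(2, "grid")   # ~2 months to maturity
garlic_grid = monitoring_cost(6, "grid")    # ~6 months to maturity
```

With cost linear in monitoring time, a six-month crop like garlic costs three times as much to monitor as a two-month crop like spinach from the same source, matching the study's observation that cost and energy grow with the period from germination to maturity.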