Browsing by Author "Mishra, Amit"
Now showing 1 - 15 of 15
- Item (Open Access): Advanced analytics for process analysis of turbine plant and components (2019). Maharajh, Yashveer; Rousseau, Pieter; Mishra, Amit. This research investigates the use of an alternate means of modelling the performance of a train of feed water heaters in a steam cycle power plant, using machine learning. The goal of this study was to use a simple artificial neural network (ANN) to predict the behaviour of the plant system, specifically the inlet bled steam (BS) mass flow rate and the outlet water temperature of each feedwater heater. The output of the model was validated through the use of a thermofluid engineering model built for the same plant. Another goal was to assess the ability of both the thermofluid model and the ANN model to predict plant behaviour under out-of-normal operating circumstances. The thermofluid engineering model was built in FLOWNEX® SE using existing custom components for the various heat exchangers. The model was then tuned to current plant conditions by catering for plant degradation and maintenance effects. The artificial neural network was of a multi-layer perceptron (MLP) type, using the rectified linear unit (ReLU) activation function, mean squared error (MSE) loss function and adaptive moments (Adam) optimiser. It was constructed using the Python programming language. The ANN model was trained using the same data as the FLOWNEX® SE model. Multiple architectures were tested, with the optimum model having two layers of 200 nodes (neurons) each and a batch size of 500, running over 100 epochs. This configuration attained a training accuracy of 0.9975 and a validation accuracy of 0.9975. When used on a test set and to predict plant performance, it achieved an MSE of 0.23 and 0.45, respectively. Under normal operating conditions (six cases tested), the ANN model performed better than the FLOWNEX® SE model when compared to actual plant behaviour.
Under out-of-normal conditions (four cases tested), the FLOWNEX® SE model performed better than the ANN. It is evident from its poor performance in the out-of-normal scenarios that the ANN model was unable to capture the "physics" of a heat exchanger or the feed heating process. Further tuning by way of alternate activation functions and regularisation techniques had little effect on the ANN model's performance. The ANN model was able to accurately predict an out-of-normal case only when it was trained to do so. This was achieved by augmenting the original training data with the inputs and results from the FLOWNEX® SE model for the same case. The conclusion drawn from this study is that this type of simple ANN model is able to predict plant performance so long as it is trained for it. The validity of the prediction is highly dependent on the integrity of the training data. Operating outside the range for which the model was trained will result in inaccurate predictions. It is recommended that out-of-normal scenarios commonly experienced by the plant be synthesised by engineering modelling tools like FLOWNEX® SE to augment the historic plant data. This provides a wider spectrum of training data, enabling more generalised and accurate predictions from the ANN model.
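The MLP regression setup described in this abstract (two hidden layers of 200 ReLU units, Adam optimiser, squared-error loss, batch size 500, 100 epochs) can be sketched with scikit-learn. The data below is a synthetic stand-in, not the plant data, and the input/output choices are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for plant data: 4 inputs (e.g. load, pressures,
# temperatures) mapped to 2 outputs (bled-steam mass flow and outlet
# water temperature). Illustrative values only.
X = rng.uniform(0.0, 1.0, size=(2000, 4))
y = np.column_stack([
    2.0 * X[:, 0] + X[:, 1],          # proxy for BS mass flow rate
    50.0 + 30.0 * X[:, 2] + X[:, 3],  # proxy for outlet temperature
])

# Standardise inputs and outputs so the optimiser sees O(1) quantities.
x_scaler, y_scaler = StandardScaler(), StandardScaler()
Xs, ys = x_scaler.fit_transform(X), y_scaler.fit_transform(y)

# Architecture reported in the abstract: two hidden layers of 200 ReLU
# units, Adam optimiser, squared-error loss, batch size 500, 100 epochs.
model = MLPRegressor(hidden_layer_sizes=(200, 200), activation="relu",
                     solver="adam", batch_size=500, max_iter=100,
                     random_state=0)
model.fit(Xs, ys)

r2 = model.score(Xs, ys)  # coefficient of determination on training data
pred = y_scaler.inverse_transform(model.predict(Xs))
```

As the abstract notes, such a model only generalises within the range of its training data, so the fit above says nothing about out-of-normal behaviour.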
- Item (Open Access): Calibration of airborne L-, X-, and P-band fully polarimetric SAR systems using various corner reflectors (2017). Algafsh, Abdullah; Inggs, Michael R; Mishra, Amit. Synthetic aperture radar polarimetry is one of the current developments in the field of remote sensing, due to its ability to deliver more information on the physical properties of the surface. It is known as the science of acquiring, processing and analysing the polarisation state of an electromagnetic field. The increase of information with respect to scalar radar comes at a price, not only in the high cost of building the radar system and processing the data, or the increased complexity of the design, but also in the amount of effort needed to calibrate the data. Synthetic aperture radar polarimetric calibration is an essential pre-processing stage for the correction of distortions caused by system inaccuracies as well as atmospheric effects. Our goal, with this thesis, is to use multiple passive point targets to establish the differences between fully and compact polarimetric synthetic aperture radar systems in terms of both calibration and the effects of penetration. First, we detail the selection, design, manufacture, and deployment of different passive point targets in the field for acquiring X- and P-band synthetic aperture radar data in the Netherlands. We start by presenting the selection and design of multiple passive point targets. These were a combination of classic trihedral and dihedral corner reflectors, as well as gridded trihedral and dihedral corner reflectors. Additionally, we detail the construction of these corner reflectors. The constructed corner reflectors totalled sixteen: six for X-band and six for P-band, as well as four gridded corner reflectors for X-band. Finally, we present the deployment of the corner reflectors at three different sites with carefully surveyed and oriented positions.
Then, we present the calibration of three different fully polarimetric synthetic aperture radar sensors. The first sensor is the L-band synthetic aperture radar sensor, for which we acquired data using two square trihedral corner reflectors. The calibration includes an evaluation of two crosstalk methods, the Quegan and the Ainsworth methods. The results showed that the crosstalk parameters for the Quegan method are all between -17 dB and -21 dB before calibration, with a small improvement in the range of 3 dB after calibration. The Ainsworth method shows around -20 dB before calibration, and around -40 dB after calibration. Moreover, the phase, channel imbalance, and radiometric calibration were corrected using the two corner reflectors. The other two sensors are X- and P-band synthetic aperture radar sensors, for which we acquired polarimetric data using our sixteen corner reflectors. The calibration includes crosstalk estimation and correction using the Ainsworth method; the results showed that the crosstalk parameters before calibration for X-band are around -23 dB, and around -43 dB after calibration, while the crosstalk parameters before calibration for P-band are around -10 dB, and around -30 dB after calibration. The calibration also includes the phase, channel imbalance, and radiometric calibration, as well as geometric correction and signal-to-noise ratio measurement, for both X- and P-band. Next, we present the performance of gridded trihedral and dihedral corner reflectors using an X-band synthetic aperture radar system. The results showed that both gridded trihedral and dihedral reflectors are perfect targets for correcting the amplitude compared to classical corner reflectors; however, it is not possible to use the gridded reflectors to correct the phase, as we need a return from two channels to have a zero-phase difference between the polarisation channels H - V.
Furthermore, we detail the compact polarimetric calibration over three compact polarimetric modes using a square trihedral corner reflector for the X-band dataset. The results showed no change in the π/4 mode, while a 90° phase bias showed in the CTLR mode. Finally, the DCP mode showed a 64.43° phase difference, which was corrected to have a zero phase, and the channel imbalance was very high at 45.92; the channels were adjusted to have a channel imbalance of 1. Finally, an experiment to measure the penetration and reduction of the P-band signal from a synthetic aperture radar system was performed using two triangular trihedral corner reflectors, both with 1.5 m inner leg dimensions. The first triangular trihedral corner reflector was deployed in a deciduous grove of trees, while the other was deployed 10 m away on a grass-covered field. After system calibration based on the reflector in the clear, the results showed a reduction of 0.6 dB in the HH channel, and of 2.28 dB in the VV channel. The larger attenuation at VV is attributable to the vertical structure of the trees. Additionally, we measured the polarimetric degradation of the triangular trihedral corner reflector immersed in vegetation (trees). After calibration, the co-polarisation phase difference is zero degrees for the triangular corner reflector outside the trees, and 62.85° for the corner reflector inside the trees. The designed and fabricated X- and P-band SAR can work operationally with the calibration parameters obtained in this thesis. The data generated through the calibration experiments can be exploited for further applications.
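The channel-imbalance step in this kind of calibration rests on the fact that a trihedral corner reflector ideally returns HH = VV with zero co-polarisation phase difference. A minimal numpy sketch of estimating and removing a complex channel imbalance from reflector returns follows; the imbalance values and noise level are assumed for illustration, not taken from the thesis data:

```python
import numpy as np

rng = np.random.default_rng(1)

# A trihedral corner reflector ideally returns HH = VV with zero co-pol
# phase difference, so any measured amplitude or phase offset between
# the channels is attributed to the system itself.
n = 8                                   # looks at the reference reflector
true_hh = np.ones(n, dtype=complex)     # idealised HH returns
imbalance = 1.4 * np.exp(1j * np.deg2rad(64.0))  # assumed distortion
noise = 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
measured_vv = true_hh * imbalance + noise

# Estimate the complex channel imbalance from the reflector returns and
# divide it out of the VV channel.
f_est = np.mean(measured_vv / true_hh)
vv_corrected = measured_vv / f_est

amp_db = 20.0 * np.log10(np.abs(f_est))     # amplitude imbalance in dB
phase_deg = np.rad2deg(np.angle(f_est))     # phase imbalance in degrees
```

Crosstalk estimation (Quegan, Ainsworth) additionally uses the cross-polar channels and distributed targets, which this sketch deliberately omits.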
- Item (Open Access): Detection and Analysis of Molten Aluminium Cleanliness Using a Pulsed Ultrasound System (2022). Matlala, Kgothatso; Mishra, Amit. This document presents the development of a solution for the analysis and detection of molten metal quality deviations. The data is generated by an MV20/20, an ultrasound sensor that detects inclusions: molten metal defects that affect the quality of the product. The data is then labelled by assessing the sample using metallography. The analysis provides the sample outcome and dominant inclusion. The business objectives for the project include the real-time classification of anomalous events by means of a supervised classifier for the metal quality outcome, and a classifier for the inclusion type responsible for low quality. The adopted methodology involves descriptive, diagnostic and predictive analytics. Once the data is statistically profiled, it is standardised and scaled to unit variance in order to compensate for the different units of the descriptors. Principal components analysis is applied as a dimensionality reduction technique, and it is found that the first three components account for 99.6% of the variance of the dataset. In order for the system to have predictive ability, two modelling approaches are considered, namely Response Surface Methodology and supervised machine learning. Supervised machine learning is preferred as it offers more flexibility than a polynomial approximator, and it is more accurate. Four classifiers are built, namely logistic regression, support vector machine, multi-layer perceptron and a radial basis function network. The hyperparameters are tuned using 10-fold repeated cross-validation. The multi-layer perceptron offers the best performance in all cases. For determining the quality outcome of a cast (passed or failed), all the models perform according to business targets for accuracy, precision, sensitivity and specificity.
For the inclusion type classification, the multi-layer perceptron performs within 5% of the target metrics. In order to optimise the model, a grid search is performed for optimal parameter tuning. The results offer negligible improvement, which indicates that the model has reached a maximum in the hyperparameter search space. It is noted that a source of variance in the inclusion type response data is attributed to operator error during labelling of the dataset, among several other sources of variance. It is therefore recommended that a Gage R&R study be performed in order to identify sources of variation, among other improvement recommendations. From a research perspective, a vision system is recommended for assessing metal colour, texture and other visual properties in order to provide more insights. Another possible research extension recommended is the use of Fourier Transform Infrared Spectroscopy to determine signatures of the clean metal and different inclusions for detection. The project is regarded as a success, as the business metrics are met by the solution.
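The standardise-then-reduce step described in this abstract (unit-variance scaling to compensate for mixed units, followed by PCA with the first three components carrying almost all the variance) can be sketched as follows. The data is synthetic, built so that three latent factors drive six descriptors, not the ultrasound data itself:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the ultrasound descriptors: 6 columns in mixed
# units, where most variance comes from three underlying factors.
latent = rng.standard_normal((500, 3))
mix = rng.standard_normal((3, 6))
X = latent @ mix + 0.05 * rng.standard_normal((500, 6))

# Standardise to unit variance (the descriptors have different units),
# then reduce dimensionality with PCA, as described in the abstract.
Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=3).fit(Xs)

explained = pca.explained_variance_ratio_.sum()
Z = pca.transform(Xs)   # 3-D representation fed to the classifiers
```

On real descriptors the explained-variance curve would be inspected to choose the number of components, rather than fixing three in advance.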
- Item (Open Access): GSM based Communication-Sensor (CommSense) System (2018). Bhatta, Abhishek; Mishra, Amit. Using communication signals for radar applications has been a major area of research in radar engineering. In recent years, due to widely available wireless signals, a new area of research called commensal radar has emerged. Commensal radars use available wireless Radio Frequency (RF) signals to detect and track targets of interest. This is achieved by placing two antennas, one pointed towards the transmitting base station and the other towards the surveillance area. The signals received by these two antennas are correlated to determine the location and velocity of the target. When a signal passes through a channel, it reflects off the obstacles within its path. These reflections usually degrade the quality of the signal and cause interference to telecommunication systems. To mitigate the effects of the channel on a signal, these systems transmit a known bit sequence within each frame. Our goal, with this thesis, is to design and implement a working prototype of a novel architecture for a commensal radar system, which uses these known bit sequences to extract the channel information and determine events of interest. The major novelties of the system are as follows. Firstly, the system is built upon existing communication systems using Software Defined Radio (SDR) technology. Secondly, the design eliminates the need for a reference antenna, which reduces the cost of the system and creates an opportunity to make the system portable. We name this system Communication-Sensing (CommSense). Since our plan is to use the Global System for Mobile Communication (GSM) as the parent system for the prototype development, we update the name to the GSM based Communication-Sensing (GSM-CommSense) system. This thesis begins with a theoretical analysis of the feasibility of the GSM-CommSense system.
First of all, we perform a link budget analysis to determine the power requirements for the system. Then we calculate the ambiguity function and Cramér-Rao Lower Bound (CRLB) for a two-path received signal model. With encouraging theoretical results, we design a prototype of the system that can capture real GSM base station broadcast signals. After the design of the GSM-CommSense system, we capture channel data from multiple locations with varying environmental conditions. The aim of this set of experiments is to be able to distinguish between different environmental conditions. We then perform statistical analysis on the data by means of Probability Density Function (PDF) fitting, the chi-square goodness-of-fit test and Principal Components Analysis (PCA). We present the results from each analysis and discuss them in detail. Upon receiving positive results in each step, we move towards using learning algorithms to categorise the data captured by the system. We compare two widely accepted supervised learning algorithms: Support Vector Machines (SVM) and the Multi-Layer Perceptron (MLP). The results showed that, with the current hardware capabilities of the system and the amount of data available per GSM frame, the performance of SVM is better than that of MLP. Thus, we used SVM to classify two events of detection and classification across a wall. We present our findings and discuss the results in detail. We conclude our current work and provide scope for future work in the development and analysis of the GSM-CommSense system.
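An SVM-versus-MLP comparison of the kind described here can be sketched with scikit-learn. The two-class data below is a synthetic stand-in for per-frame channel features from two environments (not GSM-CommSense captures), so the accuracies it produces illustrate the workflow, not the thesis finding that SVM wins at small sample sizes:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for channel-estimate features from two
# environmental conditions (illustrative shift between classes).
n = 200
X0 = rng.normal(0.0, 1.0, size=(n, 8))
X1 = rng.normal(0.8, 1.0, size=(n, 8))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Train both classifiers on the same split and compare test accuracy.
svm = SVC(kernel="rbf").fit(Xtr, ytr)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(Xtr, ytr)

acc_svm = svm.score(Xte, yte)
acc_mlp = mlp.score(Xte, yte)
```

With limited data per GSM frame, the abstract's observation that the SVM generalises better than the MLP is plausible, since kernel methods tend to be less data-hungry than neural networks.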
- Item (Open Access): Investigating the use of hopped frequency waveforms for range and velocity measurements of radar targets (2014). Kathree, Umur; Mishra, Amit. In the field of radar, High Range Resolution (HRR) profiles are often used to improve tracking accuracy in range and to allow the radar system to produce an image of an object. This work focuses on the use of HRR profiles generated using a sub-class of HRR techniques termed hopped frequency and stepped frequency waveforms. These wideband waveforms are usually synthesised by combining the spectra of the transmitted pulses in the burst [1]. When used with hopped frequency waveforms, this adds the advantage of range-Doppler decoupling and robustness against electronic countermeasures (ECM) [2, 3]. However, these waveforms suffer from high sidelobe levels [4], so the spurious-free dynamic range (SFDR) of target measurements had to be improved. This was done with the CLEAN technique [5], which could reduce sidelobe levels to below -60 dB and allow smaller targets masked by the sidelobes to be uncovered. To analyse the practicality of this work, simulated rotating scatterers were used, and these techniques performed adequately for signal-to-noise ratios (SNR) above -10 dB and signal-to-clutter ratios (SCR) above 14 dB. Clutter mitigation is to be investigated in future work to make the approach applicable at lower SCR and for sub-clutter visibility.
- Item (Open Access): LPI Air Defence Noise Radar (2022). Molope, Lazarus Molahlegi; Schonken, Willem; Mishra, Amit. This dissertation researches the feasibility of a Low Probability of Intercept (LPI) Air Defence (AD) noise radar with high range and Doppler resolution. The research is approached by first simulating an S-band LPI noise radar detecting a flying target and determining its range and velocity. The simulated noise radar is then implemented on Universal Software Radio Peripheral (USRP) B210 hardware and tested against flying targets in monostatic mode. The results from hardware detection of real airborne targets are finally compared with the simulated results.
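The principle behind a noise radar is that a random transmit waveform has a sharp autocorrelation, so matched filtering the echo against the stored transmission localises the target delay. A minimal monostatic sketch with assumed, illustrative parameters (not the dissertation's S-band setup):

```python
import numpy as np

rng = np.random.default_rng(0)

c = 3e8      # propagation speed, m/s
fs = 10e6    # sample rate, Hz (assumed for illustration)
n = 4096

# Transmit a random (noise) waveform; the echo is an attenuated,
# delayed copy plus receiver noise.
tx = rng.standard_normal(n)
delay = 120                      # samples; range = delay * c / (2 * fs)
rx = np.zeros(n)
rx[delay:] = 0.1 * tx[:n - delay]
rx += 0.01 * rng.standard_normal(n)

# Matched filtering: cross-correlate the echo with the stored transmit
# waveform and locate the correlation peak.
corr = np.correlate(rx, tx, mode="full")
lag = int(np.argmax(np.abs(corr))) - (n - 1)
range_m = lag * c / (2.0 * fs)
```

Velocity estimation would additionally require Doppler processing over multiple correlation intervals, which this sketch omits.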
- Item (Open Access): Machine Learning for Radio Frequency Interference Flagging (2021). Harrison, Kyle; Mishra, Amit; Taylor, Russ. The field of radio frequency interference (RFI) flagging involves the identification of corrupted data within radio astronomy measurements. This work explores the application of supervised machine learning algorithms for RFI flagging, trained on real measurement data and simulated data with simulated RFI. The goal of this work is to investigate the prediction of RFI using specific machine learning algorithms: the Naive Bayes classifier, the K-Nearest Neighbours classifier, the Random Forest classifier, the U-Net convolutional neural network and the Multilayer Perceptron. These algorithms are trained on real data, in which the ground truth includes inherent false positives, and simulated data, where the ground truth positions of RFI are absolute. This is done using time/frequency spectrogram data from radio astronomy measurements, with the magnitudes and phases of each available polarisation. Predictions for unseen test data are compared between algorithms, between different implementations of those algorithms, and between datasets. A specific data pre-processing stage is designed and implemented, utilising a two-dimensional filtering technique for feature construction. This method injects spatial information from nearby time/frequency samples into each sample of a spectrogram. The inclusion of this spatial information, which is relevant to broadband bursts and narrowband persistent RFI, is hypothesised to increase the level of information present in the processed dataset. Feature construction using filtering techniques demonstrates a noticeable improvement in the machine learning methods in which each sample is treated individually during training and inference.
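One way to realise the feature-construction idea described above is to pair each spectrogram sample with a smoothed value of its time/frequency neighbourhood, so per-sample classifiers see some spatial context. This numpy sketch uses a local-mean filter and synthetic data; the filter choice and window size are assumptions, not the thesis's specific design:

```python
import numpy as np

def local_mean(spec, k=3):
    """Mean over a k x k time/frequency neighbourhood (edge-padded),
    used as an extra feature channel per spectrogram sample."""
    p = k // 2
    padded = np.pad(spec, p, mode="edge")
    out = np.zeros(spec.shape, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + spec.shape[0], dj:dj + spec.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
spec = rng.rayleigh(1.0, size=(64, 128))   # magnitude spectrogram stand-in
spec[:, 40] += 8.0                         # narrowband persistent RFI
spec[20, :] += 8.0                         # broadband burst

# Stack the raw magnitude with its smoothed neighbourhood so each sample
# carries spatial context, then flatten to a per-sample feature matrix.
features = np.stack([spec, local_mean(spec, 3)], axis=-1).reshape(-1, 2)
```

The smoothed channel responds to the line and burst structure even at samples whose raw value alone is ambiguous, which is the intuition the abstract gives for per-sample classifiers improving with this feature.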
- Item (Open Access): Optimisation of Rail-road Level Crossing Closing Time in Heterogeneous Railway Traffic: Towards Safety Improvement - A South African Case Study (2020). Tshaai, Dineo Christina; Mishra, Amit; Boje, Edward. The gravitation towards mobility-as-a-service in railway transportation can be achieved at low cost and effort using a shared railway network. However, the problem with shared networks is the presence of level crossings, where railway and road traffic intersect. Thus, long waiting times are expected at level crossings due to increases in traffic volume and heterogeneity. Furthermore, safety and capacity can be severely compromised by long level crossing closing times. The emphasis of this study is to optimise the rail-road level crossing closing time in order to achieve improved safety and capacity in a heterogeneous railway network. It is imperative to note that a rail-road level crossing is both a socio-technical and a safety-critical system, which often impedes improvement efforts. Therefore, a thorough understanding of the factors with the highest influence on the level crossing closing time is required. To this end, data analysis was conducted on eight active rail-road level crossings on the southern corridor of the Western Cape metro rail. Spatial, temporal and behavioural analyses were conducted to extract features influencing the level crossing closing time. A convex optimisation problem with the objective of minimising the level crossing closing time is formulated, taking the identified features into account. The objective function is constrained by the train's traction characteristics along the constituent segments of the rail-road level crossing, the speed restriction and the headway time. The results show that the developed solution guarantees up to 53.2% and 62.46% reduction in the level crossing closing time for zero and nonzero dwell time, respectively.
Moreover, the correctness of the presented solution has been validated based on the time lost at the level crossing and railway traffic capacity consumption. The presented solution has been shown to achieve up to 50% recovery of the time lost per train trip and at least 15% improvement in capacity under normal conditions. Additionally, a 27% capacity improvement is achievable at peak times, and this can increase depending on the severity of the headway constraints. However, convex optimisation of the level crossing closing time still falls short at level crossings with nonzero dwell time, due to the approximation of the dwell time based on the anticipated rather than the actual value.
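The role that traction characteristics, the speed restriction and dwell time play in the closing time can be illustrated with a back-of-envelope kinematic estimate. This is not the thesis's convex optimiser; all distances, speeds and the acceleration value are assumed for illustration:

```python
def closing_time(d_approach, d_crossing, l_train, v_max, a=0.8, dwell=0.0):
    """Illustrative closing-time estimate: the boom closes when the
    train is d_approach metres away and opens once the full train has
    cleared the crossing. A train restarting after a dwell accelerates
    at a (m/s^2) up to the speed restriction v_max (m/s); a through
    train passes at v_max. All parameters are assumed values."""
    d = d_approach + d_crossing + l_train
    if dwell > 0.0:
        # Restart from rest: accelerate, then cruise once v_max is hit.
        d_acc = v_max ** 2 / (2.0 * a)
        if d <= d_acc:
            t_move = (2.0 * d / a) ** 0.5
        else:
            t_move = v_max / a + (d - d_acc) / v_max
        return dwell + t_move
    return d / v_max

# Through train vs. a train dwelling at a nearby platform.
t_through = closing_time(300.0, 20.0, 200.0, 25.0)
t_stop = closing_time(300.0, 20.0, 200.0, 25.0, dwell=30.0)
```

Even this crude model shows why the nonzero-dwell case dominates the closing time and why approximating the dwell from anticipated rather than actual values, as the abstract notes, limits the achievable reduction.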
- Item (Open Access): A scalable real-time processing chain for radar exploiting illuminators of opportunity (2014). Tong, Craig Andrew; Inggs, Michael; Mishra, Amit. This thesis details the design of a processing chain and system software for a commensal radar system, that is, a radar that makes use of illuminators of opportunity to provide the transmitted waveform. The stages of data acquisition from the receiver back-end, direct path interference and clutter suppression, range/Doppler processing and target detection are described and targeted to general-purpose commercial off-the-shelf computing hardware. A detailed low-level design of such a processing chain for commensal radar, which includes both the processing stages and the processing stage interactions, has, to date, not been presented in the literature. Furthermore, a novel deployment configuration for a networked multi-site FM broadcast band commensal radar system is presented, in which the reference and surveillance channels are recorded at separate locations.
- Item (Open Access): SHARC Buoy: Robust firmware design for a novel, low-cost autonomous platform for the Antarctic Marginal Ice Zone in the Southern Ocean (2021). Jacobson, Jamie Nicholas; Verrinder, Robyn; Mishra, Amit; Vichi, Marcello. Sea ice in the Antarctic Marginal Ice Zone (MIZ) plays a pivotal role in regulating heat and energy exchange between the oceanic and atmospheric systems that drive global climate. Current understanding of Southern Ocean sea ice dynamics is poor, with temporal and spatial gaps in critical seasonal datasets. The lack of in situ environmental and wave data from the MIZ in the Antarctic region drove the development of UCT's first-generation in situ ice-tethered measurement platform, as part of a larger UCT and NRF SANAP project on realistic modelling of the Marginal Ice Zone in the changing Southern Ocean (MISO). This thesis focuses on the firmware development for the device and the design process taken to obtain key measurements for understanding sea ice dynamics and increasing sensing capabilities in the Southern Ocean. The buoy was required to survive the Antarctic climate and contained a global positioning system, temperature sensor, digital barometer and inertial measurement unit to measure waves-in-ice. Power was supplied by a power supply unit consisting of commercial-grade batteries in series with a temperature-resistant low-dropout regulator, and a power sensor to monitor the module. A satellite modem transmitted data through the Iridium satellite network. Finally, flash chips provided permanent data storage. Firmware and peripheral driver files were written in C for an STMicroelectronics STM32L4 Arm-based microcontroller. To optimise the firmware for low power consumption, inactive sensors were placed in power-saving mode and the processor was put to sleep during periods of no sampling activity. The first device deployment took place during the SCALE winter expedition in July 2019.
Two devices were deployed on ice floes to test their performance in remote conditions. However, due to mechanical and power errors, the devices failed shortly after deployment. A third device was placed on the deck of the SA Agulhas II during the expedition and successfully survived for one week while continuously transmitting GPS coordinates and ambient temperature. The second generation featured subsequent improvements to the mechanical robustness and sensing capabilities of the device. However, due to the 2020 COVID-19 pandemic, subsequent Antarctic expeditions were cancelled, resulting in the final platform evaluation taking place on land. The device demonstrates a proof of concept for a low-cost, ice-tethered autonomous sensing device. However, additional improvements are required to overcome severe bandwidth and power constraints.
- Item (Open Access): The design of a two-element radio interferometer using satellite TV equipment (2021). Latief, Tauriq; Winberg, Simon; Mishra, Amit. This research presents the design of a two-element radio interferometer capable of performing complex correlation. With the development of sophisticated radio astronomy instruments, particularly in South Africa, there is a need for an affordable educational instrument which can be used to demonstrate the fundamental concepts of radio interferometry to university students. The mass production of satellite TV equipment has resulted in relatively sensitive radio frequency (RF) equipment, such as parabolic reflector dishes and low-noise block down-converters (LNBs), being available at significantly reduced cost. This equipment served as the front-end of the interferometer, which was used to observe the sun between 10.70 GHz and 12.75 GHz (RF). The LNBs then down-converted these signals to an intermediate frequency (IF) between 0.95 GHz and 2.15 GHz. The LNBs were modified to make use of a common 25 MHz reference, which ensured that the observed fringes were only a result of the source's geometric time delay. A power detector was also designed, since the adding interferometer architecture was chosen. This power detector comprised the Analog Devices LT5534 power detector integrated circuit (IC) and a Teensy 3.6 microcontroller. The calibrated power detector could detect signals as weak as -60 dBm and showed less than 21 mV error in output for input signals in the range [-50 dBm, -30 dBm]. The modified LNBs experienced issues, in particular the presence of a spurious LO signal, which distorted initial observations of the sun. This was resolved by the design and manufacture of narrowband hairpin filters and quarter-wavelength stub filters, which were used to isolate the IF band between 1.05 GHz and 1.15 GHz (corresponding to RF between 10.80 GHz and 10.90 GHz). This also improved the interferometer's resolution.
A series of filter-integrated Wilkinson power dividers and branchline couplers were designed to filter and further separate the signals into in-phase and quadrature-phase (I-Q) components, as required for complex correlation. The integrated quarter-wavelength stub filter and Wilkinson power divider achieved a maximum amplitude imbalance of 0.13 dB and a phase imbalance of 0.9° between output ports. The integrated quarter-wavelength stub filter and branchline coupler achieved a maximum amplitude imbalance of 0.13 dB and a phase imbalance of 91.1° between output ports. These results closely agreed with the simulated performance. First light was observed on 5 December 2020, when the sun was successfully detected using the coherent two-element interferometer along a 1.1 m baseline. Other tests included using the observed fringe phase to verify the physical baseline. A theoretical baseline of 1.11 m was calculated for a physical baseline of 1.3 m, indicating an error of less than 0.2 m. The sun's fringe frequency and amplitude were also observed for varying baselines; the sun was resolved along a 3 m baseline. Finally, full-system observations of the sun were conducted. These included observing the sun's cosine and sine fringes, which indicated that the analogue complex correlator was operating correctly. Thus, the primary goal of this project, developing a low-cost educational two-element radio interferometer capable of detecting the sun, was fulfilled.
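The baseline-verification test mentioned above rests on the fringe-phase relation φ = 2π(B/λ)sin θ, so the slope of phase against source direction recovers the baseline. A numpy sketch with assumed observation values (the 10.85 GHz centre frequency corresponds to the isolated 10.80 GHz to 10.90 GHz band; the angle sweep is illustrative):

```python
import numpy as np

c = 3e8
f_obs = 10.85e9          # centre of the isolated RF band, Hz
lam = c / f_obs          # wavelength, about 27.6 mm
B_true = 1.1             # east-west baseline, m (the first-light value)

# Fringe phase vs. source direction: phi = 2*pi*(B/lambda)*sin(theta).
theta = np.deg2rad(np.linspace(-5.0, 5.0, 401))
phi = 2.0 * np.pi * (B_true / lam) * np.sin(theta)

# Recover the baseline from the phase slope near theta = 0, where
# d(phi)/d(theta) = 2*pi*B/lambda.
slope = np.polyfit(theta, phi, 1)[0]
B_est = slope * lam / (2.0 * np.pi)
```

In practice the measured phase is wrapped and noisy, so it would be unwrapped and averaged before fitting, but the geometry is the same.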
- Item (Open Access): Time domain classification of transient RFI (2019). Czech, Daniel Josef; Inggs, Michael; Mishra, Amit. Since the emergence of radio astronomy as a field, it has been afflicted by radio frequency interference (RFI). RFI continues to present a problem despite increasingly sophisticated countermeasures developed over the decades. Due to technological improvements, radio telescopes have become more sensitive (for example, MeerKAT's L-band receiver), and existing RFI has become more prominent as a result. At the same time, the prevalence of RFI-generating devices has increased as new technologies have been adopted by society. Many approaches have been developed for mitigating RFI, and they are typically used in concert. New telescope arrays are often built far from human habitation in radio-quiet reserves. In South Africa, a radio-quiet reserve has been established in which several world-class instruments are under construction. Despite the remote location of the reserve, careful attention is paid to the possibility of RFI. For example, some instruments will begin observations while others are still under construction. The infrastructure and equipment related to the construction work may increase the risk of RFI, especially transient RFI. A number of mitigation strategies have been employed, including the use of fixed and mobile RFI monitoring stations. Such stations operate independently of the main telescope arrays and continuously monitor a wide bandwidth in all directions. They are capable of recording spectra and high-resolution time domain captures of transient RFI. Once detected, and if identified, an RFI source can be found and dealt with. The ability to identify the sources of detected RFI would be highly beneficial. Continuous wave intentional transmissions (telecommunication signals, for example) are easily identified, as they are required to adhere to allocated frequency bands.
Transient RFI signals, however, are significantly more challenging to identify since they are generally broadband and highly intermittent. Transient RFI can be generated as a by-product of the normal operation of devices such as relays, AC machines and fluorescent lights, for example. Such devices may be present near radio telescope arrays as part of the infrastructure or equipment involved in the construction of new instruments. Other than contaminating observation data, transient RFI can also appear to have genuine astronomical origins. In one case, transient signals received from a microwave oven exhibited dispersion, suggesting a distant source. Therefore, the ability to identify transient RFI by source would be enormously valuable. Once identified, such sources may be removed or replaced where possible. Despite this need, there is a paucity of work on classifying transient RFI in the literature. This thesis focusses on the problem of identifying transient RFI by source in time domain data of the type captured by remote monitoring stations. Several novel approaches are explored in this thesis. If used with independent RFI monitoring stations, these approaches may aid in tracking down nearby RFI sources at a radio telescope array. They may also be useful for improving RFI flagging in data from radio telescopes themselves. Distinguishing between transient RFI and natural astronomical signals is likely to be an easier prospect than classifying transient RFI by source. Furthermore, these approaches may be better able to avoid excising genuine astronomical transients that nevertheless share some characteristics with RFI signals. The radio telescopes themselves are significantly more sensitive than RFI monitoring stations, and would thus be able to detect RFI sources more easily. However, terrestrial RFI would likely enter via sidelobes, tempering this advantage somewhat. In this thesis, transient RFI is first characterised, prior to classification by source. 
Labelled time-domain recordings of a number of transient RFI sources are acquired and statistically examined. Second, component analysis techniques are considered for feature selection. Cluster separation is analysed for principal components analysis (PCA) and kernel PCA, the latter proving most suitable. The effect of the supply voltage of certain RFI sources on cluster separation in the principal components domain is also explored. Several naïve classification algorithms are tested, using kernel PCA for feature selection. A more sophisticated dictionary-based approach is developed next. While there are variations in repeated recordings of the same RFI source, the signals tend to adhere to a common overarching structure. Full RFI signals are observed to consist of sequences of individual transients. An algorithm is presented to extract individual transients from full recordings, after which they are labelled using unsupervised clustering methods. This procedure results in a dictionary of archetypal transients, in terms of which any full RFI sequence may be represented. Some approaches in Automatic Speech Recognition (ASR) are similar: spoken words are divided into individual labelled phonemes. Representing RFI signals as sequences enables the use of hidden Markov models (HMMs) for identification. HMMs are well suited to sequence identification problems, and are known for their robustness to variation. For example, in ASR, HMMs are able to handle the variations in repeated utterances of the same word. When classifying the recorded RFI signals, good accuracy is achieved, improving on the results obtained using the more naïve methods. Finally, a strategy involving deep learning techniques is explored. Recurrent neural networks and convolutional neural networks (CNNs) have shown great promise in a wide variety of classification tasks. Here, a model is developed that includes a pre-trained CNN layer followed by a bidirectional long short-term memory (BLSTM) layer.
Special attention is paid to mitigating class imbalance when the model is used with individual transients extracted from full recordings. High classification accuracy is achieved, improving on the dictionary-based approach and the other naïve methods. Recommendations are made for future work on developing these approaches further for practical use with remote monitoring stations. Other possibilities for future research are also discussed, including testing the robustness of the proposed approaches. They may also prove useful for RFI excision in observation data from radio telescopes.
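The abstract above does not give implementation details of the kernel PCA feature-selection step; as an illustration only, a minimal numpy sketch of kernel PCA with an RBF kernel (the function name, `gamma` value and toy data are assumptions, not taken from the thesis) might look like this:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project the rows of X onto the top kernel principal components (RBF kernel)."""
    # Pairwise squared Euclidean distances between samples
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * d2)                       # RBF kernel matrix
    # Centre the kernel matrix in feature space
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecompose and keep the leading components
    vals, vecs = np.linalg.eigh(Kc)               # ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Projections of the training points onto the components
    return vecs * np.sqrt(np.maximum(vals, 1e-12))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))      # 50 toy feature vectors of length 8
Z = kernel_pca(X, n_components=2)
print(Z.shape)                    # (50, 2)
```

In a pipeline like the one described, the low-dimensional projections `Z` would then be fed to the downstream classifiers, with cluster separation in this space indicating how distinguishable the RFI sources are.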
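The abstract does not say how class imbalance was mitigated; one common option is inverse-frequency weighting of each class's contribution to the loss. A small sketch (the function name and toy labels are illustrative, not from the thesis):

```python
import numpy as np

def balanced_class_weights(labels):
    """Inverse-frequency class weights: rarer classes receive proportionally
    larger weight, following the common 'balanced' heuristic n / (k * count)."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = len(labels) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

y = np.array([0, 0, 0, 0, 1, 1, 2, 2])   # class 0 dominates the toy dataset
w = balanced_class_weights(y)
print(w)  # class 0 gets a smaller weight than the two minority classes
```

Multiplying each sample's loss term by its class weight during training discourages the model from simply predicting the majority class of extracted transients.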
- Item (Open Access): User Logic Development for The Muon Identifier's Common Readout Unit for the ALICE Experiment at The Large Hadron Collider (2021) Boyles, Nathan; Buthelezi, Zinhle; Winberg, Simon; Mishra, Amit
The Large Hadron Collider at CERN is set to undergo major upgrades starting in 2019, raising the expected centre-of-mass energy for proton-proton collisions to the nominal 14 TeV. In light of these upgrades, the experiments, namely ALICE, ATLAS, CMS and LHCb, are required to upgrade their detectors correspondingly. The work contained in this dissertation pertains to the upgrade of the ALICE detector, and in particular to the Muon Trigger (MTR) Detector, which will be renamed the Muon Identifier (MID). This detector has historically operated in a triggered readout manner, exchanging trigger signals with the Central Trigger Processor (CTP) when events of interest occur, using a minimum-bias trigger. The upgrades include a transition from triggered readout to continuous readout, with data payloads delimited in time by periodic heartbeat signals. Continuous readout, however, results in data rates several orders of magnitude higher than in previous operation and would require vast storage resources for raw data; thus a new computing system known as O2 is also being developed for real-time data reduction. Part of the system used to perform real-time data reduction is based on FPGA technology and is known as the Common Readout Unit (CRU). As its name implies, the CRU is common to many detectors regardless of their differences in design. As such, each detector requires custom logic, known as the user logic, to meet its unique requirements. This project concerns development of the ALICE MID user logic, which will interface to the Core CRU firmware and perform real-time data extraction, reformatting, zero suppression, data synchronization and transmission of the processed data to the Core CRU firmware.
It presents the development of a conceptual design and a prototype for the user logic of the ALICE MID. The research methodology employed involved the identification of relevant documentation as well as in-depth meetings with the developers of the periphery systems to ascertain the requirements and constraints of the project. The resulting prototype shows the ability to meet the established requirements in an effective and optimized manner. Additionally, the modular design approach employed allows more features to be easily introduced.
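The actual user logic is FPGA firmware, but the zero-suppression idea it performs can be illustrated with a short Python model (purely illustrative; the function name and payload are not from the dissertation): non-zero data words are kept together with their original positions so the payload remains reconstructible downstream.

```python
def zero_suppress(words):
    """Keep only non-zero data words, paired with their original positions,
    so the full payload can be reconstructed downstream."""
    return [(i, w) for i, w in enumerate(words) if w != 0]

# Toy payload: mostly empty channels, two words carrying hit data
payload = [0, 0, 0x3A, 0, 0x7F, 0, 0]
print(zero_suppress(payload))  # [(2, 58), (4, 127)]
```

Dropping the zero words is what brings the continuous-readout data rate down to something the downstream O2 system can absorb, at the cost of storing an index alongside each surviving word.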
- Item (Open Access): Whistler Waves Detection - Investigation of modern machine learning techniques to detect and characterise whistler waves (2021) Konan, Othniel Jean Ebenezer Yao; Mishra, Amit; Lotz, Stefan
Lightning strokes create powerful electromagnetic pulses that routinely cause very low frequency (VLF) waves to propagate across hemispheres along geomagnetic field lines. VLF antenna receivers can be used to detect the whistler waves generated by these lightning strokes. The particular time/frequency dependence of the received whistler wave enables the estimation of electron density in the plasmasphere region of the magnetosphere. Therefore the identification and characterisation of whistlers are important tasks for monitoring the plasmasphere in real time and for building large databases of events to be used for statistical studies. The current state of the art in detecting whistlers is the Automatic Whistler Detection (AWD) method developed by Lichtenberger (2009) [1]. This method is based on image correlation in two dimensions and requires significant computing hardware situated at the VLF receiver antennas (e.g. in Antarctica). The aim of this work is to develop a machine learning based model capable of automatically detecting whistlers in the data provided by the VLF receivers. The approach is to use a combination of image classification and localisation on the spectrogram data generated by the VLF receivers to identify and localise each whistler. The data at hand comprise around 2300 events identified by AWD at SANAE and Marion, and are used as training, validation, and testing data. Three detector designs have been proposed: the first uses a method similar to AWD, the second uses image classification on regions of interest extracted from a spectrogram, and the third uses YOLO, the current state of the art in object detection.
It has been shown that these detectors can achieve both a misdetection rate and a false alarm rate of less than 15% on Marion's dataset. It is important to note that the ground truth (initial whistler labels) for the data used in this study was generated using AWD. Moreover, the SANAE IV dataset was small and did not contribute much content to the study.
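The first detector design, like AWD, rests on image correlation; as an illustration only, sliding a whistler template along the time axis of a spectrogram and flagging high-correlation offsets can be sketched in numpy as below (the threshold, template size and toy data are assumptions, not values from the thesis):

```python
import numpy as np

def detect_by_correlation(spectrogram, template, threshold=0.8):
    """Return time indices where the normalised correlation between the
    template and a spectrogram patch exceeds the threshold."""
    f, t = template.shape
    hits = []
    for i in range(spectrogram.shape[1] - t + 1):
        patch = spectrogram[:f, i:i + t]
        den = np.linalg.norm(patch) * np.linalg.norm(template)
        score = np.sum(patch * template) / den if den > 0 else 0.0
        if score >= threshold:
            hits.append(i)
    return hits

# Toy example: embed the template in a low-level noise spectrogram at t = 20
rng = np.random.default_rng(1)
tmpl = rng.random((16, 8))          # 16 frequency bins x 8 time bins
spec = rng.random((16, 64)) * 0.1
spec[:, 20:28] = tmpl
print(detect_by_correlation(spec, tmpl, threshold=0.95))
```

A real detector would additionally account for the dispersion-dependent shape of whistlers (the template would be a bank over plausible dispersion values), which is part of what makes AWD computationally demanding at the receiver site.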
- Item (Open Access): White RHINO: a low-cost communications radar hardware platform (2014) Hazarika, Ojonav; Mishra, Amit
The electromagnetic spectrum has always been a very expensive resource and hence has not been accessible to everyone; yet it is under-utilized. The new Whitespace Technology standards provide an efficient way to use the spectrum, and the concept of shared spectrum they introduce promises to reduce the cost of accessing the spectrum by a huge margin. Also, because the standards utilize the television channels, the VHF and UHF frequencies involved facilitate wireless transmission over large distances. This has provided impetus to various application developers. Using Whitespace Technology for Communications Radar is one such novel application, with great benefits for the African scenario, where the population is scattered and infrastructure for navigation and tracking is inadequate. However, there is a shortage of low-cost commercially available hardware platforms tailored for the application. In order to boost Whitespace-based Communications Radar application development, the White RHINO (Reconfigurable Hardware Interface for computation and radio) hardware platform was developed. It aims to fill the gap in low-cost commercial hardware platforms available for Whitespace-based Communications Radar. Being a Communications Radar platform, the White RHINO had to be designed with the standards and regulating-body norms as yardsticks. The achievable radar performance of the platform under various scenarios was also estimated. The White RHINO contains an FPGA (from the Zynq-7000 series) with dual embedded ARM processing cores. For the wireless interface, it contains a field-programmable RF transceiver and an RF front-end section. The platform provides wired networking capability of 2 Gbps, and has 512 MB of DDR3 and 128 Mbit of NAND flash as onboard memory.
Finally, it has USB host, SDIO and JTAG interfaces for programmability, and temperature sensors for system monitoring. The manufactured boards were tested in a lab environment. It was found that, except for a failure in the RF transceiver section (due to a PCB footprint error), all other interfaces were functional. The White RHINO successfully runs both the U-Boot bootloader and a Linux operating system. The error and other minor bugs have been corrected for the next fabrication run. Also, the complete White RHINO system costs less than 1000 USD, which makes it a very powerful platform that is nevertheless less expensive than most commercially available platforms designed for similar applications.
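The abstract mentions that the achievable radar performance of the platform was estimated; a back-of-the-envelope version of such an estimate uses the classical radar range equation. The numbers below are illustrative UHF whitespace-band values, not figures from the dissertation:

```python
import math

def max_radar_range(p_t, gain, freq_hz, rcs, p_min):
    """Maximum detection range from the classical monostatic radar range equation:
    R_max = [ P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * P_min) ]^(1/4)
    where P_t is transmit power (W), G the antenna gain, sigma the target
    radar cross-section (m^2) and P_min the minimum detectable power (W)."""
    lam = 3e8 / freq_hz                   # wavelength in metres
    num = p_t * gain**2 * lam**2 * rcs
    den = (4 * math.pi)**3 * p_min
    return (num / den) ** 0.25

# Illustrative values: 1 W transmit power, gain 10, 600 MHz TV-band carrier,
# 1 m^2 target, -100 dBm sensitivity
r = max_radar_range(p_t=1.0, gain=10.0, freq_hz=600e6, rcs=1.0, p_min=1e-13)
print(f"{r:.0f} m")
```

The fourth-root dependence explains why modest transmit power in a whitespace band still yields usable short-range detection, and why sensitivity (P_min) matters as much as power for extending range.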