Browsing by Subject "Engineering"
Now showing 1 - 20 of 351
- Item (Open Access): A CFD Model for a Fixed Bed Reactor for Fischer Tropsch Reaction using Ansys (2023). Chitranshi, Vidushi; Moller, Klaus.
Fischer-Tropsch (FT) is a process which converts synthesis gas derived from natural gas, coal or even biomass into a variety of products, including saturated and unsaturated hydrocarbon chains, while keeping greenhouse gas emissions to a minimum. Among the various reactor types used for commercial FT, fixed bed tubular reactors are the most common. These tubular reactors face a major challenge, however: FT is a highly exothermic process, so heat removal must be highly efficient to avoid thermal runaway. Improving heat transfer in any reactor requires a correct estimate of the heat production, which in turn requires that the kinetics correctly represent the heat release in the system; this calls for an effective description of the reaction kinetics. FT is a polymerisation reaction, so the rate expressions must retain the chain reaction behaviour. This is not possible with the lumped approach used by most researchers in the literature. Therefore, a partial equilibrium approach was employed, in which thermodynamic and kinetic models were coupled and the reaction rates depended on the concentrations of both reactants and products. The kinetic model employed in the current project was taken directly from the work of Davies and Moller. This partial equilibrium kinetics was used to develop a CFD model of the FT reaction system. The model was reproduced in the COCO simulator as a plug flow reactor at the same operating conditions as in Ansys, and the results from the two software packages showed close agreement, giving confidence that the CFD model could be used for further testing. The other challenge with FT in fixed bed reactors is heat dissipation. To avoid thermal runaway, several innovations in reactor design have been studied in the literature. Regarding heat transfer capability, various sources in the literature find plate and frame heat exchangers more effective than shell and tube heat exchangers; however, plate type reactors have not yet been explored in detail for their heat transfer capabilities. Building on this idea, the CFD model developed for tubular reactors was adapted to plate type reactors, and the heat transfer capabilities of the plate type reactors were compared with those of the tubular reactors. The two geometries were compared on the basis of two criteria. The first criterion was physical similarity between the reactors: equal Reynolds number and equal surface area available per unit volume. For a plate with plate spacing t and a tube with diameter D, the latter condition yields D = 2t. The quantities on which the packed-bed Reynolds number depends were identical for both geometries, so the Reynolds number matched by default. The second criterion was based on catalyst packing: equal tube-to-particle diameter ratio for both geometries. For tube reactors, diameter is an important parameter governing heat dissipation, so a parametric study was carried out on the effect of diameter and plate spacing on heat transfer behaviour for the same set of operating conditions.
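As a check on the surface-area criterion above, the equality of surface area per unit volume can be worked out directly (a short sketch; the abstract quotes only the result D = 2t):

```latex
% Surface area per unit volume: tube of diameter D vs. channel between plates spaced t apart
\frac{S_{\text{tube}}}{V_{\text{tube}}} = \frac{\pi D L}{\tfrac{\pi}{4} D^2 L} = \frac{4}{D},
\qquad
\frac{S_{\text{plate}}}{V_{\text{plate}}} = \frac{2A}{A\,t} = \frac{2}{t}
% Equating the two ratios gives the stated condition:
\frac{4}{D} = \frac{2}{t} \;\Rightarrow\; D = 2t
```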
It was found that the plate type reactor had a lower hotspot temperature than the corresponding tube reactor at all plate spacings, indicating that heat dissipation in a plate type reactor is better than in the corresponding tube reactor. Since the tube reactors reached higher temperatures than the corresponding plate reactors, the CO conversion observed in the tube reactors was higher. When the product distributions for the two geometries were compared at isothermal conditions, the results almost overlapped; but when they were compared at non-isothermal conditions, significant differences were observed. This showed that the heat dissipation mechanisms in the system played a major role in producing the different performances of the two geometries. The effects of temperature and conversion on the product distribution were also studied. On the basis of the tube-to-particle diameter ratio criterion, however, the tube reactor was found to outperform the plate reactors in terms of temperature control. The superiority of one reactor over the other therefore depended on the criterion used for comparison. The plate type reactor was then represented by a PFR model by tuning the heat transfer coefficient of the tubular model in COCO. The difference between the CO conversions achieved by the plate type reactor in Ansys and the representative model in COCO was found to be very small; hence, the plate type reactor could be successfully represented in COCO. Considerable further research can be done using the current model, particularly in the areas of reaction kinetics and reactor design. The current model can be extended to a larger number of species for a better representation of the FT product spectrum. The formation of liquids was completely neglected in the current project; it could be taken into account, as the presence of liquid can affect the FT reactor system by imposing internal and external mass transfer limitations on the reactions. The current model can also be used to study the HTFT process and the isomeric products in LTFT, which were assumed to be absent in the current project. In the area of reactor design, the geometry of the catalyst particles can be included in the reactor geometry. The model can also be applied to plates of other shapes and sizes to study the effect of shape and size on heat transfer capabilities. Different types of corrugated plates are used in plate and frame heat exchangers nowadays; the corrugations increase the available surface area and improve mixing. Using the current model, such modifications can be studied for their effect on the reactions in a reactive system.
- Item (Open Access): A Critical Evaluation of the Use of Crack Width Requirements in the Durability Design of Marine Reinforced Concrete Structures (2023). Elias, Nicholas; Beushausen, Hans-Dieter.
Crack width requirements (CWRs), which aim to limit cracks in reinforced concrete (RC) to maximum prescribed values, play a major – and often dominant – role in the design of marine RC structures. However, there are several issues with the current CWRs, chief among which is the fact that, despite decades of research, no clear relationship between crack width and steel reinforcement corrosion rate in concrete has been found. Instead, there exist two opposing schools of thought in the literature – one which says that there is a relationship between crack width and reinforcement corrosion rate, and one which argues that no such relationship exists – with good evidence to support both schools of thought. Recent research has shown that even small cracks, with widths below the required values, may lead to extensive corrosion. It is therefore uncertain whether designing for the CWRs actually improves durability and extends the service life of marine RC structures. Furthermore, the use of the CWRs, which frequently results in large increases in the required amount of reinforcing steel, may lead to significant increases in the cost and environmental impact of marine RC structures. Yet, to date, these impacts have not been quantified. In order to address these issues, this study was aimed at evaluating the effect of designing to meet the current CWRs on the durability, cost, and environmental sustainability of marine RC structures. This was done by designing two sets – one with, and one without the CWRs – of typical marine RC structural elements. Based on real industry projects, two different types of elements were designed – a crane rail beam for a coal export jetty in Matola, Mozambique, and a precast crown wall unit for a breakwater in Rupert's Bay, St. Helena Island. The designs were carried out using a combination of BS 6349 and EN 1992-1-1:2004, as these are the codes of practice typically used in the South African coastal engineering industry. The effects of designing for the current CWRs on durability, cost, and environmental sustainability were then quantified by carrying out service life modelling, life cycle cost assessments (LCCAs) and estimating embodied carbon (EC) values for the designed members. The results of the service life modelling show that, for the range of crack widths likely to occur in practice, the use of the current CWRs does not improve durability, and may even reduce service life, as they encourage the use of more, smaller diameter reinforcement bars, which has the effect of increasing corrosion rate and reducing the time taken for a critical amount of the reinforcement to be lost due to corrosion. Furthermore, the results of the LCCA and EC estimates imply that the current CWRs are not the most cost-effective method for durability design and may result in significant increases in cost and environmental impact. Taken together, these results suggest that, even if a relationship is assumed to exist between crack width and corrosion rate, the current CWRs are neither the most effective, nor efficient way of addressing the effects of cracking on the durability of marine RC structures.
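The abstract does not state which service-life model was used; for orientation, a widely used chloride-ingress model in marine durability design is the error-function solution of Fick's second law, where corrosion is taken to initiate once the chloride content at the reinforcement depth reaches a critical value (a sketch of one common approach, not necessarily this study's):

```latex
% Chloride profile in concrete (semi-infinite medium, constant surface concentration)
C(x,t) = C_s \left[ 1 - \operatorname{erf}\!\left( \frac{x}{2\sqrt{D_c\,t}} \right) \right]
% C_s: surface chloride content; D_c: chloride diffusion coefficient;
% corrosion initiation when C(cover depth, t) reaches the critical content C_crit
```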
It is therefore recommended that the current crack width requirements be removed from the durability design codes of practice and replaced with either a limitation on steel stress, a more lenient crack width requirement (for example, 0.5 mm rather than 0.3 mm), or a performance-based crack width requirement, which takes better account of the complexity of cracking and its effect on durability. It is also recommended that engineers be given the option to use other methods of providing durability, such as the use of crack-sealing and waterproofing admixtures, or hydrophobic treatments, instead of the current CWRs. However, before any of these recommendations can be implemented, the results of this study need to be confirmed with further research. Owing to the limitations of both service life modelling and accelerated laboratory corrosion testing, it is recommended that this further research take the form of an extensive evaluation of the durability performance of existing marine RC structures.
- Item (Open Access): A deep learning-based approach towards automating visual reinforced concrete bridge inspections (2021). Dube, Bright N; Moyo, Pilate; Matongo, Kabani.
Visual inspections are fundamental to the maintenance of RC bridge infrastructure. However, their highly subjective nature often compromises the accuracy of inspection results and ultimately leads to inaccurate prioritisation of repair and rehabilitation activities. Visual inspections are also known to expose inspectors to height- and traffic-related hazards, and sometimes require the use of costly access equipment. Therefore, the present study investigated state-of-the-art Unmanned Aerial Vehicles (UAVs) and algorithms capable of automating visual RC bridge inspections in order to reduce inspector subjectivity, minimise inspection costs and enhance inspector safety. Convolutional neural network (CNN) algorithms are state-of-the-art for the automatic detection of RC bridge defects. However, much of the prior research in this area focused on detecting the presence of defects and gave little to no attention to characterising them according to defect type and degree (D) or extent (E) ratings. Four proof-of-concept CNN models were therefore developed, namely a defect-type detector, a crack-type detector, an exposed-rebar detector and a shrinkage crack D-rating model. Each model was built by first compiling defect images, labelling them according to defect/crack type and creating training and test sets at a 90-10% split. The training sets were then used to train the CNN models through transfer learning and fine-tuning using the fastai deep learning Python library. The performance of each model was ultimately evaluated based on prediction accuracies on the test sets and robustness to noise. Test accuracies ≥ 87% were attained by the trained models. This result shows that CNNs are capable of accurately identifying RC bridge corrosion, spalling, ASR, cracking and efflorescence, and of assigning appropriate D ratings to shrinkage cracks. It was concluded that CNN models can be built to identify and allocate D and E ratings to any visible defect type, provided the requisite training data that sufficiently represents noisy real-world inspection conditions can be acquired. This formed the basis upon which a practical framework for UAV-enabled and deep learning-based RC bridge inspections was developed.
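The abstract names fastai, transfer learning, fine-tuning and the 90-10% split, but not the exact architecture or API calls. The sketch below shows what such a training pipeline could look like in fastai's current high-level API; the dataset path, backbone (resnet34) and epoch count are illustrative assumptions, not the study's settings:

```python
from fastai.vision.all import *

# Load labelled defect images from per-class folders; hold out 10% for evaluation,
# mirroring the 90-10% split described in the abstract.
dls = ImageDataLoaders.from_folder(
    "defect_images/",             # hypothetical dataset path
    valid_pct=0.1, seed=42,
    item_tfms=Resize(224),
    batch_tfms=aug_transforms(),  # augmentation to approximate noisy field conditions
)

# Transfer learning: start from an ImageNet-pretrained backbone, then fine-tune.
learn = vision_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(10)

# Loss and accuracy on the held-out split.
print(learn.validate())
```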
- Item (Open Access): A finite element program for the static analysis of branched, thin shells of revolution under axisymmetric loading (1974). Griffin, T B; Doyle, W.
A finite element computer program which uses the conical frustum element is presented for the linear elastic, static analysis of variable thickness, branched, thin shells of revolution, composed of straight sections and subject to general, axisymmetric mechanical loading. The thin shell theory and finite element theory forming the basis of the analysis are described, with particular attention given to the closing of the shell at the axis of symmetry, and to shell branching. Numerous problems embodying all relevant features of the program are analysed, and their solutions are discussed. A user's manual for the program is appended, and guidelines for the efficient use of the program are given.
- Item (Open Access): A geospatial investigation of destination choice modelling. The case of the MyCiTi integrated rapid transit bus system, Cape Town, South Africa (2021). Smith, Joanet; Zuidgeest, Marcus.
The transport sector plays an integral role in a country's development and economy. Optimised transport networks and infrastructure can lead to increased economic development; effective transport networks and public transportation systems are therefore essential to growing the South African economy. With the South African population's demand for transportation services increasing, there is a need to expand the capacity of local public transport networks. With this need identified, and grants released by the government, there is high demand for the estimation, analysis, optimisation and forecasting of public transport systems in South Africa. Public transportation studies are directly related to commuter demand as a result of commuter choices. Therefore, a key component in understanding the operational functionality of a public transport system lies in the accurate modelling of commuter choices. Although the spatial separation of activities forms the essence of travel demand, incorporating the effects of geospatial properties in travel behaviour modelling has only been formally studied in recent years. These recent studies noted a trend suggesting that geospatial properties can influence travel behaviour, and highlighted the need to investigate this effect further. With travel behaviour being the result of commuter choices, a multinomial logit choice modelling study was conducted to investigate the effect of geospatial properties on commuter destination choice for the case of the MyCiTi Integrated Rapid Transit system in Cape Town, South Africa.
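For reference, the multinomial logit model named above assigns each destination alternative a choice probability based on its systematic utility (standard form; the study's specific utility specification is not given in the abstract):

```latex
% Probability that a commuter chooses destination i from choice set J
P(i) = \frac{e^{V_i}}{\sum_{j \in J} e^{V_j}},
\qquad
V_i = \beta^\top x_i
% V_i: systematic utility of alternative i; x_i: attributes of the alternative
% (e.g. travel time, cost, and, here, geospatial properties); beta: estimated coefficients
```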
- Item (Open Access): A Mechano-Chemical Computational Model of Deep Vein Thrombosis (2021). Jimoh-Taiwo, Qudus Boluwatife; Ngoepe, Malebogo.
Deep vein thrombosis (DVT) is the formation of a blood clot in a vein, usually in the body's lower extremities. If untreated, DVT can lead to pulmonary embolism (PE), heart attack and/or stroke, which can be fatal. According to the literature, DVT affects 0.2% of people in developed countries and about 0.3-1% in developing countries. Various computational models of DVT have been developed in the past, but most account for either the mechanical factors or the biochemical factors involved in DVT; developing a model that accounts for both will improve our understanding of the coagulation process. This study developed a three-dimensional DVT computational model in idealised and realistic common femoral vein (CFV) geometries. The model considers the biochemical reactions between thrombin and fibrinogen, pulsatile blood flow, and clot growth within the vessel, and was validated using a simplified experimental setup with flow, thrombin, and fibrinogen. Computational fluid dynamics (CFD) simulations were carried out using the ANSYS modelling suite. The Navier-Stokes equations were solved to determine the fluid flow. Based on a clinical dataset of pulsatile blood flow, laminar flow of blood with a Poiseuille velocity profile was applied at the inlet. Darcy's law was used to account for porosity changes in the clot, with the clot represented by zones of lower porosity. Transport equations were used for changes in the concentrations of the biochemical protein species. Thrombin was released into the bloodstream from an injury zone on the wall of the vein. The Michaelis-Menten equation was used to represent the thrombin-driven conversion of fibrinogen to fibrin, the final product of the coagulation process. The computational model resolves the blood flow pattern proximal, local, and distal to clot formation at the injury zone, and also predicts the size of the clot and the rate of clot growth. The model was first developed in a two-dimensional geometry and used to investigate clot formation under different cases, comparing how introducing thrombin as a flux value differs from specifying it as a fixed concentration. It was confirmed that, to apply the flux condition, the thrombin concentration needs to be divided by a factor obtained by multiplying the area of the injury zone by the time step size. The same model was then used to conduct a parametric study of the effects of varying parameters such as inlet velocity, vein diameter, and peak thrombin concentration on the size and shape of the clot formed. Peak thrombin concentration was the key factor driving the initiation and propagation of the clot in the vein. The model was then extended to an idealised three-dimensional geometry. This computational model was validated using results from an experimental clot growth study. The experiment comprised a steady flow of fibrinogen in a cylindrical pipe, with an injection of thrombin into the flow at the injury site, resulting in fibrin formation. A qualitative comparison was then made between the experimental clot and the clot formed in silico. Although quantitative measurements were not made, there were similarities in the shapes and sizes of the clots. The validated computational model was used to compare clot formation under steady and pulsatile flow conditions.
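The Michaelis-Menten form mentioned above gives the rate of thrombin-catalysed conversion of fibrinogen to fibrin as (standard form; the kinetic constants used in the study are not quoted in the abstract):

```latex
% Rate of fibrin formation: thrombin (enzyme) acting on fibrinogen (substrate)
\frac{d[\mathrm{Fn}]}{dt} = \frac{k_{cat}\,[\mathrm{IIa}]\,[\mathrm{Fg}]}{K_m + [\mathrm{Fg}]}
% [IIa]: thrombin concentration; [Fg]: fibrinogen; [Fn]: fibrin;
% k_cat, K_m: Michaelis-Menten parameters
```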
Realistic clot growth was observed and compared to the steady flow condition; it was found that a larger clot formed under pulsatile conditions. Clot formation in the presence of valve activity was also investigated. The effect of the opening and closing of the valves was captured by varying the blood flow diameter at the inlet, instead of modelling the valves as solid walls and accounting for leaflet movement by solving the governing equations for fluid-solid interaction (FSI), as done in existing models. The model was then applied to a patient-specific geometry. Realistic clot growth was achieved using this model, and the clot was compared to a clot formed in vivo, as depicted in the original imaging scan. The model helps us better understand the clot growth process in the femoral vein at a patient-specific level. It also shows that the presence of venous valves increases the size of the clot formed compared to steady flow. However, the high strain rate present makes the clot formed smaller than in standard pulsatile flow cases.
- Item (Open Access): A methodology for the heat of immersion as a measure of wettability of mineral mixtures (2023). Magudu, Anam; McFadzean, Belinda; Geldenhuys, Armand.
Wettability, the extent to which a mineral interacts with water, is important in flotation processes, because the interactions between solid particles and liquid (water) molecules are central to understanding the flotation mechanism and achieving high recoveries. Contact angles and the work of adhesion can be used to determine the physical properties of a given solid-liquid system, but these techniques have drawbacks. The advancement of microcalorimetry instrumentation has led to the use of the heat of immersion to determine the surface wettability of solids, and several calorimetric studies have shown that the heat of immersion can be used for this purpose. Previous research within the Centre for Minerals Research (CMR) has shown that the heat of immersion can provide a reliable measure of mineral surface wettability when measured by precision solution calorimetry. However, this was done only for single mineral systems, and its application to real ores has not been investigated in depth. In this study, the heat of immersion as a measure of wettability is applied to a simple binary mineral mixture representative of a real ore. The binary mixture consists of a hydrophobic sulphide mineral, galena, and a hydrophilic silicate mineral, albite. The results of this study show that heat of immersion measurements present challenges such as an unexpected endothermic response, which is attributed to the dissolution of the mineral in water as surface ions on the mineral are exposed to the wetting liquid. In order to predict flotation response by measuring wettability, the aim is to measure only the heat of wetting, which is an exothermic response; the dissolution process therefore needs to be suppressed. Alternative techniques were explored: a solution saturated with the mineral sample, organic liquids as the wetting liquids, and pre-coating of the mineral particles with collector. Of these, the saturated solution approach was effective for certain minerals such as albite but ineffective at suppressing dissolution across a range of mineral types; it is therefore unsuitable for exploring the heat of immersion of binary mineral mixtures. Secondly, the collector coating approach was effective at suppressing dissolution at surface coverages above 75%. However, it is not feasible for heat of immersion measurements on binary mineral mixtures, because it only suppresses dissolution at excess surface coverages that are not necessarily those at which one would choose to do the experimental work; additionally, collector coating does not allow the natural wettability of the uncoated minerals to be measured. Thirdly, hexane was found to be a good wetting liquid for suppressing dissolution, but experimental difficulties led to it not being used for the binary mineral mixtures; in particular, the beeswax used to seal the ampoule dissolved in the hexane, causing premature immersion of the mineral into the wetting liquid.
Finally, hexanol was found to be a good wetting liquid: it suppressed dissolution, had no associated experimental difficulties, and was able to distinguish relative hydrophobicities between different mineral surfaces. It can therefore be used as an effective wetting liquid for mineral dissolution suppression and hydrophobicity determination. Preliminary experimental work into the feasibility of using a binary mineral mixture as a simple model ore system was performed. A linear relationship was found between the heat of immersion and the fraction of pure mineral A in a binary A + B mineral mixture. The heat of immersion can be presented in various ways depending on the data required: either the surface area fraction or the mass composition can be used to establish the linear relationship between the heat of immersion and the composition of the binary mineral mixture. From this linear relationship, the heats of immersion of the pure minerals comprising the mixture can be extrapolated. The composition-based linear relationship provides a simple and convenient way to estimate the hydrophobicity of a floatable mineral in an ore where only the mass and mineral composition of the sample are known. This could be used in flotation modelling, where valuable mineral floatability is a required input parameter. To determine the relative hydrophobicity of a binary mineral mixture in hexanol where the mass composition is unknown, the heat released by the binary mineral mixture is measured and correlated with the mixture's mineral weight composition. This linear relationship can then be extrapolated to zero and 100% respectively to obtain the heats of immersion of the pure minerals. These values can be read off a calibration curve such as that obtained by Taguta et al. (2018) to obtain a flotation rate constant.
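The linear relationship described above amounts to a simple mixing rule (a sketch of the stated relationship; the symbols are ours, not the thesis's notation):

```latex
% Heat of immersion of a binary A+B mixture as a linear function of composition
\Delta H_{imm}^{mix} = x_A\,\Delta H_{imm}^{A} + (1 - x_A)\,\Delta H_{imm}^{B}
% x_A: mass (or surface-area) fraction of mineral A; extrapolating the fitted line
% to x_A = 0 and x_A = 1 recovers the pure-mineral heats of immersion
```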
- Item (Open Access): A modelling methodology to quantify the impact of plant anomalies on ID fan capacity in coal fired power plants (2020). Khobo, Rendani Yaw-Boateng Sean; Rousseau, Pieter; Gosai, Priyesh.
In South Africa, nearly 80% of electricity is generated from coal fired power plants. Due to the complexity of the interconnected systems that make up a typical power plant, analysing the root causes of load losses is not straightforward. This often leads to losses being incorrectly ascribed to the Induced Draught (ID) fan, where detection occurs, while the problem actually originates elsewhere in the plant. The focus of this study was to develop and demonstrate a modelling methodology to quantify the effects of major plant anomalies on the capacity of ID fans in coal fired power plants. The resulting model calculates the ID fan operating point that results from anomalies experienced elsewhere in the plant, and can be applied in conjunction with performance test data as part of a root cause analysis procedure. The model has three main sections that are integrated to determine the ID fan operating point. The first is a water/steam cycle model pre-configured in VirtualPlant™. The steam plant model was verified via energy balance calculations and validated against original heat balance diagrams. The second is a draught group model developed using Flownex SE™. This one-dimensional network is a simplification of the flue gas side of the five main draught group components, from the furnace inlet to the chimney exit, characterising only the aggregate heat transfer and pressure loss in the system. The designated ID fan model is based on the original fan performance curves. The third section is a Boiler Mass and Energy Balance (BMEB), created specifically to: (1) translate the VirtualPlant results for the steam cycle into applicable boundary conditions for the Flownex draught group model; and (2) calculate the fluid properties applicable to the draught group based on the coal characteristics and combustion process. The integrated modelling methodology was applied to a 600 MW class coal fired power plant to investigate the impact of six major anomalies that are typically encountered: changes in coal quality; increased boiler flue gas exit temperatures; air ingress into the boiler; air heater in-leakage to the flue gas stream; feed water heaters out of service; and condenser backpressure degradation. It was found, inter alia, that a low calorific value (CV) coal of 14 MJ/kg, compared to a typical 17 MJ/kg, reduced the fan's capacity by 2.1%. Having both HP feed water heaters out of service decreased the fan's capacity by 16.2%.
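As a rough illustration of why low-CV coal loads the ID fan, burning lower-CV coal at the same boiler heat input raises the coal and hence flue gas mass flow. The numbers below are illustrative assumptions only; the study's BMEB resolves this rigorously from the coal analysis and combustion process:

```python
# Illustrative back-of-envelope: effect of coal calorific value (CV) on flue gas flow.
BOILER_DUTY_MW = 1400.0      # assumed heat input to the boiler, MW
FLUE_GAS_PER_KG_COAL = 12.0  # assumed kg flue gas per kg coal (depends on coal analysis)

def flue_gas_flow(cv_mj_per_kg: float) -> float:
    """Flue gas mass flow (kg/s) for a fixed boiler heat input."""
    coal_flow = BOILER_DUTY_MW / cv_mj_per_kg   # MW / (MJ/kg) = kg/s
    return coal_flow * FLUE_GAS_PER_KG_COAL

for cv in (17.0, 14.0):   # design coal vs. the low-CV coal cited in the abstract
    print(f"CV = {cv} MJ/kg -> flue gas ~ {flue_gas_flow(cv):.0f} kg/s")

# The ~21% higher flue gas flow at 14 MJ/kg must be handled by the same ID fan,
# eroding its spare capacity (the abstract reports a 2.1% capacity reduction once
# the full plant response is accounted for).
```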
- Item (Open Access): A novel method for power system stabilizer design (2003). Chen, Lian; Petroianu.
Power system stability is defined as the condition of a power system that enables it to remain in a state of operating equilibrium under normal operating conditions and to regain an acceptable state of equilibrium after being subjected to a finite disturbance. In the evaluation of stability, the focus is on the behavior of the power system when subjected to both large and small disturbances. Large disturbances are caused by severe changes in the power system, e.g. a short-circuit on a transmission line, loss of a large generator or load, or loss of a tie-line between two systems. Small disturbances in the form of load changes take place continuously, requiring the system to adjust to the changing conditions. The system should be capable of operating satisfactorily under these conditions and successfully supplying the maximum amount of load. This dissertation deals with the use of Power System Stabilizers (PSS) to damp electromechanical oscillations arising from small disturbances. In particular, it focuses on three issues associated with the damping of these oscillations: ensuring robustness of PSS under changing operating conditions, maintaining or selecting the structure of the PSS, and coordinating multiple PSS to ensure global power system robustness. To address these issues, a new PSS design/tuning method has been developed. The method, called sub-optimal H∞ PSS design/tuning, is based on H∞ control theory. For its implementation, various standard optimization methods, such as Sequential Quadratic Programming (SQP), were investigated. However, power systems typically have multiple "modes" that result in the optimization problem being non-convex in nature. To overcome the issue of non-convexity, the optimization algorithm embedded in the sub-optimal H∞ PSS design/tuning method is based on Population Based Incremental Learning (PBIL). This new sub-optimal H∞ design/tuning method has a number of important features. It allows for the selection of the PSS structure, i.e. the designer can select the order and structure of the PSS. It can be applied to the full model of the power system, i.e. there is no need to use a reduced-order model. It is based on H∞ control theory, i.e. it uses robustness as a key objective. It ensures adequate damping of the electromechanical oscillations of the power system, and it is suitable for optimizing existing PSS in a power system.
This method improves the overall damping of the system and does not affect the observability of the system poles. To demonstrate the effectiveness of the sub-optimal H∞ PSS design/tuning method, a number of case studies are presented in the thesis. The sub-optimal H∞ design/tuning method is extended to allow for the coordinated tuning of multiple controllers. The ability to tune multiple controllers in a coordinated manner allows the designer to focus on the overall stability and robustness of the power system, rather than focusing only on the local stability of the system as viewed from the generator where the controllers are connected.
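PBIL itself is compact enough to sketch. Below is a minimal generic implementation over bit-string encoded tuning parameters (illustrative only; the thesis's encoding of PSS gains and time constants, its population size and its learning rate are not given in the abstract):

```python
import numpy as np

def pbil(fitness, n_bits, pop_size=50, generations=200, lr=0.1, rng=None):
    """Population-Based Incremental Learning over bit strings.

    fitness: callable mapping a 0/1 array of length n_bits to a score
    (higher is better), e.g. the damping of the worst electromechanical
    mode for the decoded PSS parameters.
    """
    rng = np.random.default_rng(rng)
    p = np.full(n_bits, 0.5)            # probability vector, one entry per bit
    best, best_fit = None, -np.inf
    for _ in range(generations):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)  # sample population
        fits = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argmax(fits)]
        if fits.max() > best_fit:
            best, best_fit = elite.copy(), fits.max()
        p = (1 - lr) * p + lr * elite    # shift probabilities toward the best sample
    return best, best_fit

# Toy usage: maximise the number of ones (a stand-in for a damping objective).
sol, fit = pbil(lambda b: b.sum(), n_bits=32)
print(sol, fit)
```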
- Item (Open Access): A numerical assessment of architectural parameters for anisotropic behavior in idealised trabecular structures (2018). Moore, Keelan; Cloete, Trevor; Nurick, Gerald.
Bones macroscopically consist of two major constituents, namely cortical and trabecular (also known as cancellous) bone. Cortical bone is the hard and dense outer layer of bone, which carries the majority of the load. Trabecular bone is the porous internal bone, which distributes loads at joints by providing a larger bearing surface and acts as an energy absorber. Trabecular bone has a complex, heterogeneous, anisotropic open cell lattice structure with a large variation in mechanical properties across anatomic site, species, sex, age, normal loading direction and disease state. A common attempt to account for this variation is to correlate the structure of the trabecular bone sample to its mechanical response, which requires a means of quantifying the structure. Microstructural indices such as bone volume over total volume (BV/TV), trabecular thickness (Tb.Th), trabecular separation (Tb.Sp), structure model index (SMI) and mean intercept length (MIL) have been widely used to find correlations between structure and properties. Early studies considered only densitometric indices, which accounted for much of the variation; however, cross-study correlations did not agree, leading to an interest in capturing non-scalar indices to account for features such as the anisotropy of the bone. The structural anisotropy varies from fully equiaxed to highly directional depending on where the trabecular bone is located and what its function is. The mean intercept length has been proposed as a measure of the structural anisotropy, specifically the interfacial anisotropy of the sample, and is commonly used to account for the mechanical anisotropy. This research falls within a longer-term goal of investigating and understanding the mechanical anisotropy of trabecular bone. To that end, the anisotropy of regular lattice structures was investigated, the investigated lattices being simple analogues of the more complex structures seen in trabecular bone. A framework for assessing the structure-property relations of trabecular bone is created, with a focus on anisotropy. The mechanical anisotropy of idealised trabecular structures is quantified using well-known microstructural indices, which are compared to the numerically determined mechanical response. The modelling methodology initially investigated 2D lattices with very well known responses, so that the modelling approach could be verified. Three 2D lattices were used, namely a square, a hexagonal and a triangular lattice, with the aim that the 3D lattices would be their analogues. The square lattice is highly anisotropic, as is the cubic lattice. The hexagonal lattice is isotropic with a large constraint effect, as is the Kelvin cell, and the triangular lattice is isotropic with a small constraint effect. The octet-truss was the closest analogue to the triangular lattice, having a small constraint effect and being less anisotropic than the cubic lattice. The three 3D lattices were chosen to represent highly directional trabecular bone (using a cubic lattice) and more equiaxed trabecular bone, with the fully isotropic Kelvin cell lattice (also known as a tetrakaidecahedron) and the octet-truss lattice, which has a lower degree of anisotropy than the cubic lattice.
Two confinement arrangements were also investigated as analogues for trabecular bone at the free surface and at the cortical surface. To assess the mean intercept length analysis as a measure of mechanical anisotropy, this research performed the analysis on three 3D periodic lattice structures and compared the results to mechanical properties determined numerically using finite element analysis. The mean intercept analysis was performed by generating 3D images of the lattices, similar to the output of µCT imaging, using a combination of open-source software and custom code, and performing the analysis in BoneJ, an open-source software package. The mechanical response was determined using two methods, namely discrete and continuum modelling approaches. The discrete approach characterised the lattice with each strut modelled as a Timoshenko beam element, solved in LS-DYNA. To capture the anisotropy, the lattice had to be loaded at arbitrary angles, which was achieved by rotating the whole lattice and cropping it to a specified test region using custom code. The continuum modelling approach used homogenisation, treating the lattice as a solid material with effective properties; this was solved in a custom implicit solver written in MATLAB using solid elements. The anisotropy was modelled by transforming the elasticity tensor to arbitrary coordinate systems to load the model in arbitrary directions. The discrete modelling approach suffered from high computational costs and difficulty in removing the boundary effects, all of which would be worse for models of real trabecular bone; however, it did accurately capture the mechanical behaviour of the lattices tested. The continuum approach accurately captured some of the responses but failed to capture all behaviour caused by confinement: it could not capture the switch in predominant deformation mode of the 2D hexagonal lattice caused by lateral confinement, and failed to accurately capture the symmetry of the highly anisotropic 3D cubic lattice. The mean intercept length analysis failed to capture the anisotropic response of simple periodic lattices, showing no significant difference between the octet-truss and cubic lattices, despite their very large difference in mechanical anisotropy. It also indicated that the Kelvin cell lattice had the highest degree of geometric anisotropy, even though it has the lowest mechanical anisotropy, being the only fully isotropic 3D lattice investigated. The mechanical investigation showed that lateral confinement has a large effect, significantly scaling the response of isotropic lattices whilst distinctly changing the anisotropic behaviour of the cubic and octet-truss lattices. The mean intercept length analysis cannot capture the mechanical confinement effect from geometry alone, and thus fails to capture the mechanical response due to confinement. Overall, the continuum modelling approach showed difficulty in capturing the confinement effect in all lattices, and thus a more robust method is required. The mean intercept analysis proved unsuccessful in capturing the mechanical response of the three periodic idealised trabecular structures. A new microstructural index that can capture the mechanical anisotropy is required, with the ability to consider the effects of confinement on the structure.
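The mean intercept length analysis assessed above can be sketched in a few lines: cast parallel test lines through a binary voxel image, count phase-boundary crossings, and divide the total solid-phase length traversed by the number of intercepts. A simplified single-direction illustration follows (the study used BoneJ, which casts lines in many 3D directions and fits an ellipsoid to MIL(direction)):

```python
import numpy as np

def mean_intercept_length(volume: np.ndarray, axis: int = 0) -> float:
    """MIL of a binary voxel volume (1 = solid) along one grid axis.

    Simplified illustration: only the three grid axes are sampled here,
    whereas a full MIL analysis sweeps many 3D directions.
    """
    v = np.swapaxes(volume, 0, axis)                 # test-line direction first
    flat = v.reshape(v.shape[0], -1)                 # each column is one test line
    crossings = np.abs(np.diff(flat, axis=0)).sum()  # phase boundary crossings
    intercepts = crossings / 2.0                     # two crossings per solid intercept
    solid_length = flat.sum()                        # total solid voxels traversed
    return solid_length / max(intercepts, 1.0)

# Toy example: solid plates stacked along axis 0 give short intercepts along
# that axis and very long ones in-plane, i.e. a strongly anisotropic MIL.
vol = np.zeros((32, 32, 32), dtype=int)
vol[::4, :, :] = 1
print([mean_intercept_length(vol, ax) for ax in range(3)])
```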
- Item (Open Access): A prospective comparative lifecycle assessment for green and grey hydrogen production and utilisation in the South African context (2022). Mbaba, Ongezwa; von Blottnitz, Harro; Ahjum, Fadiel.
Green hydrogen has attracted increasing interest as a medium in the transition to a carbon-neutral economy, with several large, export-focused projects currently under development in Southern Africa. However, the environmental implications of hydrogen production and utilisation are not well understood. To address this challenge, a comprehensive literature review of hydrogen production and utilisation lifecycle assessment studies was conducted, and two prospective comparative lifecycle assessments are presented for green and grey hydrogen production and utilisation in the South African context. The first LCA aims to quantify the environmental impacts of producing green hydrogen, relative to grey hydrogen, and to determine the production route with the least environmental impact. The scenarios investigated for hydrogen production are water electrolysis powered by wind, solar PV or concentrated solar power, steam methane reforming (SMR), and water electrolysis powered by a 2040 grid electricity mix. Furthermore, the impacts of three available electrolysis technologies, viz. polymer electrolyte membrane (PEM), alkaline, and solid oxide electrolysis, were compared. The second LCA aims to compare two systems of utilisation for the green hydrogen that would be produced in South Africa, to determine the option with the highest achievable level of decarbonisation. The application considered is the fuelling of heavy-duty truck transportation. The systems considered are local utilisation for fuelling heavy-duty trucks, and hydrogen exportation to Germany, also to fuel heavy-duty trucks. These two systems were expanded to include conventional fuel utilisation, making the functional units of the systems equal and thus the systems comparable. SimaPro was used to conduct the two LCAs, and the ReCiPe 2016 midpoint method was used for the lifecycle impact assessments. Grid-powered water electrolysis is found to have the highest potential impacts across most impact categories, even for the significantly decarbonised 2040 grid mix, with SMR second. Solar PV-powered electrolysis leads to the highest potential human non-carcinogenic toxicity impact, caused by the supply chains of PV panels. Wind-powered water electrolysis is the least impactful option across most categories; however, it has the highest potential human carcinogenic toxicity impact among the renewable production options, though less than half the value for non-renewable hydrogen production. This toxicity is caused by the supply chains of wind turbines. Considering optimal electrolyser utilisation, combined wind and solar PV-powered electrolysis is the best option. Among the water electrolysis technologies, PEM electrolysis leads to the highest environmental impacts. The energy input for production dominates all the impacts. In terms of utilisation, the environmental impact reductions achievable by the export case outweigh those achievable by using the green hydrogen locally, across all impact categories. The highest level of decarbonisation is achieved by replacing the most environmentally harmful fuel: South African coal-based diesel used to fuel heavy-duty trucks.
The results of the first LCA confirm that green hydrogen is indeed significantly less environmentally impactful than grey hydrogen, but with one hotspot each for PV- and wind-powered electrolysis, which requires attention from project developers. The environmental impacts of all the production scenarios are dominated by the energy required for the production processes. The main finding of the second LCA is that local hydrogen utilisation for heavy-duty truck transportation leads to a larger environmental benefit than hydrogen exportation for heavy-duty truck transportation in another country. The highest level of decarbonisation is achieved by displacing South African coal-based diesel first.
- Item (Open Access): A reduced order modelling methodology for external cylindrical concentrated solar power central receivers (2023). Heydenrych, James; Rousseau, Pieter; Du Sart, Colin.
The use of supercritical carbon dioxide (sCO2) power cycles for concentrated solar power (CSP) applications is becoming increasingly attractive since these cycles may offer lower capital costs and increased thermal efficiency. However, there are currently no utility-scale sCO2-CSP tower plants in operation. Therefore, to aid in the design and analysis process, there is a need to develop sufficiently accurate and computationally inexpensive models for such plants. This dissertation presents a reduced order modelling methodology for external cylindrical concentrated solar power central receivers. The methodology is built on a one-dimensional thermofluid network to model the heat transfer through the tube walls, coupled to a fluid flow network of the solar salt flowing inside the tubes. This is combined with a neural network surrogate model to determine the radiative heat flux impinging upon the tube surfaces. The receiver geometry is discretized along the height and around the circumference and each increment is represented by an equivalent thermal resistance network that represents the heat transfer within the tube walls. The heat transfer network parameters are calibrated using a detailed computational fluid dynamics model, which enables the calculation of the maximum tube wall temperatures. The heat transfer network is connected to the fluid flow network that solves the mass, energy, and momentum balance equations to determine the mass flow rates, pressure drops and temperature distributions. The radiative heat flux profile impinging on the receiver is typically calculated for a specific location and specific time of the day using a tool such as SolarPILOT. However, this can be computationally expensive since the central tower is surrounded by thousands of individual heliostats that are all sources of radiative flux, which depends on the position relative to the sun and relative to the receiver, as well as the direct normal irradiation (DNI) at that location and time. To reduce the associated computational expense, a multilayer perceptron (MLP) surrogate model is developed that allows the prediction of the flux profile for a range of plant configurations and atmospheric conditions at a specific location. The application of the methodology is demonstrated via a case study. The methodology may be used in future studies where sCO2-CSP tower plants are investigated, especially those with an interest in the detail design and analysis of the central receiver.
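The abstract does not specify the MLP implementation; as an illustration of the surrogate idea, the sketch below fits a small regressor mapping sun position and DNI to a discretised receiver flux map (scikit-learn is used for brevity; the inputs, outputs, grid shape and random placeholder data are all assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical training data, e.g. generated offline with a tool like SolarPILOT:
# inputs  = [sun azimuth (deg), sun elevation (deg), DNI (W/m2)]
# outputs = flux map flattened over (n_height x n_circumference) receiver increments
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(-120, 120, 500),   # azimuth
    rng.uniform(5, 85, 500),       # elevation
    rng.uniform(200, 1000, 500),   # DNI
])
Y = rng.random((500, 12 * 24))     # placeholder for precomputed flux maps

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
surrogate.fit(X, Y)

# Cheap flux-map prediction for a new sun position / DNI:
flux_map = surrogate.predict([[30.0, 60.0, 850.0]]).reshape(12, 24)
```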
- Item (Open Access): A Study of Cyanide-Glycine Synergistic Lixiviant and the iGoli Process as Suitable Replacements for Mercury Amalgamation in Artisanal and Small-Scale Gold Mining (2023). Masuku, Wilson; Moyo, Thandazile; Petersen, Joachim.
Artisanal and small-scale gold mining (ASGM) operations are characterised by the use of rudimentary tools and technologies owing to limited access to capital. ASGM is predominantly a poverty-driven activity practised as a source of livelihood, typically in rural communities where people lack other employable skills. Globally, ASGM accounts for 20-25% of gold production, while at local scales this share varies and can be as high as 65% in countries such as Ghana and Zimbabwe. Mercury is used in ASGM to capture gold from free-milling ores in a process called mercury amalgamation. This is the go-to technology in most ASGM operations owing to its availability and ease of operation. However, mercury amalgamation has low recoveries, in the range of 30-33% of the gold from the otherwise rich gold ores typically mined in ASGM. In the amalgamation process, about 70% of the mercury used is lost to the environment with the amalgamation tailings and during the roasting process. Mercury is a toxic heavy metal; mercury poisoning can lead to neurological and behavioural disorders and has been a major concern globally, leading to the signing of the Minamata Convention treaty. Mercury-free gold concentration and extraction methods, such as shaking tables and roasting with borax, have been put forward over the years, but their uptake has been very limited. The reasons for this poor uptake have never been systematically studied, but it is thought that, among other reasons, some technologies are too complex for the ASGM context. Beyond the mercury-free technologies proposed for the ASGM sector, gold extraction and recovery in the large-scale mining sector has attracted researchers' attention for years, with a plethora of technologies having been proposed and tested, yet little effort has been made to establish whether any of these could be a good fit for ASGM. In this study, two mercury-free technologies (a cyanide-glycine lixiviant and the iGoli process) were tested to establish their effectiveness in leaching gold from ores sourced from two ASGM sites. The ores were characterised using QEMSCAN, XRD and XRF to identify mineral phases; quartz was found to be the most dominant mineral, and sulphide minerals in both ores host the largest percentage of the gold. The cyanide-glycine lixiviant uses a combination of cyanide and glycine to improve gold extraction. The results showed that the dissolution rate increases with an increase in glycine concentration in non-agitated systems at 3 g/l NaCN, while the reverse was true in agitated systems at the same cyanide concentration and when it was varied. The percentage of gold extracted in the non-agitated system after 72 hours was 36% at 5 g/l glycine, 21% at 2 g/l glycine and 19% when no glycine was added. In agitated systems at 5 g/l cyanide, the highest extraction after 24 hours was 81% at 2 g/l glycine. Increasing the glycine concentration led to lower gold extractions, with 5 g/l and 10 g/l glycine extracting 74% and 68% respectively. This trend of decreasing extraction with increasing glycine concentration was observed at different fixed cyanide concentrations, i.e. at 1 g/l, 3 g/l and 5 g/l. The iGoli process uses hydrochloric acid and sodium hypochlorite to leach gold.
The extractions were very low, below the detection limit of the analytical instrument, and thus cannot be reported with confidence. However, iron was analysed and showed a 55% extraction of the total iron in the ore. Results from these two technologies were compared to those of mercury amalgamation and benchmarked against the conventional cyanide process. Beyond the purely technical work, a case study of two ASGM sites was conducted with the objective of observing and understanding the day-to-day operations at a typical ASGM site and identifying limitations and opportunities for mercury-free technology adoption. Based on insights drawn from the case studies, it was concluded that the cyanide-glycine lixiviant is relatively easy to implement given current process operation in the ASGM sector, which makes use of vat tanks that do not agitate the slurry (lixiviant + ore). However, the observed poor recoveries associated with the technology in non-agitated systems would be a limitation. When more profit is realised, ASGM practitioners can upgrade to agitated systems and add hydrogen peroxide as an oxidising agent to improve extraction.
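For context on the chemistry involved: conventional cyanidation dissolves gold by the well-known Elsner reaction, shown below in its standard stoichiometry (the glycine synergy studied here modifies the leaching system, but the abstract does not give its mechanism):

```latex
% Elsner's equation: oxidative dissolution of gold in aerated cyanide solution
4\,\mathrm{Au} + 8\,\mathrm{CN^-} + \mathrm{O_2} + 2\,\mathrm{H_2O}
\;\rightarrow\; 4\,[\mathrm{Au(CN)_2}]^- + 4\,\mathrm{OH^-}
```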
- Item (Open Access): A study on the effect of increased heat input on residual stress, microstructure evolution and mechanical properties in Ti6Al4V selective laser melting (2021). Motibane, Londiwe Portia; Knutsen, Robert.
The Aeroswift machine is a novel high-speed powder bed fusion machine developed through a collaborative effort between the CSIR, Aerosud and the DSI. Its novelty lies in the substantial increase in build rate achieved through the implementation of a 5 kW IPG laser and faster laser scanning speeds during processing. It is capable of producing the low-volume, high-value and high-integrity Ti6Al4V components required by the aerospace industry. Commercial selective laser melting (SLM) systems are a good benchmark for the quality and integrity required of aerospace components, although they do not always meet these requirements. The biggest difference between commercial systems and the Aeroswift machine is the heat input used to build components, which follows from the laser powers employed. Heat input is the ratio of laser power to scanning speed; it shapes the thermal history of a built part and its thermal gradients, and therefore its residual stress. Heat input also strongly influences the microstructure produced, which determines the resultant mechanical properties. The focus of this project was to investigate the effect of increased heat input on the residual stress, development of microstructure and mechanical properties of Ti6Al4V specimens produced by the high heat input (400 J/m) Aeroswift system and the low heat input (150 J/m) commercial SLM Solutions M280 machine. This was accomplished by comparing test results for Aeroswift-built specimens (high heat input) with those built by the commercial SLM machine (low heat input). The effect of preheating on these properties was also studied: the low heat input specimens comprised two sets, one built without preheating and the other built at a preheating temperature of 200°C, the maximum preheating temperature of the commercial system used in this study. Firstly, cantilever specimens were used to measure the distortion caused by processing in both systems; the measured spread of the cantilever gave an indication of the distortion caused by each processing condition. Distortion was found to be similar between the high heat input and low heat input specimens, and preheating at 200°C also made no appreciable difference to the amount of distortion. X-ray diffraction was used to measure very-near-surface residual stresses up to a penetration depth of 5 microns. Blocks of 20×20×22 mm³ for each processing condition were used, with measurements taken at the centre of the top surface of the blocks. The very-near-surface stresses were higher with increased heat input: the high heat input specimens had average tensile residual stresses in excess of 650 MPa, while the low heat input specimens had average tensile residual stresses below 400 MPa. The incremental hole drilling technique was used to measure the stresses in the blocks up to a depth of 1 mm from the top surface, with holes drilled at the centre of the top surface of each block. The stress distributions for both the high and low heat input specimens increased from 0.2 mm depth to a similar range of 500-600 MPa between 0.3 mm and 0.8 mm depth. Preheating at 200°C yielded the same amount of stress.
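The heat input figures quoted above follow the simple line-energy definition (with the caveat that linear heat input is only one of several energy-density measures used to characterise SLM processing):

```latex
% Linear heat input: laser power over scanning speed
H = \frac{P}{v}
\qquad
\text{e.g. } H_{\text{Aeroswift}} = 400~\mathrm{J/m}, \quad H_{\text{M280}} = 150~\mathrm{J/m}
```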
The microstructural analysis involved imaging by optical microscopy, scanning electron microscopy and electron backscatter diffraction. This combination of techniques confirmed a martensitic microstructure of α' laths within prior β grains for all the specimens, with the α' laths arranged as basket-weaves as well as colonies. In the high heat input specimens, the prior β grains were columnar, having grown across several layers in the build direction. For the low heat input specimens, both without preheating and with 200°C preheating, the prior β grains were atypically discontinuous. A hexagonal titanium phase was identified as the dominant phase in all specimens, with essentially no cubic phase present. Dog-bone tensile specimens built in the z-direction (build direction) were used to test static mechanical properties. The yield strength and ultimate tensile strength were above 1000 MPa and 1200 MPa respectively for all specimens. The average elongation of 11.2% in the low heat input specimens without preheating was significantly higher than the 4.3% achieved by the high heat input specimens; the microporosity observed under the microscope is thought to have contributed to this behaviour. Compact tension specimens for fracture toughness and fatigue crack growth rate testing were built in the ZX direction as per ASTM E399-17 labelling. The high heat input specimens had an average fracture toughness of 43 MPa√m, compared to less than 38 MPa√m for the low heat input specimens. The high heat input specimens also had better crack growth resistance than the low heat input specimens, while the low heat input specimens without preheating had better crack initiation resistance. The results show that an increase in heat input does not have a substantial effect on the integrity and quality of parts; in fact, it produces results comparable to the commercial SLM processing deployed in this study with respect to the properties studied, with the exception of a lower ductility. This gives further confidence in the advantage of high-speed processing. Future work should include testing at other orientations as well as testing higher preheating temperatures.
- ItemOpen AccessA study on the response of a target plate to a foreign object placed at various depths in a cylindrical charge(2023) Hoare, Matthew; Chung Kim Yuen, Steeve; Govender, Reuben AshleyThe threat of Improvised Explosive Devices (IEDs) has grown dramatically in the 21st century, as the methods and means of warfare have adapted to modern threats such as terrorism. IEDs are especially damaging and lethal because they are often randomly embedded with a variety of projectiles consisting of readily available items, such as ball bearings, nails or glass. The versatile nature of IEDs makes it very difficult to conduct a generalised study of their impact; one major challenge in IED research is the wide range of potential IED geometries, sizes, explosion types and embedded object configurations. Understanding the behaviour of a simplified IED, consisting of blast-driven ball bearings embedded in explosive charges, provides insight into the mechanics of IEDs and their subsequent interactions with a target, with a view to developing better protection against IEDs. This dissertation presents the results of a study investigating the damage caused by a simplified IED consisting of a cylindrical explosive charge embedded with a single ball bearing. The influence of the placement of the ball bearing along the axis of a rear-detonated cylindrical charge was studied, and the placement effects were evaluated in terms of the impact velocity of the ball bearing and its subsequent damage to a Domex 700 steel (also referred to as Strenx in Europe) target plate. Typical deformation of a structure by an IED results from a combination of blast loading (pressure wave) and impact loading from shrapnel/fragments. In this study, the combined blast and impact events of a simplified IED were decoupled into separate events to gain a better understanding of the contributions of the different loading conditions. The target plates were exposed to bare charges to quantify the effects of blast loading alone. Impact tests were carried out using a two-stage gas gun to relate impact velocity to the deflection of the target plate. Tests were also carried out with explosive charges embedded with a ball bearing at varying depths to analyse the combined event. For all blast tests, the charge diameter was kept constant, and three different charge masses with varying placements of the ball bearing were investigated. Computational simulations, validated against experimental data, were used to elucidate additional details of the momentum transfer during the blast event. The results showed that the placement of the ball bearing relative to the charge had a similarly critical influence for all charge masses used.
- ItemOpen AccessA technical and economic feasibility study on repurposing copper mine tailings via microbial induced calcium carbonate precipitation(2021) De Oliveira, Daniel; Randall, DyllonThe current manufacturing of clay-fired and cement bricks contributes greatly to anthropogenic global emissions and environmental damage. A possible way to alleviate such environmental pressures is the adoption of carbon-neutral, microbial induced calcium carbonate precipitation (MICP) bio-bricks as a replacement for traditional bricks. MICP-produced bio-bricks are formed by exploiting the ability of the microorganism Sporosarcina pasteurii to produce a biocement capable of binding sand particles (or any aggregate) into a solid. Furthermore, such bio-bricks can be grown from otherwise 'waste' resources such as human urine. This significantly reduces energy inputs whilst creating value by 'upcycling' waste streams, resulting in a sustainable product that promotes the modern ethos of environmentally friendly circular economies. However, the environmental benefits of MICP bio-bricks are hindered by the use of sand in their production. After water, sand is by volume the world's most exploited and traded raw material, and its supply is being rapidly depleted globally; moreover, sand extraction processes are known to cause extensive environmental damage. A possible solution is to replace the sand aggregate used to grow bio-bricks with mine tailings. The increasing global demand for metal products has resulted in the concurrent production of vast volumes of waste mine tailings which, if left untreated, risk leaching toxins into surrounding populations and biota. It was therefore postulated that this risk could be mitigated by repurposing mine tailings, as a replacement for sand, into MICP bio-bricks. A technical and an economic study were conducted to determine the feasibility of repurposing copper mine tailings into bio-bricks. As bio-bricks are resource-intensive to produce (reagents, chemicals etc.), bio-columns were used as a proxy in studying the technical feasibility of such a process. The technical part of the study involved characterising copper mine tailings received from Colombia in terms of physicochemical make-up and particle size distribution, and developing a MICP submergent technique for growing the bio-columns. The latter was necessitated because, during characterisation of the mine tailings, it was found that the cementation media could not be pumped through columns filled with mine tailings aggregate, rendering the traditional pumping method used to grow MICP bio-solids impractical. The submergent technique was used to compare the MICP efficiency of growing bio-columns from either beach sand or copper mine tailings. In addition, the toxicity of copper to S. pasteurii was investigated, and an attempt was made to acclimate a culture of S. pasteurii to the copper concentration found within copper mine tailings. Furthermore, the copper mine tailings were screened to determine whether they contained any indigenous, anaerobic, copper-tolerant ureolytic extremophiles with the potential to grow more robust bio-columns.
- ItemOpen AccessA Test and Characterisation Facility for Cryogenic Low Noise Amplifiers(2023) Newton, Wesley; Schonken, WillemThis dissertation discusses how the receiver, and the low noise amplifier (LNA) contained within it, are the major contributors to system sensitivity. A method for testing and determining the equivalent noise temperature of a cryogenic LNA operating at a physical temperature of 20 K is selected and presented. This method was tested at the Klerefontein support base, and the measurements showed that the measurement uncertainty was unacceptably high due to a few factors. One of these factors is the thermal gradient across the attenuator, which was investigated via a limited thermal study; a solution was proposed and implemented.
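The abstract does not spell out which measurement method was selected, so purely for orientation, here is a minimal sketch of the standard Y-factor relation commonly used in LNA noise temperature work; the temperatures and Y value below are hypothetical.

```python
# Minimal Y-factor sketch (a standard noise-measurement relation, not
# necessarily the method selected in this dissertation). T_hot and T_cold
# are effective source noise temperatures presented at the LNA input plane,
# and Y is the measured ratio of output noise powers; all values hypothetical.

def equivalent_noise_temperature_k(t_hot_k: float, t_cold_k: float, y: float) -> float:
    """Solve Y = (T_hot + T_e) / (T_cold + T_e) for the device noise temperature T_e."""
    return (t_hot_k - y * t_cold_k) / (y - 1.0)

# Hypothetical measurement on a cryogenic LNA:
print(equivalent_noise_temperature_k(t_hot_k=300.0, t_cold_k=20.0, y=11.0))  # 8.0 K
```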
- ItemRestrictedA tri-phasic continuum model for the numerical analysis of biological tissue proliferation using the Theory of Porous Media. Application to cardiac remodelling in rheumatic heart disease(2019) Mosam, Adam; Skatulla, SebastianThis research forms part of an ongoing project aimed at describing the mechanotransduction of Rheumatic Heart Disease (RHD), in order to study and predict the long-term effects of new therapeutic concepts for treating inflammatory heart diseases and, ultimately, to estimate their effectiveness in preventing heart failure. Attention is given to RHD, a valvular heart disease. RHD is most common in poorer regions and mainly affects young people; it claims approximately 250 000 lives per annum. The Theory of Porous Media (TPM) can represent the proliferative growth and remodelling processes related to RHD within a thermodynamically consistent framework, and is additionally advantageous for biological tissue because it can couple multiple constituents, such as tissue and blood. The research presented here extends an existing biphasic TPM model of solid cardiac tissue (solid phase) saturated with blood and interstitial fluid (liquid phase) [21] to a triphasic model including a third, nutrient phase. This inclusion is motivated by the need to constrain the volume of the liquid phase within the system in response to the description of growth, which in the biphasic model is represented through a mass exchange between the solid and liquid phases (sketched below). Although the nutrient phase acts as a source for growth, the proposed mass supply function used to relate sarcomere deposition to growth is predominantly mechanically driven and bears no connection to any biochemical constituent, which renders the nutrient phase a physiologically arbitrary quantity. However, the nutrient phase provides a platform for the inclusion of known constituents that actively contribute to growth, which may be explored in future research. The triphasic model is applied to a full cardiac cycle of a left ventricle model extracted from magnetic resonance imaging (MRI) scans of patients diagnosed with RHD.
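For readers unfamiliar with TPM, the growth-by-mass-exchange idea can be summarised by the standard constituent mass balances below. This is generic TPM notation assumed for illustration, not the dissertation's own equations.

```latex
% Generic TPM constituent mass balances (illustrative; not the dissertation's
% own equations). \rho^\alpha are partial densities, \mathbf{v}^\alpha the
% constituent velocities, and \hat{\rho}^\alpha the mass supplies through
% which growth is modelled as an exchange between phases.
\begin{equation}
  \frac{\partial \rho^{\alpha}}{\partial t}
  + \operatorname{div}\!\left(\rho^{\alpha}\,\mathbf{v}^{\alpha}\right)
  = \hat{\rho}^{\alpha},
  \qquad \alpha \in \{\text{solid},\ \text{liquid},\ \text{nutrient}\},
\end{equation}
\begin{equation}
  \sum_{\alpha}\hat{\rho}^{\alpha} = 0.
\end{equation}
% Solid growth (\hat{\rho}^{\text{solid}} > 0) is then balanced by a
% corresponding loss from the liquid and/or nutrient phases, so total mass
% is conserved.
```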
- ItemOpen AccessA variable threshold for an energy detector using GNU radio(2018) Lechesa, Wahau Simon; Dlodlo, Mqhele ESpectrum is a natural resource and should be treated as such. Its applications range from short-distance communication links such as Bluetooth to health, power systems, transport, smart city applications, and space communications and exploration. Next Generation Networks (NGNs) are designed to connect millions of devices seamlessly and at high throughput rates in these sectors and others. Spectrum must therefore be utilised efficiently and allocated appropriately. Cognitive radio serves to improve the use of dwindling spectrum, and spectrum sensing is its first and most critical technology, used to determine radio parameters. Energy detection (ED) is a spectrum sensing technique with low computational and operational complexity; it is relatively fast compared with other spectrum sensing techniques and requires no knowledge of the primary user's transmit signal properties, such as modulation or error correction schemes. In its classical form, ED compares the received signal energy with a fixed detection threshold estimated from an expected noise level. In practice, however, noise varies randomly due to thermal variations, non-uniform electron movement, semiconductor imperfections and external noise sources, among others. This noise uncertainty negatively affects the fixed threshold approach used in classical ED. This dissertation presents the development of an out-of-tree module for a variable threshold energy detector that uses the noise power estimated at each sample point. GNU Radio software and Ettus Universal Software Radio Peripheral (USRP) hardware were used to evaluate the performance of the proposed detector, and the Neyman-Pearson criterion was adopted to derive the variable threshold (a generic sketch follows this abstract). The variable threshold energy detector successfully sensed the presence of a primary user signal in 1.25% less spectrum sensing time than the constant threshold detector. A receiver operating characteristic (ROC) curve also showed that the proposed variable threshold energy detector generally outperformed the constant threshold energy detector at low signal-to-noise ratios.
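As a rough illustration of the variable-threshold idea (this is not the author's GNU Radio out-of-tree module), the sketch below sets a Neyman-Pearson threshold from an estimated noise power using the usual Gaussian approximation for the average-energy statistic; the sample counts, false-alarm target and signal parameters are all assumptions.

```python
import numpy as np
from scipy.stats import norm

# Illustrative Neyman-Pearson energy detector with a noise-power-dependent
# threshold. Under H0 (noise only), the average energy of N complex samples
# is approximately Gaussian with mean sigma^2 and std sigma^2 / sqrt(N),
# giving the threshold gamma = sigma^2 * (1 + Q^{-1}(P_fa) / sqrt(N)).

def np_threshold(noise_power: float, n_samples: int, p_fa: float) -> float:
    """Threshold on the average-energy statistic for a target false-alarm rate."""
    return noise_power * (1.0 + norm.isf(p_fa) / np.sqrt(n_samples))

def energy_detect(x: np.ndarray, noise_power: float, p_fa: float = 0.01) -> bool:
    """Declare a primary user present if the average energy exceeds the
    threshold computed from the (re-)estimated noise power."""
    statistic = np.mean(np.abs(x) ** 2)
    return statistic > np_threshold(noise_power, x.size, p_fa)

rng = np.random.default_rng(0)
n = 4096
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
signal = 0.3 * np.exp(2j * np.pi * 0.1 * np.arange(n))  # weak primary-user tone
print(energy_detect(noise, noise_power=1.0))            # expected: False
print(energy_detect(noise + signal, noise_power=1.0))   # expected: True
```

In the classical fixed-threshold case `noise_power` would be a constant; the variable-threshold idea is to re-estimate it as conditions change, so the threshold tracks the actual noise level instead of an assumed one.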
- ItemOpen AccessA virtual element method for transversely isotropic elasticity(2018) Van Huyssteen, Daniel; Reddy, Batmanathan DayaThis work studies the approximation of plane problems of transversely isotropic elasticity using a low-order virtual element method (VEM). The VEM is an alternative to the standard finite element method, characterised by complete freedom in element geometry: elements may be arbitrary polygons in two dimensions or polyhedra in three. Transversely isotropic materials are characterised by an axis of symmetry perpendicular to a plane of isotropy, and have applications ranging from fibre reinforcement to biological materials. The governing equations of the transversely isotropic elasticity problem are derived, and a virtual element formulation of the problem is presented along with a sample implementation of the method (the generic structure of the element bilinear form is sketched below). This work focuses on the treatment of near-incompressibility and near-inextensibility. These are explored both for homogeneous problems, in which the plane of isotropy is fixed, and for non-homogeneous problems, in which the fibre directions defining the plane of isotropy vary with position. In the latter case, various options are explored for approximating the non-homogeneous terms at the element level. A range of numerical examples shows the VEM approximations to be robust and locking-free for a selection of element geometries, and for fibre directions corresponding to mild and strong inhomogeneity.
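For context, a low-order VEM typically builds its element bilinear form from a computable polynomial projection plus a stabilisation. The generic split below is standard VEM structure, assumed here for illustration rather than quoted from this work.

```latex
% Generic low-order VEM element bilinear form (standard structure, not this
% work's specific formulation). \Pi is a polynomial projection computable
% from the element's degrees of freedom; the first term enforces polynomial
% consistency, while s^E stabilises the non-polynomial remainder.
\begin{equation}
  a_h^{E}(\mathbf{u}_h,\mathbf{v}_h)
  = a^{E}\!\bigl(\Pi\,\mathbf{u}_h,\;\Pi\,\mathbf{v}_h\bigr)
  + s^{E}\!\bigl((\mathrm{I}-\Pi)\,\mathbf{u}_h,\;(\mathrm{I}-\Pi)\,\mathbf{v}_h\bigr).
\end{equation}
```

Robust, locking-free behaviour in the nearly incompressible and nearly inextensible regimes then hinges on how the anisotropic material data enter these two terms, which is exactly where the treatment of non-homogeneous fibre directions matters.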