Browsing by Author "Herman, Ronald"
Now showing 1 - 8 of 8
- Item (Open Access): Applying the Herman-Beta probabilistic method to MV feeders (2015). Chihota, Munyaradzi Justice; Gaunt, C Trevor; Herman, Ronald.
The assessment of voltage drop in radial feeders is an important element of network design and planning. The task is not straightforward, however, because the operation of modern power systems is strongly influenced by uncertain and random variables such as stochastic load demand and power generation from renewable energy resources. Classic deterministic methods, which model load demand and generation with fixed mean values, consequently turn out to be inadequate and inaccurate tools for analysing power flow in such an uncertainty-filled system. Statistically based methods are better suited to the task, as they account for input-variable uncertainty in the load-flow calculation. In the South African context, the Herman-Beta (HB) algorithm, a probabilistic load-flow tool developed by Herman et al., was adopted as the method for voltage assessment in Low Voltage (LV) networks. The method was shown to have significant advantages over many other probabilistic methods for LV feeders, as investigated by Sellick and Gaunt: its speed and accuracy are superior to deterministic, numeric probabilistic and other analytical probabilistic methods. The evolving connection of smaller generators, referred to as Distributed Generators (DGs), to the utility grid inspired the extension of the HB algorithm to active LV distribution networks. The HB algorithm was, however, formulated specifically for LV feeders: its assumptions of purely resistive feeders and unity power factor loads make it unsuitable for the Medium Voltage (MV) distribution network. In South Africa, deterministic methods are still used for network design in MV distribution networks, so the drawbacks of such methods, for example inaccuracy and computational burden with large systems, carry over into the quality of MV feeder designs. The performance of the HB algorithm, together with the advantages of load modelling with the Beta probability density function (Beta pdf), suggested that modifying the input parameters could allow the HB algorithm to be used for voltage calculations on MV networks. This work therefore adapts the way the HB algorithm is used, to make it suitable for voltage calculations on MV feeders. The HB algorithm for LV feeders is first analysed, coded in MATLAB, tested and validated. The input parameters for feeder impedance and load current are then modified to include the effects of reactance and non-unity power factor loads, using approximate modelling techniques: for reactance, the modulus (absolute value) of the complex impedance is used in place of the resistance, and the load current is adjusted by inflating it according to the load power factor. The results of calculations with the HB algorithm are tested against a Monte Carlo Simulation (MCS) solution of the feeder with an accurate model (full representation of feeder impedance and load power factor). The approach is extended to include shunt capacitor connections and DG in voltage calculations using the HB algorithm, again testing the results against MCS.
The outcomes of this research are that the approach of adjusting the input parameters of line resistance and load current significantly improves the accuracy of calculations using the HB algorithm for MV feeders. Comparison with the results of MC simulations indicates that the error of voltage calculations on MV feeders will be less than 2% of the 'accurate probabilistic value'. However, it is not possible to predict the error for a particular application.
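The MCS benchmark described above can be sketched briefly. The snippet below is a minimal Python illustration (not the thesis code, which was written in MATLAB) of the "accurate model": Beta-distributed load currents at a non-unity power factor flowing through the full complex feeder impedance, with the end-of-feeder voltage read at a chosen percentile. The feeder layout, impedance, Beta parameters and power factor are all assumptions for illustration.

```python
# Hypothetical Monte Carlo benchmark for voltage on a radial MV feeder.
# Loads: Beta-distributed current magnitudes at a fixed lagging power factor;
# the feeder uses full complex series impedance (R + jX) per section.
import numpy as np

rng = np.random.default_rng(42)

N_RUNS = 10_000                     # Monte Carlo iterations
N_NODES = 10                        # load points along the radial feeder
V_SOURCE = 11_000 / np.sqrt(3)      # assumed 11 kV line-to-neutral source, V
Z_SECTION = 0.30 + 0.25j            # assumed per-section impedance, ohms
A_SHAPE, B_SHAPE, I_MAX = 2.0, 8.0, 60.0  # assumed Beta model scaled to 60 A
PF = 0.95                           # assumed lagging load power factor

phi = np.arccos(PF)
v_end = np.empty(N_RUNS)
for k in range(N_RUNS):
    # Sample a Beta-distributed current magnitude at every load point.
    i_mag = I_MAX * rng.beta(A_SHAPE, B_SHAPE, size=N_NODES)
    i_load = i_mag * np.exp(-1j * phi)         # lagging current phasors
    # Each section carries the sum of all downstream load currents.
    i_section = np.cumsum(i_load[::-1])[::-1]
    v_end[k] = abs(V_SOURCE - np.sum(Z_SECTION * i_section))

# Report the voltage at a chosen risk level, as in probabilistic design.
print(f"5th-percentile end-of-feeder voltage: {np.percentile(v_end, 5):.0f} V")
```

An adjusted-parameter HB run would then be compared against percentiles from a benchmark of this kind.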
- Item (Open Access): Cost comparison of hydropower options for rural electrification in Rwanda (2012). Nsengiyumva, Anicet; Gaunt, C Trevor; Herman, Ronald.
The decision to develop a hydropower plant depends on several factors, of which cost is the most significant. This thesis, entitled "Cost comparison of hydropower options for rural electrification in Rwanda", sets out to show that using a large number of mini hydropower plants to electrify sparse rural areas in Rwanda is the least-cost option when compared with installing either a single small or a single large hydropower plant. Rural households are considered to be randomly distributed, and a model composed of 98 rural villages with three different population densities is used to test the validity of the hypothesis. Three different hydropower options providing the same level of service to rural households were used for the cost comparison, and the relationship between electrification cost per household and population density is deduced. Many distribution technologies can be used in rural areas, and selecting the appropriate technology is a central concern because it affects the cost of the whole distribution system: the rural network should be carefully designed so that the conductors used for the LV and MV networks are properly sized at low cost, and the distribution cost depends largely on the amount of power to be delivered. Based on these findings, the cost comparison of mini, small and large hydropower schemes for rural electrification in Rwanda is discussed.
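The shape of the cost-per-household versus density relationship can be sketched with a toy model. Every figure below is an invented placeholder, not a number from the thesis; the point is only that line length per household shrinks as density rises, which favours small local plants in sparse areas.

```python
# Purely illustrative cost-per-household model; all unit costs, demands and
# densities are assumed placeholders, not figures from the thesis.
def cost_per_household(gen_capex_per_kw, line_capex_per_km, demand_kw,
                       density_hh_per_km2):
    # Crude geometry: line length per household shrinks as density rises.
    line_km_per_hh = 1.0 / density_hh_per_km2 ** 0.5
    return gen_capex_per_kw * demand_kw + line_capex_per_km * line_km_per_hh

for density in (20, 80, 320):  # assumed households per square km
    mini = cost_per_household(3000, 2000, 0.5, density)    # short local lines
    large = cost_per_household(2000, 15000, 0.5, density)  # long MV backbone
    print(f"density {density:>3}/km^2: mini ${mini:,.0f}, large ${large:,.0f}")
```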
- Item (Open Access): Designing low voltage feeders to meet quality of supply specifications for voltage variations (2012). Kadada, Holiday C; Gaunt, C Trevor; Herman, Ronald.
The provision of electricity has become a global necessity, and in the developing world residential electrification has become a tool for poverty alleviation. Unfortunately, connecting residential customers to the grid, particularly in low-income communities, is more of a social task, as the expected returns from the investment are unlikely to cover the costs of electrifying and supplying the communities. In such cases it is important not to over- or under-design a low voltage (LV) distribution network, as this leads to unnecessary capital expenditure. The main source of uncertainty in designing LV residential distribution networks has been found to be the model used to represent the residential load. Residential electricity demand is a stochastic parameter dependent on the behaviour and occupancy patterns of household occupants. Traditionally the After Diversity Maximum Demand (ADMD), which is in essence an average value of load per household, was used to model load; however, using a single value to describe the complex random nature of load is misleading. Probabilistic methods have been adopted to model residential load behaviour, as they are better suited to representing the stochastic nature of the load, and the Beta probability function was found to be the best representative function of residential load because its characteristics reflect the attributes of residential load. Studies on existing LV networks in South Africa have found that these networks operate outside Quality of Supply (QoS) regulation. The current South African QoS guideline, NRS 048-2, stipulates that no more than 5% of supply voltage levels measured during a given period may fall outside the QoS compliance limits; that is, 95% QoS compliance of supply voltage levels is required for all LV networks. This QoS condition has not yet been worked into the design parameters, and if a network operates outside the QoS guidelines a network upgrade is necessary. This research showed that the main source of the QoS violations of these networks was the risk level used to calculate the expected voltage drops during the design stage. Typically, 10% risk is used for voltage-drop calculations, which means that a best case of 90% compliance is expected, outside the 95% compliance limit required by NRS 048-2. This study focused on two objectives. The first was to derive design parameters that are representative of residential load and can be used to design LV networks that comply with QoS specifications. The second was to define a means, or develop a model, for LV network designers to select the parameters appropriate for a design based on the customer class to be electrified. New design parameters were derived that incorporate the 95% compliance limit of NRS 048-2, allowing LV networks built on the new parameters to operate within QoS limits. The parameters were derived using residential load data collected in South Africa since the early 1990s. An equation was also derived which allows countries with only ADMD data available to calculate QoS design parameters suitable for their situation.
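The link between design risk and compliance can be made concrete: a design risk level corresponds to a percentile of the load distribution. The sketch below shows this for a single household with assumed Beta parameters; the actual HB method works with sums of Beta-distributed currents along a feeder, and the thesis derives its parameters from measured South African load data.

```python
# Illustrative mapping from a design risk level to a percentile of a
# Beta-modelled residential load current. Shape parameters and the 60 A
# scaling are assumptions for illustration only.
from scipy.stats import beta

A, B, SCALE_AMPS = 1.8, 7.2, 60.0   # assumed Beta shapes and scaling (amps)

for risk in (0.10, 0.05):  # 10% design risk (old practice) vs 5% (NRS 048-2)
    # Design current such that the load exceeds it only `risk` of the time.
    i_design = SCALE_AMPS * beta.ppf(1.0 - risk, A, B)
    print(f"risk {risk:.0%}: design current {i_design:.1f} A per household")
```

Tightening the risk from 10% to 5% pushes the design current to a higher percentile, which is the mechanism behind the stricter design parameters derived in the thesis.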
- Item (Open Access): Ideal hydrocracking catalysts for the conversion of FT wax to diesel (2014). Ndimande, Conrad; Gaunt, C Trevor; Herman, Ronald; Böhringer, Walter.
The Fischer-Tropsch wax synthesis process, followed by upgrading of the wax to useful distillate fuels by mild hydrocracking, is a well-known, economically viable method of producing liquid fuels, in particular diesel. This project seeks to develop an ideal hydrocracking catalyst (i.e. one over which only primary cracking occurs) for the conversion of Fischer-Tropsch (FT) wax to diesel, and to determine the effect of carbon monoxide on the activity and selectivity of the hydrocracking catalyst, with a view to integrating low temperature FT wax synthesis and wax hydrocracking into a single stage. Theoretically, combining the Fischer-Tropsch unit with an ideal hydrocracking unit can produce diesel yields of up to 80 wt%; a non-ideal hydrocracking catalyst would lower the middle distillate yield through secondary cracking. Primary cracking of the paraffins produced by the low temperature FT process occurs only when the activity of the metal is high and the rate-limiting step occurs on the acid site. Integrating the wax synthesis process with the subsequent work-up of the wax to distillate fuels is not without challenges, chiefly the low reaction temperature and pressure (225°C and 20 bar) at which the hydrocracking catalyst must operate. Noble metals combined with zeolites are known to be active for hydrocracking under such conditions. Carbon monoxide, a feedstock of the FT process, poisons noble metal catalysts; knowledge of its effect on hydrocracking catalyst performance is therefore essential. The hydrocracking catalysts were tested with the metal and acid sites segregated (the metal supported on an inert carrier, physically mixed with the zeolite) and with the two sites in close proximity (the metal impregnated into the zeolite). The tests were carried out both in the presence and in the absence of CO, at a level consistent with the FT feed ratio. The noble metals Rh, Ru and Pd were used as co-catalysts with H-MFI-90. It was found that physical distance between the metal and the acid sites disturbs the balance of the two sites by introducing a transport step, which shows in both the activity and the selectivity of the catalyst. Pd exhibited higher activity than Rh and Ru. Primary cracking was found to be unattainable when the metal and acid sites are segregated. When the metal and acid sites were in close proximity (impregnated catalyst), near-primary hydrocracking performance was observed at a metal loading of 0.9 wt% Pd. Secondary cracking was aggravated by the introduction of CO on both the segregated and the impregnated catalyst.
- Item (Open Access): Reliability cost and worth assessment of industrial and commercial electricity consumers in Cape Town (2010). Dzobo, Oliver; Gaunt, C Trevor; Herman, Ronald.
A good understanding of the financial value that electricity customers place on power supply reliability, and of the underlying factors that give rise to higher and lower values, is an essential input to the design, planning and operating standards of power system networks. This research study is a first step toward addressing the current absence of consistent data needed to support better estimates of the economic value of power supply reliability. The economic value of power supply reliability is usually measured through the power interruption costs faced by electricity customers. The aim of this research study was to develop Customer Interruption Cost (CIC) models for both commercial and industrial customers.
- Item (Open Access): Risk-based interruption cost index based on customer and interruption parameters (2014). Dzobo, Oliver; Gaunt, C Trevor; Herman, Ronald.
Modern competitive electricity markets do not ask for power systems with the highest possible technical perfection, but for systems with the highest possible economic efficiency. Higher economic efficiency can only be achieved when accurate and flexible analysis tools are used. Thus, the modelling of reliability inputs, the methodology applied in assessing supply reliability and the interpretation of the reliability outputs should be carefully considered in power system management. In order to relate investment costs to the resulting levels of supply reliability, supply reliability must be quantified in monetary terms, which can be done by calculating the expected interruption costs. Interruption cost evaluation, however, cannot be done correctly in all cases by methods based on the commonly used average values. The objective of this thesis is to find a new way of calculating interruption costs that combines the precision of a probabilistic method with the flexibility and correctness of customer and interruption parameters. A new reliability worth index was found, based on customer and interruption parameters. This new index, called the Risk-Based Interruption Cost (RBIC) index, is described in detail in this thesis. The technique applies a time-based probabilistic modelling approach to network reliability worth parameters, using probability distribution functions to model customer interruption costs (CICs) while taking into account seasonal, day-of-week and time-of-day influences. In addition, customer-specific parameters (economic activity, energy consumption, turnover and power interruption mitigation measures) are used to segment electricity customers into clusters with similar cost profiles. Unlike the conventional deterministic approach, the new technique thus considers variability in CICs. The new model and the methods for calculating the new reliability worth index have been implemented in a computer program, and the accuracy of the calculation method was tested in various case studies and by comparison with the traditional average process. This research shows that probability density functions are superior to deterministic average values when modelling reliability worth parameters: probability distribution functions reflect the variability in reliability worth parameters through their dispersion and skewness. Disregarding the effect of the probability distribution of the interruption cost leads to large errors, up to 40% and more, in the calculated expected interruption costs. The actual error in a specific reliability worth calculation is hard to estimate, but it is clear that it cannot simply be ignored. Furthermore, the risk-based approach applied to the interpretation of the RBIC index significantly influences the perception of the network's reliability performance, and allows the uncertainty accepted in a network planning or operation decision to be quantified. Use of the new reliability worth index offers more flexibility in reliability worth assessment and produces more accurate results. It can be used in all areas of power system reliability worth assessment, which have always been the exclusive domain of the average process.
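Why averages mislead here can be shown in a few lines. The sketch below contrasts a distribution-based expected interruption cost with a naive average-based estimate, using an assumed nonlinear cost-duration curve and assumed lognormal durations; these are illustrative stand-ins, not the thesis's fitted CIC models, but they reproduce the kind of gap the thesis reports.

```python
# Hedged sketch: distribution-based vs average-based interruption costs.
# Event rate, duration distribution and cost curve are all assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 50_000

n_events = rng.poisson(1.2, size=N)     # assumed interruptions per year
annual_cost = np.zeros(N)
for k in range(N):
    dur = rng.lognormal(mean=0.5, sigma=0.8, size=n_events[k])  # hours
    annual_cost[k] = np.sum(50.0 + 120.0 * dur**1.3)  # nonlinear $/event

# Naive estimate: plug the *mean* duration into the cost curve.
avg_based = 1.2 * (50.0 + 120.0 * np.exp(0.5 + 0.8**2 / 2)**1.3)
print(f"distribution-based expectation: ${annual_cost.mean():,.0f}")
print(f"95th percentile (risk view):    ${np.percentile(annual_cost, 95):,.0f}")
print(f"average-based estimate:         ${avg_based:,.0f}")
```

Because the cost curve is nonlinear in duration, the average-based figure understates the true expectation, and the percentile view shows the risk information that a single average discards entirely.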
- Item (Open Access): Using probability density functions to analyze the effect of external threats on the reliability of a South African power grid (2014). Edimu, Milton; Gaunt, C Trevor; Herman, Ronald.
The implications of reliability-based decisions are a vital component of the control and management of power systems. Network planners strive to achieve an optimum balance of investment and reliability; network operators, on the other hand, aim to mitigate the costs associated with low levels of reliability. Effective decision making requires the management of uncertainties in the process applied. Thus, the modelling of reliability inputs, the methodology applied in assessing network reliability and the interpretation of the reliability outputs should be carefully considered in reliability analyses. This thesis applies probability density functions, as opposed to deterministic averages, to model component failures. The probabilistic models are derived from historical failure data that is usually confined to finite ranges; the Beta distribution, with its unique ability to be rescaled to any finite range, is therefore selected. The thesis presents a new reliability evaluation technique based on sequential Monte Carlo simulation. The technique applies a time-dependent probabilistic modelling approach to network reliability parameters, using Beta probability density functions to model stochastic network parameters while taking into account seasonal and time-of-day influences. While the modelling approach can be applied to different aspects such as intermittent power supply and system loading, in this thesis it is applied to the failure and repair rates of network components. Unlike conventional sequential Monte Carlo methods, the new technique does not require the derivation of an inverse translation function for the probability distribution applied. The conventional Monte Carlo technique simulates both the up and the down component states when building their chronological cycles; the new technique instead focuses on simulating only the down states of component chronological cycles, determining the number of down states, when they will occur and how long they will last, before developing the chronological cycle (see the sketch after this item). Tests performed on a published network show that focusing on the down states significantly improves the computation time of a sequential Monte Carlo simulation. Also, the reliability results of the new technique depend more on the input failure models than on the number of simulation runs or the stopping criterion applied to a simulation, and in this respect it gives results different from present standard approaches. The thesis also applies the new approach to a real bulk power network, part of the South African power grid, so the network threats considered and the corresponding failure data collected are typical of real South African conditions. The thesis shows that probability density functions are superior to deterministic average values when modelling reliability parameters: they reflect the variability in reliability parameters through their dispersion and skewness. The time-dependent probabilistic approach is applied in both planning and operational reliability analyses, and the component failure models developed show that the variability in network parameters differs between planning and operational reliability analyses.
The thesis shows how the modelling approach is used to translate long-term failure models into operational (short-term) failure models. DIgSILENT and MATLAB software packages are used to perform the network stability and reliability simulations in this thesis. The reliability simulation results of the time-dependent probabilistic approach show that the perception of a network's reliability is significantly affected when probability distribution functions that account for the full range of parameter values are applied as inputs. The results also show that the application of the probabilistic models to network components must be considered in the context of either network planning or operation. Furthermore, the risk-based approach applied to the interpretation of reliability indices significantly influences the perception of the network's reliability performance, allowing the uncertainty accepted in a network planning or operation decision to be quantified.
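The down-state-focused idea can be sketched as follows for a single component over one year: sample how many outages occur, when they start and how long each lasts, rather than alternating up/down transitions. All parameters below are illustrative assumptions, not the thesis's fitted failure models; note that sampling the Beta repair duration directly avoids deriving an explicit inverse-transform function.

```python
# Hedged sketch of a down-state-focused sequential Monte Carlo year for one
# component. Failure rate, repair range and Beta shapes are assumed.
import numpy as np

rng = np.random.default_rng(7)
HOURS_PER_YEAR = 8760

FAILURES_PER_YEAR = 2.5              # assumed mean failure rate
REPAIR_MIN, REPAIR_MAX = 1.0, 24.0   # assumed finite repair range, hours
A_SHAPE, B_SHAPE = 2.0, 5.0          # assumed Beta shape for repair duration

def simulate_down_states(rng):
    """Return (start_hour, duration_hours) for each outage in one year."""
    n_down = rng.poisson(FAILURES_PER_YEAR)                 # how many
    starts = np.sort(rng.uniform(0, HOURS_PER_YEAR, n_down))  # when
    # Beta sample rescaled from [0, 1] to the finite repair range.
    durations = REPAIR_MIN + (REPAIR_MAX - REPAIR_MIN) * rng.beta(
        A_SHAPE, B_SHAPE, size=n_down)
    return list(zip(starts, durations))

unavail = np.mean([sum(d for _, d in simulate_down_states(rng))
                   for _ in range(20_000)]) / HOURS_PER_YEAR
print(f"estimated unavailability: {unavail:.4%}")
```

Because only the (relatively rare) down states are generated, the up states fall out implicitly, which is the source of the computation-time saving the abstract describes.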
- Item (Open Access): Voltage calculation on low voltage feeders with distributed generation (2014). Namanya, Emmanuel; Gaunt, C Trevor; Herman, Ronald.
The increasing levels of greenhouse gas emissions and the continued depletion of fossil fuels have driven power utilities to use renewable energy sources for power generation. In South Africa, a target was set in 2008 to achieve 10000 GWh of renewable generation by 2013, including DG on LV feeders. This has brought an increase in small-scale generators close to load centres in low voltage distribution networks, such as solar PV panels on residential houses, to supplement the energy needs of consumers, and has sparked much debate over the impacts, as well as the benefits, of increasing the amount of generation on these low voltage (LV) feeders. However, the power utility holds the statutory role of preserving and maintaining the quality of electricity supply, and must therefore assess any impact of increasing generation on LV distribution systems. This created the need for a planning tool to assess the impact of increasing DG on LV distribution networks. Much work has been carried out by researchers to assess the impact of DG on the power system, using various indicators such as frequency, power losses, current and voltage. Keeping the voltage of a DG-integrated feeder within the pre-defined standards is a major challenge for power utilities today. In this report, the voltage impact of DG in LV distribution systems is examined and analysed for increasing DG penetration, particularly solar PV panels in residential households. In South Africa, the recommended method for voltage calculation in feeders is the Herman-Beta (HB) algorithm, which is used in the design of passive LV feeders. In 2011, Gaunt experimented with modelling DG as negative loads in the HB algorithm to extend the voltage calculation to LV feeders with DG. This work identifies and develops a tool to enable power utility planners to analyse the voltage impact of DG on LV feeders, building on the DG modelling approach introduced by Gaunt in 2011 to produce an algorithm for voltage calculation in active LV feeders with DG. This involves three major steps. The first step involves thorough testing of the HB algorithm, written in MATLAB, for passive LV feeders, and validating it against voltage calculation through Monte Carlo Simulation (MCS). The second step involves amending and extending the HB algorithm for voltage calculation in active LV feeders with DG, again with testing and validation against MCS. With the HB algorithm fully tested and validated, the third and final step, analysing the voltage-rise constraints of active LV feeders, involves running the HB algorithm, an analytical method, within an MCS to create various scenarios on the feeder. Simulations were performed to assess the voltage impact of increasing DG penetration on LV feeders for various test cases that mimic practical LV feeder conditions. The outcome of this study is an application tool for the design of active LV feeders, whose results are summarised into implications for voltage-rise mitigation and useful information on the DG hosting capacity of LV feeders.
The recommended DG penetration limit for LV feeders in this study is a DG capacity of 30% of the actual ADMD used to design the passive feeder; it is shown that beyond this limit the feeder should be reinforced to avoid incidents of voltage violations (a simplified illustration follows below). In addition, the work done in this project lays a foundation for a variety of similar studies on active LV feeders, such as the effect of solar water heating and the penetration of other DG technologies such as wind.
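The "DG as negative load" idea can be illustrated with a plain Monte Carlo sketch rather than the analytical HB algorithm itself. Feeder data, Beta load and PV models, and the PV scaling (roughly 30% of the assumed maximum load) are all assumptions for illustration.

```python
# Hedged sketch: PV injections modelled as negative Beta-distributed loads on
# a resistive LV feeder, with end-of-feeder voltage percentiles showing both
# voltage drop (low tail) and voltage rise (high tail). All values assumed.
import numpy as np

rng = np.random.default_rng(3)

N_RUNS, N_NODES = 10_000, 12
V_SOURCE = 230.0                  # assumed LV phase voltage, volts
R_SECTION = 0.05                  # ohms per section (resistive LV assumption)
LOAD_A, LOAD_B, LOAD_MAX = 1.6, 6.4, 40.0  # assumed Beta load model, amps
PV_A, PV_B = 4.0, 2.0             # assumed Beta model for PV output

def end_voltage(pv_max_amps):
    i_load = LOAD_MAX * rng.beta(LOAD_A, LOAD_B, (N_RUNS, N_NODES))
    i_pv = pv_max_amps * rng.beta(PV_A, PV_B, (N_RUNS, N_NODES))
    i_net = i_load - i_pv         # DG injections as negative loads
    # Each section carries the net current of everything downstream of it.
    i_section = np.cumsum(i_net[:, ::-1], axis=1)[:, ::-1]
    return V_SOURCE - R_SECTION * i_section.sum(axis=1)

for pv in (0.0, 12.0):            # no PV vs ~30% of assumed maximum load
    v = end_voltage(pv)
    print(f"PV max {pv:4.1f} A: 5th pct {np.percentile(v, 5):.1f} V, "
          f"95th pct {np.percentile(v, 95):.1f} V")
```

As PV penetration grows, the upper percentile of the end-of-feeder voltage climbs, which is the voltage-rise constraint that motivates the penetration limit reported above.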