Browsing by Author "Laubscher, Ryno"
Now showing 1 - 6 of 6
- Item (Open Access): A physics-informed neural network modelling methodology to analyse integrated thermofluid systems (2024). Laugksch, Kristina Karli; Rousseau, Pieter; Laubscher, Ryno.

Physics-informed neural networks (PINNs) were developed to overcome the limitations of acquiring large training datasets that are commonly encountered when using purely data-driven machine learning methods. This study explores a PINN modelling methodology to analyse steady-state integrated thermofluid systems based on the mass, energy, and momentum balance equations, combined with the relevant component characteristics and fluid property relationships. The PINN methodology is applied to three thermofluid systems that encapsulate important phenomena typically encountered in integrated thermofluid systems modelling, namely: (i) a heat exchanger network with two different fluid streams and components linked in series and parallel; (ii) a recuperated closed Brayton cycle containing various turbomachines and heat exchangers; and (iii) a simplified boiler consisting of a furnace and a radiative-convective superheater. The predictions of the three PINN models were compared to benchmark solutions generated via conventional, physics-based thermofluid process models. The largest average relative error across all three models is only 0.93%, indicating that the PINN methodology can successfully be implemented to generate accurate solutions using the non-dimensionalised forms of the balance equations. Furthermore, it was shown that the trained PINN models provided a significant increase in inference speed compared to the conventional process models. The PINN modelling methodology was then extended to develop a surrogate model for the heat exchanger network. An additional surrogate model was developed for comparison using a data-driven multilayer perceptron (MLP) neural network. The MLP surrogate model was able to interpolate accurately; however, its predictive performance declined when making predictions for samples that fell outside the range of the training data. Despite various refinements, the PINN surrogate model could only be trained successfully for datasets that contained up to five unique samples. This limitation could not be resolved within the scope of the present study and should be investigated further. The accuracy of the PINN surrogate model degraded significantly when used to extrapolate beyond the training envelope. Due to the constraint on the number of training samples, it is impossible to draw a general conclusion regarding the extrapolation ability of the PINN concept. In spite of its current limitations, the significant increase in computational speed offered by the PINN modelling methodology when used to analyse integrated thermofluid systems suggests that this is a promising modelling technique that should be explored further.
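As a rough illustration of the physics-constrained training described in this abstract, the sketch below trains a small network against a single non-dimensional energy-balance residual in PyTorch. The network size, the single heat exchanger constraint and all variable names are illustrative assumptions, not the authors' implementation; the full methodology would add mass and momentum residuals, component characteristics and fluid property relationships to the loss.

```python
# Minimal PINN-style sketch (not the thesis code): the network is trained so that
# its outputs satisfy a non-dimensionalised steady-state balance residual at
# sampled collocation points, rather than fitting labelled training data.
import torch
import torch.nn as nn

class PINN(nn.Module):
    def __init__(self, n_in=2, n_out=2, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, n_out),
        )

    def forward(self, x):
        return self.net(x)

def physics_residual(model, x):
    """Energy-balance residual for a notional two-stream heat exchanger.

    x[:, 0] = non-dimensional hot-stream inlet temperature
    x[:, 1] = non-dimensional cold-stream inlet temperature
    Model outputs: non-dimensional outlet temperatures (hot, cold).
    With equal capacity rates, the hot-side loss must equal the cold-side gain:
        (T_h_in - T_h_out) - (T_c_out - T_c_in) = 0
    """
    t_out = model(x)
    q_hot = x[:, 0] - t_out[:, 0]
    q_cold = t_out[:, 1] - x[:, 1]
    return q_hot - q_cold

model = PINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    # Collocation points sampled over the non-dimensional operating envelope.
    x = torch.rand(256, 2)
    res = physics_residual(model, x)
    loss = torch.mean(res**2)   # physics loss only; no labelled data required
    opt.zero_grad()
    loss.backward()
    opt.step()
```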
- Item (Open Access): Application of probabilistic deep learning models to simulate thermal power plant processes (2022). Raidoo, Renita Anand; Laubscher, Ryno.

Deep learning has gained traction in thermal engineering due to its applications to process simulations, the deeper insights it can provide and its ability to circumvent the shortcomings of classic thermodynamic simulation approaches by capturing complex inter-dependencies. This work sets out to apply probabilistic deep learning to power plant operations using historic plant data. The first study presented entails the development of a steady-state mixture density network (MDN) capable of predicting effective heat transfer coefficients (HTCs) for the various heat exchanger components inside a utility-scale boiler. Selected directly controllable input features, including the excess air ratio, steam temperatures, flow rates and pressures, are used to predict the HTCs. In the second case study, an encoder-decoder mixture density network (MDN) is developed using recurrent neural networks (RNNs) for the prediction of utility-scale air-cooled condenser (ACC) backpressure. The effects of ambient conditions and plant operating parameters, such as extraction flow rate, on ACC performance are investigated. In both case studies, hyperparameter searches are done to determine the best performing architectures for these models. Comparisons are drawn between the MDN models and standard model architectures in both case studies. The HTC predictor model achieved 90% accuracy, which equates to an average error of 4.89 W/m²K across all heat exchangers. The resultant time-series ACC model achieved an average error of 3.14 kPa, which translates into a model accuracy of 82%.
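For readers unfamiliar with mixture density networks, the minimal PyTorch sketch below shows the core idea used in both case studies: predicting a Gaussian-mixture distribution over a scalar target and training on the negative log-likelihood. Layer sizes, the number of mixture components and the dummy features are assumptions for illustration only.

```python
# Minimal mixture density network (MDN) sketch; the scalar target stands in for
# an effective heat transfer coefficient and all sizes are placeholders.
import torch
import torch.nn as nn

class MDN(nn.Module):
    def __init__(self, n_features, n_components=5, width=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_features, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        self.pi = nn.Linear(width, n_components)         # mixture weights (logits)
        self.mu = nn.Linear(width, n_components)         # component means
        self.log_sigma = nn.Linear(width, n_components)  # component std devs (log)

    def forward(self, x):
        h = self.trunk(x)
        return self.pi(h), self.mu(h), self.log_sigma(h)

def mdn_nll(pi_logits, mu, log_sigma, y):
    """Negative log-likelihood of y under the predicted Gaussian mixture."""
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = dist.log_prob(y.unsqueeze(-1))               # (batch, components)
    log_mix = torch.log_softmax(pi_logits, dim=-1) + log_prob
    return -torch.logsumexp(log_mix, dim=-1).mean()

# Usage with dummy data: 6 operating-condition features -> target distribution.
model = MDN(n_features=6)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(128, 6), torch.randn(128)
loss = mdn_nll(*model(x), y)
loss.backward()
opt.step()
```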
- Item (Open Access): Development of a process modelling methodology and condition monitoring platform for air-cooled condensers (2021). Haffejee, Rashid Ahmed; Laubscher, Ryno.

Air-cooled condensers (ACCs) are a type of dry-cooling technology that has seen an increase in implementation globally, particularly in the power generation industry, due to its low water consumption. Unfortunately, ACC performance is susceptible to changing ambient conditions, such as dry bulb temperatures, wind direction, and wind speeds. This can result in performance reduction under adverse ambient conditions, which leads to increased turbine backpressures and, in turn, a decrease in generated electricity. Therefore, this creates a demand to monitor and predict ACC performance under changing ambient conditions. This study focuses on modelling a utility-scale ACC system at steady-state conditions applying a 1-D network modelling approach and using a component-level discretization approach. This approach allowed for each cell to be modelled individually, accounting for steam duct supply behaviour, and for off-design conditions to be investigated. The developed methodology was based on existing empirical correlations for condenser cells and adapted to model double-row dephlegmators. A utility-scale 64-cell ACC system based in South Africa was selected for this study. The thermofluid network model was validated using site data with agreement in results within 1%; however, due to a lack of site data, the model was not validated for off-design conditions. The thermofluid network model was also compared to the existing lumped approach and differences were observed due to the steam ducting distribution. The effect of increasing the ambient air temperature from 25 °C to 35 °C was investigated, with a heat rejection rate decrease of 10.9 MW and a backpressure increase of 7.79 kPa across the temperature range. The condensers' heat rejection rate decreased with higher air temperatures, while the dephlegmators' heat rejection rate increased due to the increased outlet vapour pressure and flow rates from the condensers. Off-design conditions were simulated, including hot air recirculation and wind effects. For wind effects, the developed model predicted a decrease in heat rejection rate of 1.7 MW for higher wind speeds, while the lumped approach predicted an increase of 4.9 MW. For practicality, a data-driven surrogate model was developed through machine learning techniques using data generated by the thermofluid network model. The surrogate model predicted system-level ACC performance indicators such as turbine backpressure and total heat rejection rate. Multilayer perceptron neural networks were developed in the form of a regression network and a binary classifier network. For the test sets, the regression network had an average relative error of 0.3%, while the binary classifier had a 99.85% classification accuracy. The surrogate model was validated against site data over a 3-week operating period, with 93.5% of backpressure predictions within 6% of site data backpressures. The surrogate model was deployed through a web-application prototype which included a forecasting tool to predict ACC performance based on a weather forecast.
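As an illustration of the component-level discretization idea (not the thesis model itself), the sketch below treats each condenser cell as a condensing crossflow heat exchanger solved with an effectiveness-NTU relation and sums the cell duties over a 64-cell ACC. The UA value, air mass flow and temperatures are placeholder assumptions.

```python
# Illustrative per-cell effectiveness-NTU sketch for an air-cooled condenser;
# all geometry and coefficient values below are invented placeholders.
import math

def cell_heat_rejection(t_steam, t_air_in, m_air, cp_air=1007.0, UA=250e3):
    """Heat rejected by one condenser cell [W].

    For a condensing (isothermal) hot side, effectiveness = 1 - exp(-NTU),
    with NTU = UA / (m_air * cp_air).
    """
    ntu = UA / (m_air * cp_air)
    eff = 1.0 - math.exp(-ntu)
    return eff * m_air * cp_air * (t_steam - t_air_in)

# 64-cell ACC at a uniform ambient temperature; per-cell air flow is a placeholder.
t_steam, t_amb, m_air_cell = 60.0, 25.0, 600.0   # degC, degC, kg/s
q_total = sum(cell_heat_rejection(t_steam, t_amb, m_air_cell) for _ in range(64))
print(f"Total heat rejection: {q_total / 1e6:.1f} MW")
```

In the full network model each cell would see a different air inlet state and steam-side pressure (via the steam duct distribution), which is what allows hot air recirculation and wind effects to be represented.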
- Item (Open Access): Heat Transfer Analysis Using Thermofluid Network Models for Industrial Biomass and Utility Scale Coal-Fired Boilers (2023-02-09). Rousseau, Pieter; Laubscher, Ryno; Rawlins, Brad Travis.

Integrated whole-boiler process models are useful in the design of biomass and coal-fired boilers, and they can also be used to analyse different scenarios such as low load operation and alternate fuel firing. Whereas CFD models are typically applied to analyse the detailed heat transfer phenomena in furnaces, analysis of the integrated whole-boiler performance requires one-dimensional thermofluid network models. These incorporate zero-dimensional furnace models combined with the solution of the fundamental mass, energy, and momentum balance equations for the different heat exchangers and fluid streams. This approach is not new, and there is a large amount of information available in textbooks and technical papers. However, the information is fragmented and incomplete and therefore difficult to follow and apply. The aim of this review paper is therefore to: (i) provide a review of recent literature to show how the different approaches to boiler modelling have been applied; (ii) provide a review and clear description of the thermofluid network modelling methodology, including the simplifying assumptions and their implications; and (iii) demonstrate the methodology by applying it to two case study boilers with different geometries, firing systems and fuels at various loads, and comparing the results to site measurements, which highlights important aspects of the methodology. The model results compare well with values obtained from site measurements and detailed CFD models for full load and part load operation. The results show the importance of utilising the high particle load model for the effective emissivity and absorptivity of the flue gas and particle suspension rather than the standard model, especially in the case of a high ash fuel. It also shows that the projected method provides better results than the direct method for the furnace water wall heat transfer.
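A minimal sketch of the balance-equation solution that underlies such thermofluid network models is shown below for a single gas-to-steam heat exchanger: the gas outlet temperature is found where the gas-side energy balance matches a UA-LMTD heat-transfer characteristic. The isothermal steam side, the UA value and all other numbers are simplifying assumptions for illustration, not taken from the paper.

```python
# One heat exchanger element of a 1-D thermofluid network, solved as a
# root-finding problem on the gas outlet temperature. Placeholder values only.
import math
from scipy.optimize import brentq

m_gas, cp_gas = 300.0, 1150.0        # flue gas flow [kg/s] and cp [J/kgK]
t_gas_in, t_steam = 900.0, 450.0     # [degC]; steam side taken as isothermal here
UA = 400e3                           # overall conductance [W/K]

def lmtd(t_hot_in, t_hot_out, t_cold):
    d1, d2 = t_hot_in - t_cold, t_hot_out - t_cold
    if abs(d1 - d2) < 1e-9:
        return d1
    return (d1 - d2) / math.log(d1 / d2)

def residual(t_gas_out):
    q_gas = m_gas * cp_gas * (t_gas_in - t_gas_out)     # gas-side energy balance
    q_ht = UA * lmtd(t_gas_in, t_gas_out, t_steam)      # heat-transfer characteristic
    return q_gas - q_ht

t_gas_out = brentq(residual, t_steam + 1.0, t_gas_in - 1e-3)
q = m_gas * cp_gas * (t_gas_in - t_gas_out) / 1e6
print(f"Gas outlet: {t_gas_out:.1f} degC, duty: {q:.1f} MW")
```

An integrated whole-boiler model chains many such elements (plus a zero-dimensional furnace model) and solves them simultaneously with the mass and momentum balances of the connecting streams.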
- Item (Open Access): Integrated network-based thermofluid model of a once-through boiler at full- and part-load (2022). Feng, Kai-Yu; Rousseau, Pieter; Laubscher, Ryno.

The increased penetration of renewable energy sources in South Africa requires greater operational flexibility of existing coal-fired power plants (CFPPs). Operational flexibility implies that power plants need to operate intermittently or at low load for extended periods. Existing CFPPs are designed to operate at a steady baseload. Operating at these off-design conditions increases the risk of damaging the boiler's thick-walled components, leading to reduced life expectancy and/or failure. Given that extensive experimental investigations on operating plants are impractical due to the risks, costs and complexity involved, there is a need for an integrated boiler model that has the necessary detail to study off-design and low load operations of coal-fired power plants. For that reason, a 1D quasi-steady-state thermofluid network model of a tower type once-through boiler was developed using the Flownex simulation environment. The furnace model assumes complete, infinitely fast combustion with a specified value of unburned carbon and excess air. The radiation heat transfer in the furnace is modelled using the projected area approach (Gurvich/Blokh model) together with a high ash loading model. The gas-to-steam tube bank heat exchangers are discretised pass-by-pass, and the complex heat transfer phenomena in the heat exchangers and membrane water walls are represented by equivalent thermal networks. The model results for the as-designed cases show that the ash deposition resistances suggested in the literature are not applicable to the case study boiler. The proposed model calibration methodology was therefore applied at full-load operation (100%), and the results show good accuracy compared to real-plant data. The average error of the predicted heat exchanger heat loads is 2.0% and the maximum error is 5.2%. The calibrated model was then validated by applying it to two part-load operational states in dry mode operation, as well as a wet mode (low-load) operational state. For the 81% load case, the average error in the heat exchanger duty is 2.0% and the maximum is 4.9%, while for the 63% load case, the average error is 3.6% and the maximum is 9.7%. For the low-load wet mode case at 35% load, the average error is 10.4% and the maximum is 18.1%. The cumulative heat transfer results for all the load cases correspond closely to the measured data, with the maximum error being 0.83% for the low load case. These results suggest that the calibrated model can capture the heat distribution in the boiler with sufficient accuracy to allow suitable ash deposition resistances to be obtained from the calibration process. Furthermore, the metal temperatures predicted by the model are also shown to be sufficiently accurate, which means that it can be used to identify the heat exchanger tube passes or membrane water walls that may be at risk during operation.
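To illustrate what an equivalent thermal network for a gas-to-steam tube bank pass looks like in practice, the sketch below assumes a simple series-resistance form (gas film, ash deposit, tube wall, steam film) with invented resistance values rather than the calibrated ones from the thesis.

```python
# Illustrative series-resistance thermal network for one tube bank pass.
# All resistance values are placeholders, referred to the outside tube area.
t_gas, t_steam = 850.0, 480.0   # bulk gas and steam temperatures [degC]

resistances = {                  # [m^2 K / W]
    "gas film (convection + radiation)": 0.012,
    "ash deposit": 0.005,        # the parameter adjusted during model calibration
    "tube wall": 0.0004,
    "steam film": 0.0008,
}

r_total = sum(resistances.values())
q_flux = (t_gas - t_steam) / r_total            # heat flux [W/m^2]
print(f"Heat flux: {q_flux / 1e3:.1f} kW/m^2")

t = t_gas
for name, r in resistances.items():             # temperature drop across each layer
    dt = q_flux * r
    print(f"  {name}: dT = {dt:.0f} K")
    t -= dt
print(f"Check: recovered bulk steam temperature = {t:.0f} degC")
```

The intermediate temperatures in such a network (for example the tube wall temperature) are what allows the model to flag tube passes or water walls at risk during off-design operation.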
- Item (Open Access): Power Station Thermal Efficiency Performance Method Evaluation (2021). Heerlall, Heeran; Laubscher, Ryno.

Due to global warming, there is an escalated need to move towards cleaner energy solutions. Almost 85% of South Africa's electric energy is provided via Eskom's conventional coal-fired power plants. Globally, coal-fired power plants have a significant share in the power generation energy mix and this will be the case over the next 20 years. A study, aligned with the aspiration of improving the thermal efficiency of coal-fired power plants, was initiated with a focus on the accuracy of energy accounting. The goal is that if efficiency losses can be quantified accurately, effort can be prioritized to resolve the inefficiencies. Eskom's thermal accounting tool, the STEP model, was reviewed against relevant industry standards (BS 2885, BS EN 12952-15, IEC 60953-0/Ed1) to evaluate the model uncertainty for losses computed via standard correlations. Relatively large deviations were noted for the boiler radiation, turbine deterioration and make-up water losses. A specific review of OEM (Original Equipment Manufacturer) heat rate correction curves was carried out for the determination of turbine plant losses, where these curves were suspected to have high uncertainty, especially when extrapolated to points of significant deviation from design values. For an evaluated case study, the final feed water correction curves were adjusted based on an analysis done with the use of power plant thermodynamic modelling tools, namely EtaPro Virtual Plant® and Steam Pro®. A Python®-based computer model was developed to separately propagate systematic (instrument) and combined uncertainties (including temporal) through the STEP model using a numerical technique called sequential perturbation. The study revealed that the uncertainties associated with thermal efficiency, heat rate and individual thermal losses are very specific to the state of operations, as demonstrated by individual unit performance and the power plant's specific design baseline performance curves. Whilst the uncertainties cannot be generalized, a methodology has been developed to evaluate any case. A 3600 MWe wet-cooled power plant (6 x 600 MWe units) situated in Mpumalanga was selected to study the impact of uncertainties on the STEP model outputs. The results from the case study yielded that the thermal efficiency computed by the "direct method" had an instrument uncertainty of 0.756% absolute (abs), versus 0.201% abs for the indirect method, when computed at the station level for a 95% confidence interval. For an individual unit, the indirect efficiency uncertainty was as high as 0.581% abs. A study was conducted to find an optimal resolution (segment size) for the thermal performance metrics to be computed, by discretizing the monthly data into smaller segment sizes and studying the movement of the mean STEP model outputs and the temporal uncertainty. It was found that the 3-hour segment size is optimal, as it gives the maximum movement of the mean of the performance metrics without resulting in large temporal uncertainties. When considering the combined uncertainty (temporal and instrument uncertainty) at a data resolution of 1 minute and a segment size of 3 hours, the "direct method" had a combined thermal efficiency uncertainty of 0.768% abs versus 0.218% abs for the indirect method, when computed at the station level for a 95% confidence interval.
For the above-stated uncertainties, this means that the temporal uncertainty contribution to the combined uncertainty is 2.915% for the "direct method" and 14.919% for the "indirect method". The term "STEP Factor" can be used synonymously with effectiveness (the percentage of the actual efficiency relative to the target efficiency). For the case evaluated, the mean "indirect method" STEP Factor at the station level moved from 86.698% (using monthly aggregated process data) to 86.135% (when discretized to 3-hour segments), which is roughly a 0.189% abs change in the station's thermal efficiency. This appears fairly small in terms of the station's overall efficiency, but it had a significant impact on the evaluation of the STEP Factor losses and the cost impact of the change in plant efficiency; e.g. the final feed water STEP Factor loss at a unit level moved from 2.6% abs to 3.5% abs, which is significant for diagnostic and business case motivations. The discrepancy between the direct STEP Factor and the indirect STEP Factor was subsequently investigated, as the uncertainty bands did not overlap as expected. The re-evaluation of the baseline component performance data resulted in the final feed water and condenser back-pressure heat rate correction curves being adjusted. The exercise revealed that there could potentially be significant baseline performance data uncertainty. The corrected indirect STEP Factor instrument uncertainty was found to be 0.468% abs, which translates to 0.164% abs overall efficiency. The combined uncertainty was corrected to 0.485% abs at a 3-hour segment size, which translates to 0.171% abs overall efficiency. It has been deduced that the figures stated above are case-specific. However, the models have been developed to analyse any coal-fired power plant at various operating conditions. Furthermore, the uncertainty propagation module can be used to propagate uncertainty through any other discontinuous function or computer model. Various recommendations have been made to improve the model uncertainty of STEP, data acquisition, systematic uncertainty, temporal uncertainty and baseline data uncertainty.
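Sequential perturbation itself is straightforward to sketch: each input is perturbed by its uncertainty, one at a time, and the resulting output deviations are combined by root-sum-square. The example below uses a placeholder "direct method" efficiency function and illustrative input values and uncertainties, not the actual STEP model or plant data.

```python
# Minimal sequential-perturbation uncertainty propagation sketch for a
# black-box performance model. All numbers are illustrative placeholders.
import math

def direct_efficiency(inputs):
    """Placeholder 'direct method' efficiency: energy sent out / fuel energy in."""
    return inputs["energy_sent_out"] / (inputs["coal_mass"] * inputs["coal_cv"])

def sequential_perturbation(model, inputs, uncertainties):
    """Perturb each input by its uncertainty and combine the resulting
    output deviations by root-sum-square."""
    y0 = model(inputs)
    contributions = {}
    for name, u in uncertainties.items():
        perturbed = dict(inputs)
        perturbed[name] = inputs[name] + u
        contributions[name] = model(perturbed) - y0
    u_total = math.sqrt(sum(d**2 for d in contributions.values()))
    return y0, u_total, contributions

inputs = {"energy_sent_out": 1.55e15, "coal_mass": 2.1e8, "coal_cv": 2.0e7}  # J, kg, J/kg
uncert = {"energy_sent_out": 7.8e12, "coal_mass": 2.1e6, "coal_cv": 1.0e5}   # 95% CI
eff, u_eff, parts = sequential_perturbation(direct_efficiency, inputs, uncert)
print(f"Efficiency = {eff:.4f} +/- {u_eff:.4f} (95% CI)")
```

Because the model is treated as a black box, the same routine can be wrapped around any discontinuous function or computer model, which is the property the thesis exploits when propagating uncertainty through the STEP model.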