Browsing by Author "Combrink, James"
Now showing 1 - 2 of 2
- Item (Open Access): A Sensitivity Analysis of Model Structure in Stochastic Differential Equation and Agent-Based Epidemiological Models (2014)
  Author: Combrink, James
  Abstract: The dynamics of infectious diseases have been modelled by several widely recognised procedures, the two most common being differential equation models (DEM) and agent-based models (ABM). Both have been used through the late 20th and early 21st centuries to understand the prevalence and behaviour of infectious diseases, and subsequently to forecast the potential impact of a treatment. For a life-threatening disease such as malaria, incorrect predictions are dangerous: an epidemic may result from a misinformed judgement about the required treatment programme. DEM and ABM have been documented to produce divergent results (and conclusions) in several cases, even when fitted to identical data sets [Figueredo et al., 2014]. Under the correct model one would expect a fair representation of an infectious disease and hence an insightful conclusion, so it is problematic for the choice of treatment tactics to depend on the choice of model structure. This honours thesis identifies the need for caution in choosing a modelling methodology and performs a sensitivity analysis of the incidence and prevalence of an infectious disease under varying levels of treatment. The procedure is applicable to any infectious disease; the thesis provides a case study on malaria modelling, with a later extension to Ebola. Beginning with a simple Susceptible-Infected-Recovered-Susceptible (SIRS) model, immediately obvious differences are examined to indicate the point at which the two model types lose direct comparability. The SIRS models are then built up to include varying levels of exposure, treatment and movement dynamics, and the nature of the differences in the conclusions drawn from the separate models is examined.
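  The abstract's central comparison is between a deterministic DEM and a stochastic ABM fitted to the same disease dynamics. Below is a minimal sketch (not taken from the thesis; all parameter values are illustrative assumptions) of an SIRS model in both forms with matched rates, the kind of side-by-side setup in which the structural divergence the thesis studies first becomes visible.

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative SIRS rates (assumed, not from the thesis):
# infection, recovery, and loss-of-immunity rates.
BETA, GAMMA, XI = 0.30, 0.10, 0.05
N = 1000  # population size

def sirs_dem(y, t):
    """Deterministic SIRS: mass-action mixing, as a system of ODEs."""
    S, I, R = y
    dS = -BETA * S * I / N + XI * R
    dI = BETA * S * I / N - GAMMA * I
    dR = GAMMA * I - XI * R
    return [dS, dI, dR]

def sirs_abm(steps, dt=1.0, seed=0):
    """Stochastic agent-based SIRS: each agent changes state with
    per-step probabilities derived from the same rates as the DEM."""
    rng = np.random.default_rng(seed)
    state = np.zeros(N, dtype=int)  # 0 = S, 1 = I, 2 = R
    state[:10] = 1                  # same initial infecteds as the DEM run
    infected_series = []
    for _ in range(steps):
        I = np.count_nonzero(state == 1)
        p_inf = 1 - np.exp(-BETA * I / N * dt)  # S -> I
        p_rec = 1 - np.exp(-GAMMA * dt)         # I -> R
        p_sus = 1 - np.exp(-XI * dt)            # R -> S
        u = rng.random(N)
        nxt = state.copy()
        nxt[(state == 0) & (u < p_inf)] = 1
        nxt[(state == 1) & (u < p_rec)] = 2
        nxt[(state == 2) & (u < p_sus)] = 0
        state = nxt
        infected_series.append(I)
    return np.array(infected_series)

t = np.arange(200.0)
dem = odeint(sirs_dem, [N - 10, 10, 0], t)  # deterministic trajectory
abm = np.stack([sirs_abm(200, seed=s) for s in range(50)])  # 50 stochastic runs
print("DEM peak infecteds:", round(dem[:, 1].max(), 1))
print("ABM mean peak infecteds:", round(abm.max(axis=1).mean(), 1))
```

  Even with identical rates, the ABM's peak timing and size vary across runs and can drift systematically from the DEM trajectory at small population sizes; that gap is the loss of direct comparability the abstract refers to.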
- Item (Open Access): Robust portfolio construction: using resampled efficiency in combination with covariance shrinkage (2017)
  Authors: Combrink, James; Bradfield, David
  Abstract: The thesis considers the general area of robust portfolio construction. In particular, it considers two techniques in this area that aim to improve portfolio construction, and consequently portfolio performance. The first technique addresses estimation error in the sample covariance matrix (one of the portfolio optimisation inputs); shrinkage techniques applied to the sample covariance matrix are considered and their merits assessed. The second technique focusses on the portfolio construction/optimisation process itself. Here the thesis adopts the 'resampled efficiency' proposal of Michaud (1989), which uses Monte Carlo simulation from the sampled distribution to generate a range of resampled efficient frontiers. Thereafter the thesis assesses the merits of combining these two techniques in the portfolio construction process. Portfolios are constructed using a quadratic programming algorithm requiring two inputs: (i) expected returns; and (ii) cross-sectional behaviour and individual risk (the covariance matrix). The output is a set of 'optimal' investment weights, one for each share whose returns were fed into the algorithm. The thesis looks at identifying and removing avoidable risk through a statistical robustification of the algorithm, attempting to improve on the 'optimal' weights it provides. Performance is assessed by comparing out-of-period results against standard optimisation results, which are highly sensitive to sampling error and prone to extreme weightings. The methodology applies various shrinkage techniques to the historical covariance matrix, and then takes a resampled portfolio optimisation approach using the shrunken matrix. Monte Carlo simulation is used to generate sets of statistically equivalent portfolios and find optimal weightings for each; aggregating these reduces sensitivity to anomalies in the historical time series. The trade-off between the sampling error and specification error of the models is also considered.
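  A minimal sketch of the combination the abstract describes, under stated assumptions: a Ledoit-Wolf shrinkage estimate of the covariance matrix (via scikit-learn) feeds a resampling loop that re-optimises on simulated return histories and averages the resulting weights, in the spirit of Michaud's resampled efficiency. For brevity the optimiser is the closed-form minimum-variance portfolio rather than the full quadratic program with an expected-return input used in the thesis, and the return data are synthetic.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(42)
T, K = 120, 8  # 120 periods of returns for 8 shares (synthetic data)
returns = rng.multivariate_normal(np.full(K, 0.01), 0.001 * np.eye(K), size=T)

def min_var_weights(cov):
    """Closed-form minimum-variance weights: w = C^{-1}1 / (1'C^{-1}1).
    Stands in for the thesis's quadratic program, which also takes
    expected returns as an input."""
    inv_one = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return inv_one / inv_one.sum()

# 1) Shrink the sample covariance: Ledoit-Wolf pulls it toward a
#    scaled identity, damping estimation error in the off-diagonals.
cov_shrunk = LedoitWolf().fit(returns).covariance_
mu_hat = returns.mean(axis=0)

# 2) Resampled efficiency: simulate statistically equivalent return
#    histories from the fitted distribution, optimise each one, then
#    aggregate the weights by averaging.
n_resamples = 500
weight_draws = np.empty((n_resamples, K))
for i in range(n_resamples):
    sim = rng.multivariate_normal(mu_hat, cov_shrunk, size=T)
    weight_draws[i] = min_var_weights(LedoitWolf().fit(sim).covariance_)
w_resampled = weight_draws.mean(axis=0)

w_classical = min_var_weights(np.cov(returns, rowvar=False))
print("classical weights:", np.round(w_classical, 3))
print("resampled weights:", np.round(w_resampled, 3))
```

  Averaging weights across resamples is what tempers the extreme allocations the abstract attributes to standard optimisation; shrinking the covariance matrix before (and within) each resample additionally reduces the estimation error propagated into every optimisation.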