### Browsing by Author "Mavuso, Melusi"

Now showing 1 - 3 of 3

- **Estimating stochastic volatility models with Student-t distributed errors** (2020). Rama, Vishal; Kulikova, Maria; Mavuso, Melusi. [Open Access]
  This dissertation aims to extend the idea of Bollerslev (1987), who estimated ARCH models with Student-t distributed errors, to the estimation of stochastic volatility (SV) models with Student-t distributed errors. It is unclear whether Gaussian errors sufficiently account for the leptokurtosis observed in financial time series, hence the extension to Student-t distributed errors for these models. The quasi-maximum likelihood estimation approach introduced by Harvey (1989) and the conventional Kalman filter technique are described so that SV models with Gaussian and with Student-t distributed errors can be estimated. Estimation of GARCH(1,1) models by maximum likelihood is also described. The empirical study estimated four models using data on four share return series and one index return series: Anglo American, BHP, FirstRand, Standard Bank Group and the JSE Top 40 index. The GARCH and SV models with Student-t distributed errors both performed best on the series examined in this dissertation, with the Akaike information criterion (AIC) used as the metric for model comparison.
- **Neural network LIBOR market model for pricing and hedging interest rate derivatives** (2022). Robbertze, Yuri; Mavuso, Melusi. [Open Access]
  In this dissertation, we introduce a new formulation of variational auto-encoders in order to generate the data we require. Our variational auto-encoder is based on a data-generation principle from elementary probability: finding the inverse cumulative distribution function and applying it to uniform inputs to generate samples from the distribution. As with all auto-encoders, the goal is to reduce dimensionality in the kernel and use this compressed representation to describe the data features during generation. Our formulation uses a kernel that transforms the encoder outputs into multi-dimensional uniformly distributed variables, which in turn learn the cumulative distribution function (in the case of a one-dimensional latent space) or the mapping of variables to copula input uniforms (in the case of a multi-dimensional latent space). The decoder is then trained to learn the inverse of the encoder, and this is used to generate data.
- **Quantifying the Model Risk Inherent in the Calibration and Recalibration of Option Pricing Models** (2021-01-04). Feng, Yu; Rudd, Ralph; Baker, Christopher; Mashalaba, Qaphela; Mavuso, Melusi; Schlögl, Erik. [Open Access]
  We focus on two particular aspects of model risk: the inability of a chosen model to fit observed market prices at a given point in time (calibration error) and the model risk due to the recalibration of model parameters (in contradiction to the model assumptions). In this context, we use relative entropy as a pre-metric in order to quantify these two sources of model risk in a common framework, and consider the trade-offs between them when choosing a model and the frequency with which to recalibrate to the market. We illustrate this approach by applying it to the seminal Black/Scholes model and its extension to stochastic volatility, using option data for Apple (AAPL) and Google (GOOG). We find that recalibrating a model more frequently simply shifts model risk from one type to another, without any substantial reduction of aggregate model risk. Furthermore, moving to a more complicated stochastic model is seen to be counterproductive if one requires a high degree of robustness, for example, as quantified by a 99% quantile of aggregate model risk.
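The maximum likelihood estimation of a GARCH(1,1) model with Student-t errors, as described in the first abstract above, can be sketched as follows. This is a minimal illustration rather than the dissertation's code: the variance initialisation, the Nelder-Mead optimiser, the admissibility penalty, and the simulated toy return series are all assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def garch_t_neg_loglik(params, r):
    """Negative log-likelihood of GARCH(1,1) with variance-standardised
    Student-t errors; r is the return series."""
    omega, alpha, beta, nu = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1 or nu <= 2:
        return np.inf  # penalise parameters outside the admissible region
    sigma2 = np.empty(len(r))
    sigma2[0] = r.var()  # one common choice of initial variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    # Log-density of the standardised Student-t, scaled by sigma_t
    ll = (gammaln((nu + 1) / 2) - gammaln(nu / 2)
          - 0.5 * np.log(np.pi * (nu - 2)) - 0.5 * np.log(sigma2)
          - (nu + 1) / 2 * np.log1p(r ** 2 / (sigma2 * (nu - 2))))
    return -ll.sum()

rng = np.random.default_rng(0)
r = 0.01 * rng.standard_t(df=6, size=2000)  # toy heavy-tailed "returns"
res = minimize(garch_t_neg_loglik, x0=[1e-5, 0.05, 0.90, 8.0],
               args=(r,), method="Nelder-Mead")
aic = 2 * len(res.x) + 2 * res.fun  # AIC = 2k - 2 log L, lower is better
```

Competing models (e.g. Gaussian versus Student-t errors) can then be ranked by their AIC values, as in the dissertation's empirical comparison.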
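The data-generation principle cited in the second abstract, drawing samples from a distribution by pushing uniform inputs through an inverse cumulative distribution function, can be shown in isolation. This sketch uses an exponential distribution purely as a stand-in (the choice of distribution and rate are assumptions here, not the dissertation's setup); the auto-encoder's decoder plays the role of the learned inverse CDF.

```python
import numpy as np

# Inverse-transform sampling: if U ~ Uniform(0, 1) and F is a continuous
# CDF, then F^{-1}(U) is distributed according to F.  Here F is the
# Exponential(lam) CDF, whose inverse is -log(1 - u) / lam.
rng = np.random.default_rng(42)
lam = 2.0
u = rng.uniform(size=100_000)          # the "latent" uniform inputs
x = -np.log1p(-u) / lam                # samples from Exponential(lam)
print(abs(x.mean() - 1 / lam) < 0.01)  # sample mean close to 1/lam -> True
```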
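The third abstract quantifies model risk via relative entropy between model distributions. A minimal sketch of that quantity in a Black/Scholes setting is below; the specific spot, rate, maturity, and the two volatilities (standing in for two calibrations of the same model) are assumptions of this illustration, not figures from the paper.

```python
import numpy as np

def kl_lognormal(mu1, s1, mu2, s2):
    """Relative entropy KL(P || Q) between lognormals LN(mu1, s1^2) and
    LN(mu2, s2^2).  Closed form: it equals the KL divergence of the
    corresponding normals, since x -> log x is a bijection."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

# Two Black/Scholes risk-neutral terminal densities for the same spot and
# maturity, differing only in the calibrated volatility (20% vs 25%).
S0, r, T = 100.0, 0.02, 1.0
sig_a, sig_b = 0.20, 0.25
mu_a = np.log(S0) + (r - 0.5 * sig_a**2) * T
mu_b = np.log(S0) + (r - 0.5 * sig_b**2) * T
d = kl_lognormal(mu_a, sig_a * np.sqrt(T), mu_b, sig_b * np.sqrt(T))
```

Relative entropy is non-negative and zero only when the two densities coincide, which is what makes it usable as a pre-metric for comparing calibration error against recalibration risk in a common unit.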