### Browsing by Subject "Mathematical Finance"

Now showing 1 - 20 of 127


- A Comparison Between Break-Even Volatility and Deep Hedging for Option Pricing (2022, Open Access) Claassen, Quintin; Mahomed, Obeid. The Black-Scholes (1973) closed-form option pricing approach is underpinned by numerous well-known assumptions (see Taleb (1997, pp. 110-111) or Wilmott (1998, ch. 19)); much attention has been paid in particular to the assumption of constant volatility, which does not hold in practice (Yalincak, 2012). The industry standard is to use various volatility estimation and parameterisation techniques when pricing in order to more closely recover the market-implied volatility skew. One such technique is Break-Even Volatility (BEV): retrospectively solving for the volatility that sets the hedging profit and loss at option maturity to zero (conditional on a single stock price path, or a set of paths). However, BEV still means pricing within existing model frameworks (and inheriting the assumptions that come with them). The new paradigm of Deep Hedging (DH), as explored by Buehler et al. (2019), i.e. using deep neural networks to solve for optimal option prices (and the respective parameters needed to hedge these options at discrete time steps), has allowed the market-maker to go 'model-free', in the sense of being able to price without any prior assumptions about stock price dynamics (which are needed in the traditional closed-form pricing approach). Using simulated stock price data under various model dynamics, we first investigate whether DH is more successful than BEV in recovering the model-implied volatility surface. We find both to perform reasonably well for time-homogeneous models, but DH struggles to recover correct results for time-inhomogeneous models. Thereafter, we analyse the impact of incorporating risk aversion for both approaches, for time-homogeneous models only. We find both methods to produce pricing results in line with varying risk-aversion levels. We note the simple architecture of our DH neural network as a potential point of departure for more complex neural networks.
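The break-even volatility idea described in the abstract above can be sketched in a few lines: delta-hedge a sold call along a simulated path at a candidate volatility, then root-find the volatility that zeroes the terminal P&L. This is a minimal single-path Black-Scholes illustration with hypothetical parameters, not the dissertation's implementation.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_price(S, K, r, sigma, tau):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

def bs_delta(S, K, r, sigma, tau):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    return norm.cdf(d1)

def hedge_pnl(path, dt, K, r, sigma):
    """Terminal P&L of selling a call at the sigma-implied premium and
    delta-hedging it along the given path with that same sigma."""
    T = dt * (len(path) - 1)
    cash = bs_price(path[0], K, r, sigma, T)      # premium received
    delta = bs_delta(path[0], K, r, sigma, T)
    cash -= delta * path[0]                        # buy initial hedge
    for i in range(1, len(path) - 1):
        tau = T - i * dt
        cash *= np.exp(r * dt)                     # accrue cash account
        new_delta = bs_delta(path[i], K, r, sigma, tau)
        cash -= (new_delta - delta) * path[i]      # rebalance
        delta = new_delta
    cash *= np.exp(r * dt)
    cash += delta * path[-1] - max(path[-1] - K, 0.0)  # unwind and settle
    return cash

rng = np.random.default_rng(0)
S0, K, r, true_sigma, T, n = 100.0, 100.0, 0.05, 0.2, 1.0, 252
dt = T / n
z = rng.standard_normal(n)
steps = (r - 0.5 * true_sigma**2) * dt + true_sigma * np.sqrt(dt) * z
path = np.concatenate([[S0], S0 * np.exp(np.cumsum(steps))])

# Break-even volatility: the sigma that sets the hedge P&L to zero on this path.
bev = brentq(lambda s: hedge_pnl(path, dt, K, r, s), 0.01, 1.0)
```

On a single path the BEV lands near the path's realized volatility; averaging the P&L over many paths before root-finding, as in the conditional formulation above, smooths out the discrete-hedging noise.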
- A review of current Rough Volatility Methods (2021, Open Access) Beelders, Noah; Soane, Andrew. Recent literature has provided empirical evidence that the behaviour of volatility in financial markets is rough. Given the complicated nature of rough dynamics, a review of these methods is presented with the intention of ensuring tractability for those wishing to implement these techniques. Models of rough dynamics are built upon the fractional Brownian motion and its associated power-law kernel. One such model, the Rough Heston, is an extension of the classical Heston model and is the main model of focus for this dissertation. To implement the Rough Heston, fractional Riccati ordinary differential equations (ODEs) must be solved, and this requires numerical methods; three such methods, in order of increasing complexity, are considered. Using the fractional Adams numerical method, the Rough Heston model can be made to produce realistic volatility smiles comparable to those of market data. Lastly, a quick and easy approximation of the Rough Heston model, called the Poor Man's Heston, is discussed and implemented.
- A Review of Multilevel Monte Carlo Methods (2020, Open Access) Jain, Rohin; McWalter, Thomas. The Monte Carlo (MC) method is a common numerical technique used to approximate an expectation that has no analytical solution. For certain problems, MC can be inefficient, and many techniques exist to improve its efficiency. The Multilevel Monte Carlo (ML) technique developed by Giles (2008) is one such method. It relies on approximating the payoff at different levels of accuracy and using a telescoping sum of these approximations to compute the ML estimator. This dissertation summarises the ML technique and its implementation. First, the framework is applied to a European call option; results show that the method is up to 13 times faster than crude MC. Then an American put option is priced within the ML framework using two pricing methods. The Least Squares Monte Carlo (LSM) method estimates an optimal exercise strategy at finitely many instances, and consequently a lower bound for the option price. The dual method finds an optimal martingale, and consequently an upper bound for the price. Although the pricing results are quite close to those of the corresponding crude MC methods, the efficiency results are mixed: the LSM method performs poorly within an ML framework, while the dual approach is enhanced.
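The telescoping-sum construction described in the abstract above can be sketched for a European call under geometric Brownian motion. This is an illustrative toy with hypothetical parameters and a fixed path allocation, not Giles' adaptive algorithm or the dissertation's code: each level simulates fine and coarse Euler paths driven by the same Brownian increments, and the level means are summed.

```python
import numpy as np

def mlmc_level(l, n_paths, S0, K, r, sigma, T, rng, M=4):
    """Mean of P_l - P_{l-1} on level l (with P_{-1} := 0), using the same
    Brownian increments for the fine and coarse Euler discretisations."""
    nf = M**l                                  # fine steps on this level
    dtf = T / nf
    Sf = np.full(n_paths, S0)
    Sc = np.full(n_paths, S0)
    dW = np.sqrt(dtf) * rng.standard_normal((nf, n_paths))
    for i in range(nf):
        Sf += r * Sf * dtf + sigma * Sf * dW[i]
    if l > 0:
        # Coarse path reuses the fine increments, aggregated M at a time.
        dWc = dW.reshape(nf // M, M, n_paths).sum(axis=1)
        dtc = T / (nf // M)
        for i in range(nf // M):
            Sc += r * Sc * dtc + sigma * Sc * dWc[i]
    Pf = np.exp(-r * T) * np.maximum(Sf - K, 0.0)
    Pc = np.exp(-r * T) * np.maximum(Sc - K, 0.0) if l > 0 else 0.0
    return np.mean(Pf - Pc)

rng = np.random.default_rng(1)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
# Far fewer paths are needed on the finer (more expensive) levels.
paths_per_level = [200_000, 50_000, 12_500, 3_200]
estimate = sum(mlmc_level(l, n, S0, K, r, sigma, T, rng)
               for l, n in enumerate(paths_per_level))
```

The correction terms have small variance because the fine and coarse paths share driving noise, which is where the efficiency gain over crude MC comes from.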
- Accelerated Adjoint Algorithmic Differentiation with Applications in Finance (2017, Open Access) De Beer, Jarred; Ouwehand, Peter; Kuttel, Michelle Mary. Adjoint Differentiation (AD) can calculate Greeks efficiently and to machine precision while scaling in constant time with the number of input variables, which is attractive for calibration and hedging, where frequent calculations are required. Algorithmic adjoint differentiation tools automatically generate derivative code and provide interesting challenges in both Computer Science and Mathematics. In this dissertation we focus on a manual implementation, with particular emphasis on parallel processing using Graphics Processing Units (GPUs) to accelerate run times. Adjoint differentiation is applied to a Call on Max rainbow option with 3 underlying assets in a Monte Carlo environment. Assets are driven by the Heston stochastic volatility model and implemented using the Milstein discretisation scheme with truncation. The price is calculated along with Deltas and Vegas for each asset, for a total of 6 sensitivities. The application achieves favourable levels of parallelism on all three dimensions exposed by the GPU: Instruction Level Parallelism (ILP), Thread Level Parallelism (TLP), and Single Instruction Multiple Data (SIMD). We estimate that the forward pass of the Milstein discretisation contains an ILP of 3.57, which is within the typical range of 2 to 4. Monte Carlo simulations are embarrassingly parallel and are capable of achieving a high level of concurrency; however, in this context a single kernel running at low occupancy can perform better with a combination of shared memory, vectorised data structures and a high register count per thread. Run time on the Intel Xeon CPU with 501 760 paths and 360 time steps is 48.801 seconds; the GT950 Maxwell GPU completed in 0.115 seconds, achieving a 422× speedup and a throughput of 13 million paths per second. The K40 is capable of achieving better performance.
- Accounting for roll-over risk in the pricing of caps and floors (2022, Open Access) Vidima, Sizwe; Backwell, Alex. The peak of the global financial crisis forced practitioners to rethink the single-curve approach to pricing interest-rate derivatives. This was a result of a violation of spot-forward parity relationships, prompting markets to recognise the presence of a new type of risk and, subsequently, the need for a multi-curve pricing framework. The roll-over risk framework accounts for liquidity constraints and default risk, thereby providing a cogent explanation for the spot-forward parity violation that led to the need for multiple curves. The primary objective of this work is to price XIBOR-based caps and floors under a framework that accounts for roll-over risk. This reformulation of interest-rate derivative pricing is achieved using Fourier transform methods, with Monte Carlo simulation for comparison. We found the results obtained using the two approaches to be comparable even though the methods are different in nature; this agreement in prices is compelling evidence that the computations are correct.
- Adjoint Venture: Fast Greeks with Adjoint Algorithmic Differentiation (2017, Open Access) McPetrie, Christopher Lindsay; McWalter, Thomas. This dissertation discusses the adjoint approach to solving affine recursion problems (ARPs) in the context of computing sensitivities of financial instruments. It is shown how, by moving from an intuitive 'forward' approach to solving a recursion to an 'adjoint' approach, one can dramatically increase the computational efficiency of algorithms employed to compute sensitivities via the pathwise derivatives approach in a Monte Carlo setting. Examples are illustrated within the context of the Libor Market Model. Furthermore, these ideas are extended to the paradigm of Adjoint Algorithmic Differentiation, and it is illustrated how sophisticated techniques in this space can further improve the ease of use and efficiency of sensitivity calculations.
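The pathwise-derivatives estimator that underlies both the forward and adjoint approaches mentioned above can be illustrated in a few lines. This is a Black-Scholes toy case with hypothetical parameters, not the Libor Market Model setting of the dissertation: differentiate the simulated discounted payoff with respect to the input along each path, then average.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
S0, K, r, sigma, T, n_paths = 100.0, 110.0, 0.03, 0.25, 1.0, 400_000

# One-shot exact simulation of terminal stock prices under Black-Scholes.
Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# Pathwise derivative of the call payoff:
#   d payoff / d S0 = 1{S_T > K} * dS_T/dS0 = 1{S_T > K} * S_T / S0.
delta_mc = np.exp(-r * T) * np.mean((ST > K) * ST / S0)

# Analytic Black-Scholes delta for comparison.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
delta_bs = norm.cdf(d1)
```

The adjoint trick concerns how such per-path derivatives are accumulated through a long recursion (e.g. Libor rate evolution): reverse-order accumulation yields all sensitivities at roughly the cost of one extra pass, rather than one pass per input.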
- Alternative distributions in the Black-Litterman model of asset allocation (2011, Open Access) Mbofana, Stewart; Becker, Ronald. In this thesis we replace the normal distribution assumption in the calculation of the prior equilibrium returns used in the model with a more general distribution that captures the skewness and fat tails exhibited by stock data. We consider the α-stable distributions as an alternative to the normal distribution. Consequently, we also consider alternative measures of risk, Value at Risk and Conditional Value at Risk, rather than the variance used in the normal case.
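The two risk measures named in the abstract above can be sketched via historical simulation. As a hedge: this toy uses Student-t returns as a convenient heavy-tailed stand-in (the thesis itself works with α-stable distributions), and the scale and confidence level are illustrative.

```python
import numpy as np

def var_cvar(returns, alpha=0.95):
    """Historical Value at Risk and Conditional Value at Risk at level alpha,
    with losses expressed as positive numbers."""
    losses = np.sort(-np.asarray(returns))
    var = np.quantile(losses, alpha)            # loss threshold exceeded 5% of the time
    cvar = losses[losses >= var].mean()         # average loss beyond the VaR threshold
    return var, cvar

rng = np.random.default_rng(3)
# Heavy-tailed daily returns (Student-t, df=4) as a stand-in for stable returns.
returns = 0.01 * rng.standard_t(df=4, size=100_000)
var95, cvar95 = var_cvar(returns)
```

CVaR always sits at or beyond VaR, and the gap between the two widens as the tails fatten, which is why the thesis pairs the alternative distributions with these alternative risk measures.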
- An Application of Deep Hedging in Pricing and Hedging Caplets on the Prime Lending Rate (2022, Open Access) Patel, Keyur; Mahomed, Obeid. Derivatives in South Africa are traded via an exchange, such as the JSE's derivatives markets, or over-the-counter (OTC). This dissertation focuses on the pricing and hedging of caplets written on the South African prime lending rate. In a complete market, caplets can be continuously hedged with zero risk. However, in the particular case of caplets written on the prime lending rate, market completeness ceases to exist, because the prime lending rate is a benchmark for retail lending and is not, in general, tradeable. Since parametric models may not be specified and calibrated for such incomplete markets, the aim of this dissertation is to consider the deep hedging approach of Buehler et al. (2019) for pricing and hedging such a derivative. First, a model-dependent approach is taken to set a benchmark level of performance. This approach is derived using techniques outlined in West (2008), which rely heavily on interest rate pairs being cointegrated so that the market-standard Black (1976) model can be used. Thereafter, the deep hedging approach is considered, in which a neural network is set up and used to price and hedge the caplets. The deep hedging approach performs at least as well as the model-dependent approach. Furthermore, the deep hedging approach can also be used to recover a volatility skew, which is, in fact, needed as an input to the model-dependent approach. The approach has certain downsides: a rich set of historical data is required, and it is more time-consuming than the model-dependent approach. The deep hedging approach, in this specific implementation, also has the limitation that only one hedge instrument is used. When this limitation is also applied to the model-dependent approach, the deep hedging approach performs better in all cases. Therefore, deep hedging proves to be a sufficient alternative for pricing and hedging caplets on the prime lending rate in an incomplete market setting.
- An introduction to interest rate jumps at deterministic times (2022, Open Access) Bastick, Kirk; Backwell, Alex. The observation of jumps in empirical interest-rate data has prompted the inclusion of these jumps in recent term-structure models. This dissertation focusses on explaining the effects of jumps that occur at known times on the pricing of bonds. Filipovic (2009) affirms that the transition from the physical measure to the risk-neutral measure is key to the pricing of bonds and other financial instruments, and jumps in the interest rate at known times add a layer of complexity to this measure-change process. A simplified version of the term-structure model proposed by Kim and Wright (2014) is employed to analyse the effect of the jumps on the one-year point on the yield curve. Jumps at deterministic times are found to have a material effect on the one-year yield, with the effect increasing as time approaches a deterministic jump date.
- Analysis of CDO tranche valuation and the 2008 credit crisis (2013, Open Access) Muzenda, Nevison; Becker, Ronald. The causes of the 2008 financial crisis were wide-ranging. Some financial commentators have suggested there were significant inadequacies in the models used to price complex derivatives such as synthetic Collateralised Debt Obligations (CDOs). We discuss the technical properties of CDOs and the modelling approaches used by CDO traders and the watchdog credit rating agencies. We look at how the pricing models fared before and during the financial crisis. Comparing our model prices to market synthetic CDO prices, we investigate how well these pricing models captured the underlying financial risks of trading in CDOs.
- Analysis of equity and interest rate returns in South Africa under the context of jump diffusion processes (2015, Open Access) Mongwe, Wilson Tsakane. Over the last few decades, there has been vast interest in modelling asset returns using jump diffusion processes. This is partly a result of the realisation that standard diffusion processes, which do not allow for jumps, are unable to capture the stylized facts that return distributions are leptokurtic and heavy-tailed. Although jump diffusion models have been identified as useful for capturing these stylized facts, there has been no consensus on how these models should be calibrated. This dissertation tackles the calibration issue by considering the basic jump diffusion model of Merton (1976) applied to South African equity and interest rate market data. As there is little access to frequently updated volatility surfaces and option price data in South Africa, the calibration methods used in this dissertation are those that require only historical returns data. The methods used are the standard Maximum Likelihood Estimation (MLE) approach, the likelihood profiling method of Honore (1998), the Method of Moments Estimation (MME) technique and the Expectation Maximisation (EM) algorithm. The calibration methods are applied to both simulated and empirical returns data. The simulation and empirical studies show that the standard MLE approach sometimes produces estimators that are unreliable, being biased with wide confidence intervals, because the likelihood function required for the MLE method is unbounded. In the simulation studies, the MME approach produces results that do not make statistical sense, such as negative variances, and it is thus not used in the empirical analysis. The best method for calibrating the jump diffusion model to the empirical data is chosen by comparing the widths of the bootstrap confidence intervals of the estimators produced by each method. The empirical analysis indicates that the best method for calibrating equity returns is the EM approach, and the best method for calibrating interest rate returns is the likelihood profiling method of Honore (1998).
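The standard MLE approach discussed in the abstract above can be sketched for the Merton model: over an interval the log-return density is a Poisson-weighted mixture of normals, and the parameters are fitted by minimising the negative log-likelihood. All parameters below are hypothetical, and, as the abstract notes, this likelihood is unbounded, so a local optimiser may only find a local optimum.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, poisson

def merton_nll(params, x, dt, kmax=10):
    """Negative log-likelihood of log-returns x under Merton jump diffusion:
    a Poisson mixture of normals, truncated at kmax jumps per interval."""
    mu, sigma, lam, mu_j, sig_j = params
    if sigma <= 0 or lam < 0 or sig_j <= 0:
        return np.inf
    dens = np.zeros_like(x)
    for k in range(kmax + 1):
        w = poisson.pmf(k, lam * dt)
        dens += w * norm.pdf(x, mu * dt + k * mu_j,
                             np.sqrt(sigma**2 * dt + k * sig_j**2))
    return -np.sum(np.log(dens + 1e-300))

# Simulate returns from known parameters, then refit by MLE.
rng = np.random.default_rng(4)
dt, n = 1 / 252, 5000
mu, sigma, lam, mu_j, sig_j = 0.08, 0.15, 50.0, -0.01, 0.03
nj = rng.poisson(lam * dt, n)                      # jump counts per day
x = (mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
     + nj * mu_j + np.sqrt(nj) * sig_j * rng.standard_normal(n))

x0 = [0.0, 0.1, 20.0, 0.0, 0.05]
res = minimize(merton_nll, x0, args=(x, dt), method="Nelder-Mead")
mu_hat, sigma_hat, lam_hat, mu_j_hat, sig_j_hat = res.x
```

The likelihood-profiling, MME and EM alternatives compared in the dissertation are all ways of working around the fragility of this direct minimisation.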
- Analytical Solution of the Characteristic Function in the Trolle-Schwartz Model (2019, Open Access) Van Gysen, Richard John; McWalter, Thomas; Kienitz, Joerg. Trolle and Schwartz (2008) produced an instantaneous forward interest rate model with several stylised features, such as stochastic volatility. They derived pricing formulae for bonds and bond options, which can be adapted to price interest rate options such as caplets, caps and swaptions. These formulae involve numerical methods for solving an ordinary differential equation (ODE). Schumann (2016) confirmed the accuracy of the pricing formulae in the Trolle and Schwartz (2008) model using Monte Carlo simulation. Both authors used a numerical ODE solver to solve the ODE. In this dissertation, a closed-form solution for this ODE is presented. Two solutions were found; however, they rely on a simplification of the instantaneous volatility function originally proposed in the Trolle and Schwartz (2008) model, which happens to be the stochastic volatility version of the Hull and White (1990) model. The two solutions are compared to an ODE solver for one stochastic volatility term and then extended to three stochastic volatility terms.
- Application of Adjoint Differentiation (AD) for Calculating Libor Market Model Sensitivities (2018, Open Access) Morley, Niall; McWalter, Tom. This dissertation explores a key challenge of the financial industry: the efficient computation of sensitivities of financial instruments. The adjoint approach to solving affine recursion problems (ARPs) is presented as a solution to this challenge. A Monte Carlo setting is adopted, and it is illustrated how the computational efficiency of sensitivity calculation may be significantly improved via the pathwise derivatives method by adopting an adjoint approach. This is achieved by reversing the order of differentiation in the pathwise derivatives algorithm, in comparison to the standard, intuitive 'forward' approach. The Libor market model (LMM) framework is selected for examples demonstrating these computational savings, with varying degrees of complexity of the LMM explored, from a one-factor model with constant volatility to a full-factor model with time-homogeneous volatilities.
- Application of Effective Markovian Projection to SABR and Heston Models (2023, Open Access) Bagraim, Jacques; Ouwehand, Peter; McWalter, Thomas. Model flexibility is often at odds with tractable pricing, and models with tractable pricing often lack flexibility. This poses an issue when calibrating a model to market data, where tractability and flexibility are both required. We investigate an approach that allows one model to be projected onto another, potentially allowing a flexible model to be represented by a tractable one. Here, Effective Markovian Projection is used to obtain equivalent Heston model parameters from a range of SABR models with different skew parameters, using two distinct point-matching algorithms. The implied parameters are used to price European claims under a variety of schemes in order to assess the efficacy of the technique in this context. We see that the technique is accurate when the underlying probability densities of the two models match closely, i.e., when the SABR skew parameter approaches unity, as is seen by comparing prices of claims using Classic Markovian Projection, where the underlying SABR processes share the same density. PDE and perturbation SABR prices match closely, while Heston characteristic function prices become unstable at lower skew parameters and at far in-the-money and out-the-money strikes. Lastly, a potential improvement to this application involving error-correction terms is proposed for further work.
- Application of Volatility Targeting Strategies within a Black-Scholes Framework (2019, Open Access) Vakaloudis, Dmitri; Mahomed, Obeid. The traditional Black-Scholes (BS) model relies heavily on the assumption that underlying returns are normally distributed. In reality, however, there is substantial evidence that this assumption is weak and that actual return distributions are non-Gaussian. This dissertation looks at algorithmically generating a Volatility Targeting Strategy (VTS) that can be used as an underlying asset. The rationale is that since the VTS has a constant, prespecified level of volatility, its returns should be closer to normally distributed, thus tending closer to an underlying that adheres to the assumptions of BS.
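A minimal sketch of the volatility-targeting mechanism described above, under illustrative assumptions (a 21-day realized-volatility window, a 10% target and a leverage cap are all hypothetical choices, not the dissertation's specification): scale each period's return by the ratio of target to trailing realized volatility.

```python
import numpy as np

def vol_target_returns(returns, target_vol=0.10, window=21, ann=252, max_lev=3.0):
    """Scale daily returns so realized volatility tracks target_vol:
    weight_t = target_vol / trailing realized vol, capped at max_lev."""
    r = np.asarray(returns)
    out = np.zeros(len(r) - window)
    for t in range(window, len(r)):
        realized = r[t - window:t].std() * np.sqrt(ann)   # trailing annualized vol
        w = min(target_vol / max(realized, 1e-8), max_lev)
        out[t - window] = w * r[t]
    return out

rng = np.random.default_rng(5)
# Daily returns with 25% annualized volatility (illustrative constant-vol case).
raw = 0.25 / np.sqrt(252) * rng.standard_normal(2000)
vts = vol_target_returns(raw, target_vol=0.10)
realized = vts.std() * np.sqrt(252)   # should sit near the 10% target
```

With stochastic volatility in the raw returns the same scaling damps volatility clustering, which is what pushes the VTS return distribution back towards the Gaussian shape Black-Scholes assumes.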
- Applications of Gaussian Process Regression to the Pricing and Hedging of Exotic Derivatives (2021, Open Access) Muchabaiwa, Tinotenda Munashe; Ouwehand, Peter. Traditional option pricing methods like Monte Carlo simulation can be time-consuming when pricing and hedging exotic options under stochastic volatility models like the Heston model. The purpose of this research is to apply the Gaussian Process Regression (GPR) method to the pricing and hedging of exotic options under the Black-Scholes and Heston models. GPR is a supervised machine learning technique that uses a training set to train an algorithm so that it makes predictions. The training set is composed of an input matrix X of size n × p and a target vector Y of size n × 1, where n is the number of training inputs and p is the number of input features. Using a GPR with a squared-exponential kernel, tuned by maximising the log-likelihood, we established that GPR works reasonably well for pricing barrier options and Asian options under the Heston model. Compared to traditional Monte Carlo simulation, the GPR technique is 2 000 times faster when pricing barrier option portfolios of 100 assets and 1 000 times faster when computing a portfolio of Asian options. However, the squared-exponential GPR does not compute reliable hedging ratios under the Heston model: the delta is reasonably accurate, but the vega is off.
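The workflow described above, training on an n × p input matrix and an n × 1 price vector with a squared-exponential kernel tuned by log-likelihood maximisation, can be sketched with scikit-learn. As a hedge, this toy trains on vanilla Black-Scholes calls over a hypothetical (spot, vol) grid rather than the Heston exotics of the dissertation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def bs_call(S, K, r, sigma, T):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

K, r, T = 100.0, 0.05, 1.0
rng = np.random.default_rng(6)
# Training set: n x 2 inputs (spot, vol) and n x 1 Black-Scholes price targets.
X = np.column_stack([rng.uniform(80, 120, 400), rng.uniform(0.1, 0.4, 400)])
y = bs_call(X[:, 0], K, r, X[:, 1], T)

# Squared-exponential (RBF) kernel; hyperparameters are tuned by maximising
# the log-marginal-likelihood inside fit().
gpr = GaussianProcessRegressor(ConstantKernel() * RBF([10.0, 0.1]),
                               alpha=1e-8, normalize_y=True).fit(X, y)

X_test = np.array([[100.0, 0.2], [110.0, 0.3]])
pred = gpr.predict(X_test)
exact = bs_call(X_test[:, 0], K, r, X_test[:, 1], T)
```

Once fitted, `predict` is a cheap matrix-vector product per query, which is the source of the speed-ups quoted in the abstract; the fitting cost is paid once, offline.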
- Approximating the Heston-Hull-White Model (2019, Open Access) Patel, Riaz; Rudd, Ralph. The hybrid Heston-Hull-White (HHW) model combines the Heston (1993) stochastic volatility and Hull and White (1990) short rate models. Compared to stochastic volatility models, hybrid models improve the pricing and hedging of long-dated options and equity-interest rate hybrid claims. When the Heston and Hull-White components are uncorrelated, an exact characteristic function for the HHW model can be derived. In contrast, when the components are correlated, the more useful case for the pricing of hybrid claims, an exact characteristic function cannot be obtained. Grzelak and Oosterlee (2011) developed two approximations for this correlated case, such that characteristic functions are available. Within this dissertation, the approximations, referred to as the deterministic and stochastic approximations, were implemented to price vanilla options; this involved extending the Carr and Madan (1999) method to a stochastic interest rate setting. The approximations were then assessed for accuracy and efficiency. In determining an appropriate benchmark for assessing the accuracy of the approximations, the full truncation Milstein and Quadratic Exponential (QE) schemes, which are popular Monte Carlo discretisation schemes for the Heston model, were extended to the HHW model. These schemes were then compared against the characteristic function for the uncorrelated case, and the QE scheme was found to be more accurate than the Milstein-based scheme, with the differences in performance becoming increasingly noticeable when the Feller (1951) condition was not satisfied and when the maturity and the Hull-White volatility parameter were large. In assessing the accuracy of the approximations against the QE scheme, both approximations were similarly accurate when the Hull-White volatility was small. In contrast, when it was large, the stochastic approximation was more accurate than the deterministic approximation. However, the deterministic approximation was significantly faster than the stochastic approximation, and the stochastic approximation displayed signs of potential instability. When the Hull-White volatility is small, the deterministic approximation is therefore recommended for use in applications such as calibration. Given its shortcomings, the stochastic approximation could not be recommended; however, it did show promising signs of accuracy that warrant further investigation into its efficiency and stability.
- An assessment of the application of cluster analysis techniques to the Johannesburg Stock Exchange (2014, Open Access) Tully, Robyn. Cluster analysis is becoming an increasingly popular method in modern finance because of its ability to summarise large amounts of data and so help individual and institutional investors to make timeous and informed investment decisions. This is no less true for investors in smaller, emerging markets, such as the Johannesburg Stock Exchange (JSE), than it is for those in the larger global markets. This study examines the application of two clustering techniques to the JSE. First, the application of Salvador and Chan's (2003) L method stopping rule to a hierarchical clustering of time series return data was analysed as a method for determining the number of latent groups in the data set. Using Ward's method and the Euclidean distance function, this method appears able to detect the correct number of clusters on the JSE. Second, the ability of three different clustering algorithms to generate consistent clusters and cluster members over time on the JSE was analysed. The variation of information was used to measure the consistency of cluster members through time. Hierarchical clustering using Ward's method and the Euclidean distance measure produced the most consistent results, while the K-means algorithm generated the least consistent cluster members.
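The hierarchical-clustering setup named above (Ward's method with Euclidean distances on return series) can be sketched with SciPy on synthetic data. The three-factor structure and group sizes below are hypothetical, chosen only so that the latent groups are recoverable; this is not the study's JSE data.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(7)
# Synthetic daily return series: three latent groups of correlated stocks,
# each stock loading on its group factor plus idiosyncratic noise.
n_days, per_group = 250, 10
factors = rng.standard_normal((n_days, 3))
series = []
for g in range(3):
    for _ in range(per_group):
        series.append(0.7 * factors[:, g] + 0.3 * rng.standard_normal(n_days))
X = 0.01 * np.array(series)            # rows = stocks, columns = days

# Agglomerative clustering with Ward's method and Euclidean distances.
Z = linkage(X, method="ward", metric="euclidean")
labels = fcluster(Z, t=3, criterion="maxclust")
```

In the study the number of clusters `t` is not assumed known; the L method stopping rule estimates it from the shape of the merge-distance curve encoded in `Z`.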
- Asymptotics of the Rough Heston Model (2021, Open Access) Hayes, Joshua J; Ouwehand, Peter. The recent explosion of work on rough volatility and fractional Brownian motion has led to the development of a new generation of stochastic volatility models. Such models are able to capture a wide range of stylised facts that classical models simply do not. While these models have sound mathematical underpinnings, they are difficult to implement, largely because fractional Brownian motion is neither Markovian nor a semimartingale. One idea is to investigate the behaviour of these models as maturities become very small (or very large) and to consider asymptotic estimates for quantities of interest. Here we investigate the performance of small-time asymptotic formulae for the cumulant generating function of the Fractional Heston model as presented in Guennoun et al. (2018). These formulae and their effectiveness for small-time pricing are interrogated and compared against the Rough Heston model proposed by El Euch and Rosenbaum (2019).
- Bias-Free Joint Simulation of Multi-Factor Short Rate Models and Discount Factor (2018, Open Access) Lopes, Marcio Ferrao; McWalter, Tom; Kienitz, Joerg. This dissertation explores the use of single- and multi-factor Gaussian short rate models for the valuation of interest rate sensitive European options. Specifically, the focus is on deriving the joint distribution of the short rate and the discount factor, so that an exact and unbiased simulation scheme can be derived for risk-neutral valuation. The derivation of the joint distribution remains tractable when working with the class of Gaussian short rate models. The dissertation compares three joint and exact simulation schemes for the short rate and the discount factor in the single-factor case, and two schemes in the multi-factor case. We price European floor options and European swaptions using a two-factor Gaussian short rate model and explore the use of variance reduction techniques. We compare the exact and unbiased schemes to other solutions available in the literature: simulating the short rate under the forward measure and approximating the discount factor using quadrature.
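In the simplest single-factor case (Vasicek, i.e. Hull-White with constant parameters), the joint distribution referred to above is available in closed form: the short rate r_T and its integral I_T = ∫₀ᵀ r_t dt are bivariate Gaussian, so discount factors exp(-I_T) can be sampled exactly and without bias. A minimal sketch with illustrative parameters (not the dissertation's two-factor implementation):

```python
import numpy as np

def vasicek_joint_exact(r0, kappa, theta, sigma, T, n_paths, rng):
    """Exact joint simulation of (r_T, I_T) for dr = kappa(theta - r)dt + sigma dW,
    where I_T = int_0^T r_t dt; the pair is bivariate Gaussian."""
    e = np.exp(-kappa * T)
    m_r = theta + (r0 - theta) * e
    m_I = theta * T + (r0 - theta) * (1 - e) / kappa
    v_r = sigma**2 * (1 - e**2) / (2 * kappa)
    v_I = sigma**2 / kappa**2 * (T - 2 * (1 - e) / kappa + (1 - e**2) / (2 * kappa))
    c = sigma**2 / kappa * ((1 - e) / kappa - (1 - e**2) / (2 * kappa))
    cov = np.array([[v_r, c], [c, v_I]])
    rT, IT = rng.multivariate_normal([m_r, m_I], cov, size=n_paths).T
    return rT, IT

r0, kappa, theta, sigma, T = 0.05, 0.8, 0.06, 0.02, 2.0
rng = np.random.default_rng(8)
rT, IT = vasicek_joint_exact(r0, kappa, theta, sigma, T, 200_000, rng)

# Unbiased discount factors: since I_T is Gaussian, the Monte Carlo mean of
# exp(-I_T) should match the lognormal mean exp(-E[I_T] + Var[I_T]/2),
# which is the analytic zero-coupon bond price.
mc_bond = np.exp(-IT).mean()
```

Because the scheme samples the exact joint law, the only error in the bond price check is statistical; there is no discretisation bias to control, which is the point of the dissertation's exact schemes.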