### Browsing by Author "Claassen, Quintin"

Now showing 1 - 1 of 1


- Item (Open Access): *A Comparison Between Break-Even Volatility and Deep Hedging for Option Pricing* (2022). Claassen, Quintin; Mahomed, Obeid.

  The Black-Scholes (1973) closed-form option pricing approach is underpinned by numerous well-known assumptions (see Taleb (1997, pp. 110-111) or Wilmott (1998, ch. 19)); much attention has been paid in particular to the assumption of constant volatility, which does not hold in practice (Yalincak, 2012). The industry standard is to apply various volatility estimation and parameterisation techniques when pricing, in order to more closely recover the market-implied volatility skew. One such technique is Break-Even Volatility (BEV): retrospectively solving for the volatility that sets the hedging profit and loss at option maturity to zero (conditional on a single stock price path, or a set of paths). However, BEV still means pricing within existing model frameworks (and inheriting the assumptions that come with them). The newer paradigm of Deep Hedging (DH), as explored by Buehler et al. (2019), i.e. using deep neural networks to solve for optimal option prices (and the respective parameters needed to hedge these options at discrete time steps), allows the market-maker to go 'model-free', in the sense of pricing without any prior assumptions about stock price dynamics (assumptions that the traditional closed-form pricing approach requires). Using simulated stock price data under various model dynamics, we first investigate whether DH is more successful than BEV in recovering the model-implied volatility surface. We find that both perform reasonably well for time-homogeneous models, but DH struggles to recover correct results for time-inhomogeneous models. Thereafter, we analyse the impact of incorporating risk aversion into both approaches, for time-homogeneous models only. We find that both methods produce pricing results in line with varying levels of risk aversion. We note the simple architecture of our DH neural network as a potential point of departure for more complex networks.
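The Break-Even Volatility idea described in the abstract (solve for the volatility that zeroes the delta-hedging profit and loss at maturity on a given path) can be sketched in a few lines. This is an illustrative, stdlib-only sketch, not the dissertation's implementation: it sells a call at the Black-Scholes price for a candidate volatility, delta-hedges discretely along one simulated GBM path, and bisects on volatility until the terminal P&L is (approximately) zero. All parameter values and helper names are assumptions for the example.

```python
import math
import random

def _norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call_price(S, K, T, r, sigma):
    """Black-Scholes European call price."""
    if T <= 0:
        return max(S - K, 0.0)
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * _norm_cdf(d1) - K * math.exp(-r * T) * _norm_cdf(d2)

def bs_delta(S, K, T, r, sigma):
    """Black-Scholes call delta."""
    if T <= 0:
        return 1.0 if S > K else 0.0
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return _norm_cdf(d1)

def hedge_pnl(path, K, T, r, sigma):
    """Terminal P&L of selling a call at BS(sigma) and delta-hedging along `path`."""
    n = len(path) - 1
    dt = T / n
    # receive the premium, buy the initial delta of stock
    cash = bs_call_price(path[0], K, T, r, sigma) - bs_delta(path[0], K, T, r, sigma) * path[0]
    delta = bs_delta(path[0], K, T, r, sigma)
    for i in range(1, n + 1):
        cash *= math.exp(r * dt)                      # accrue interest on the cash account
        t_left = T - i * dt
        new_delta = bs_delta(path[i], K, t_left, r, sigma) if i < n else 0.0
        cash += (delta - new_delta) * path[i]         # rebalance (liquidate at maturity)
        delta = new_delta
    return cash - max(path[-1] - K, 0.0)              # settle the short option

def break_even_vol(path, K, T, r, lo=0.01, hi=1.0, tol=1e-6):
    """Bisect for the sigma that zeroes the hedging P&L.

    Assumes P&L is increasing in sigma on [lo, hi] (higher vol, higher premium),
    which is the usual working assumption in the BEV setting.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if hedge_pnl(path, K, T, r, mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Simulate one GBM path with a 'true' volatility of 20% (illustrative parameters).
random.seed(0)
S0, K, T, r, true_vol, n = 100.0, 100.0, 1.0, 0.0, 0.20, 252
dt = T / n
path = [S0]
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    path.append(path[-1] * math.exp((r - 0.5 * true_vol**2) * dt + true_vol * math.sqrt(dt) * z))

bev = break_even_vol(path, K, T, r)
print(f"break-even volatility: {bev:.4f}")  # close to the true 20%, up to discrete-hedging noise
```

With frequent rebalancing, the single-path BEV lands near the volatility that generated the path; the residual gap is the discrete-hedging error the abstract alludes to, and averaging over a set of paths (or strikes and maturities) is what lets BEV trace out a skew.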