
Deep Learning Interpretability for Rough Volatility

A group of researchers from Cambridge, Imperial College London, the Alan Turing Institute, and Mediobanca, namely Damiano Brigo, Bo Yuan, Jack Jacquier, and Nicola Pede, has recently published a preprint entitled Deep Learning Interpretability for Rough Volatility. Here is a non-technical overview of the preprint.

Understanding Deep Learning Interpretability in Finance: A Focus on Rough Volatility Models

In finance, accurate modeling of market dynamics is essential for pricing and managing risks in investments. Traditional models, while useful, often fail to capture certain market features, like the “rough” patterns in volatility that are evident in equity and foreign exchange markets. These limitations have led to the adoption of rough volatility models, which provide a more precise depiction of market behaviors. However, these advanced models bring computational challenges, especially in pricing and calibration. This paper explores how deep learning can help overcome these challenges while addressing the critical need for interpretability in such “black box” methods.

Rough Volatility and the Role of Neural Networks

Rough volatility models differ from traditional approaches by incorporating fractional Brownian motion, which better reflects real-world market irregularities. Among these, the “rough Heston model” is widely regarded for its ability to replicate observed market behaviors. However, calibrating these models—matching them to real-world data—can be resource-intensive.
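
For orientation, here is one commonly quoted form of the rough Heston variance dynamics; parameter conventions differ slightly across papers, so this should be read as a sketch rather than as the authors' exact specification:

    V_t = V_0 + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \lambda (\theta - V_s)\, ds
              + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \nu \sqrt{V_s}\, dW_s,
    \qquad \alpha = H + \tfrac{1}{2},

with Hurst exponent H in (0, 1/2), mean-reversion speed \lambda, long-run variance level \theta, volatility-of-volatility \nu, and W correlated with the Brownian motion driving the asset price. The smaller H is, the rougher the volatility paths; empirical studies typically report H of roughly 0.1.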

This study employs deep neural networks (DNNs) to streamline the calibration process. The networks are trained to map observed market data, such as the pattern of implied volatilities across maturities and strikes, to the rough Heston model parameters. This approach is faster and more adaptable than traditional calibration methods, but the complexity of neural networks raises concerns about interpretability, that is, about understanding how inputs relate to outputs.
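
To make this concrete, here is a minimal sketch of such a calibration network in PyTorch. The grid size, layer widths, and the set of six output parameters are illustrative assumptions rather than the architecture used in the preprint, and the random tensors stand in for synthetic training pairs of implied volatility surfaces and model parameters.

import torch
import torch.nn as nn

# Hypothetical grid: 8 maturities x 11 strikes of implied volatilities,
# mapped to 6 rough Heston parameters (e.g. H, nu, rho, V0, theta, lambda).
N_MATURITIES, N_STRIKES, N_PARAMS = 8, 11, 6

class CalibrationNet(nn.Module):
    """Feed-forward map from a flattened implied-vol surface to model parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_MATURITIES * N_STRIKES, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_PARAMS),
        )

    def forward(self, surface):
        # surface: (batch, N_MATURITIES * N_STRIKES) of implied volatilities
        return self.net(surface)

model = CalibrationNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder data; in practice these would be surfaces priced under the
# rough Heston model for sampled parameter values.
surfaces = torch.rand(256, N_MATURITIES * N_STRIKES)
params = torch.rand(256, N_PARAMS)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(surfaces), params)
    loss.backward()
    optimizer.step()

Once trained, such a network replaces the slow optimisation loop of classical calibration: a quoted surface goes in, and a parameter estimate comes out in a single forward pass.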

Unpacking the Black Box

To ensure that DNNs can be trusted in financial applications, the authors conducted a detailed interpretability analysis using various tools:

  1. Local Interpretability: Methods like LIME (Local Interpretable Model-agnostic Explanations) analyze how individual predictions are made by approximating the DNN locally with simpler models.
  2. Global Interpretability: Techniques such as SHAP (SHapley Additive exPlanations) assess the overall importance of different input features (e.g., implied volatilities at specific maturities and strikes) in the model’s predictions; a brief code sketch of both tools follows this list.
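
Continuing the calibration sketch from the previous section, the snippet below shows how these two tools are typically wired up to such a network, using the open-source shap and lime packages; the numpy wrapper, feature names, and sample sizes are illustrative assumptions.

import torch
import shap
from lime.lime_tabular import LimeTabularExplainer

# `model`, `surfaces`, N_MATURITIES and N_STRIKES come from the sketch above;
# feature i is the implied volatility at one (maturity, strike) point of the grid.
def predict_numpy(x):
    """Wrap the torch model so shap/lime can call it on numpy arrays."""
    with torch.no_grad():
        return model(torch.tensor(x, dtype=torch.float32)).numpy()

background = surfaces[:50].numpy()  # reference sample for SHAP
feature_names = [f"iv_m{m}_k{k}" for m in range(N_MATURITIES) for k in range(N_STRIKES)]

# Global view: kernel SHAP attributes each predicted parameter to the input vols.
shap_explainer = shap.KernelExplainer(predict_numpy, background)
shap_values = shap_explainer.shap_values(surfaces[:5].numpy(), nsamples=200)

# Local view: LIME explains a single prediction of a single parameter (output 0 here)
# by fitting a simple surrogate model around that one input.
lime_explainer = LimeTabularExplainer(background, feature_names=feature_names, mode="regression")
explanation = lime_explainer.explain_instance(
    surfaces[0].numpy(), lambda x: predict_numpy(x)[:, 0], num_features=10
)
print(explanation.as_list())

Aggregating the SHAP attributions across many surfaces gives the global picture of which maturity and strike regions drive each calibrated parameter, while each LIME explanation covers one prediction at a time.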

The findings reveal that short-maturity and in-the-money volatilities significantly influence model outputs, consistent with financial theory. This understanding bridges the gap between the “black box” nature of DNNs and traditional financial intuition.

Implications and Future Directions

The paper highlights that simpler neural network architectures can achieve high accuracy in calibrating rough volatility models. However, challenges remain, such as addressing outliers in predictions and improving interpretability for all model parameters. The authors advocate for further research using real market data to refine these methods.

By combining the computational power of DNNs with robust interpretability tools, this research paves the way for safer and more efficient use of machine learning in quantitative finance. It underscores the importance of transparency in adopting advanced technologies for critical financial decisions.

Preprint

SSRN: https://lnkd.in/eq4Zpwsh
ArXiv: https://lnkd.in/ej_Nxgcx

Previous paper on interpretability for the classical (non-rough) Heston model:

SSRN: https://lnkd.in/dNY-NtN
ArXiv: https://lnkd.in/e4_-dXHD
