Quant research in 2025 is no longer about finding the “right model,” but about building coherent systems that integrate mathematics, data, execution, and uncertainty – while knowing precisely where they can break.
Quantitative finance in 2025 has moved beyond a search for ever more refined closed-form models and into a system-level view of markets. Research now treats markets as interacting, data-rich, and path-dependent systems rather than collections of independent assets. This is reflected in the rise of market generators, particle simulations, and scenario-based methods that aim to reproduce entire joint market dynamics, including tail events and liquidity effects. At the same time, there is a renewed respect for empirical laws—such as the square-root impact law—that show stable, universal patterns can still emerge from complex, adaptive market behavior when studied with sufficiently rich data and rigor.
Artificial intelligence is now embedded rather than idolized. Machine learning, generative models, and reinforcement learning are used selectively, where they align with financial structure and economic meaning. Instead of replacing classical ideas, these tools extend them: Markowitz-style portfolio theory is enhanced with ML, hedging is performed via data-driven scenario generation, and reinforcement learning is applied to genuinely sequential decision problems. Crucially, interpretability, robustness, and validation have become central concerns, with explicit attention paid to when models fail, how synthetic data can mislead, and why evaluation must be tied to the intended financial application rather than abstract statistical fit.
Finally, risk and execution have become central organizing principles rather than secondary considerations. Risk is treated as dynamic, time-consistent, and endogenous to trading itself, leading to new frameworks for dynamic risk budgeting, execution-aware portfolio choice, and robust pricing under model uncertainty. Liquidity, transaction costs, internalization, and market microstructure are built directly into optimal strategies. Overall, the field in 2025 is more mature and self-aware: it seeks not perfect models, but coherent, transparent systems that integrate data, theory, and market realities—while being explicit about their limits.
Here is a summary of some of the 2025 publications that our team has found particularly interesting, in no particular order.
Universal Approximation Theorem and Error Bounds for Quantum Neural Networks and Quantum Reservoirs
Lukas Gonon and Antoine Jacquier
Neural networks work because, in theory, they can learn to imitate almost any function we care about — this idea is known as a universal approximation theorem. It explains why today’s AI systems are so flexible and powerful.
Recent research shows that a similar idea holds for quantum computers. Instead of using traditional neural networks, we can use quantum circuits whose adjustable settings play the role of neural network weights. These quantum systems can also learn to approximate ordinary mathematical functions.
In this work, the authors go further by giving precise guarantees on how accurate such quantum neural networks can be. They also study a newer and simpler type of model where the quantum circuits are partly random — similar to how reservoir computing works in classical AI, where only a small part of the system is trained.
The key result is striking: to reach a desired accuracy level, a quantum neural network only needs a moderate number of adjustable parameters and a very small number of qubits.
In practical terms, this means that even relatively small quantum computers could, in principle, perform meaningful learning tasks, at least for a broad class of smooth functions. This strengthens the case that quantum machine learning is not just theoretically possible, but potentially efficient and scalable.
Tail-GAN: Learning to Simulate Tail Risk Scenarios
Rama Cont, Mihai Cucuringu, Renyuan Xu, and Chao Zhang
Managing financial risk requires imagining many possible futures for markets — especially extreme ones where losses can be large. This is difficult because modern portfolios often contain many assets whose prices move together in complex ways, particularly during market stress.
This research introduces a new, data-driven method for generating realistic market scenarios for portfolios with many assets. The focus is not just on average behavior, but on tail risk — the rare but severe losses that matter most to risk managers, regulators, and investors.
The authors use a modern AI technique called a Generative Adversarial Network (GAN), which learns to create realistic simulations by competing against itself. Crucially, the model is designed to accurately reproduce two key financial risk measures — Value-at-Risk and Expected Shortfall — which describe how bad losses can get in extreme situations.
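For intuition, this is how the two targeted risk measures are computed from a sample of scenario profit-and-loss values. The snippet is only a reminder of what VaR and Expected Shortfall mean, not the paper's training objective or architecture; the confidence level and toy data are illustrative.

```python
import numpy as np

def var_es(pnl, alpha=0.99):
    """Empirical Value-at-Risk and Expected Shortfall of a P&L sample.

    Losses are reported as positive numbers; alpha is the confidence level.
    """
    losses = -np.asarray(pnl)
    var = np.quantile(losses, alpha)       # loss level exceeded with prob. 1 - alpha
    es = losses[losses >= var].mean()      # average loss beyond VaR
    return var, es

# Toy heavy-tailed scenario P&Ls standing in for generator output.
rng = np.random.default_rng(0)
pnl = rng.standard_t(df=3, size=100_000) * 0.01
print(var_es(pnl))
```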
Tests on both simulated data and real market data show that this approach produces much more reliable risk scenarios than earlier methods. It works well not only for simple portfolios, but also for complex trading strategies that change over time, and it continues to perform well even when applied to new, unseen data.
To handle portfolios with very large numbers of assets, the method can be combined with a standard dimensionality-reduction technique, allowing it to scale efficiently without losing important risk information.
The work shows that AI can be used to simulate financial markets in a way that faithfully captures extreme risks, making it a powerful tool for stress testing, portfolio management, and financial stability analysis — far beyond the simplified models traditionally used in practice.
Data-driven hedging with generative models
Rama Cont and Milena Vuletić
Hedging is how investors protect themselves against market risk, often by using mathematical models that estimate how prices will react to small changes in the market. While effective in theory, these traditional methods depend heavily on assumptions that can break down in real markets — especially during volatile periods.
This research proposes a different approach. Instead of relying on simplified pricing models, it uses artificial intelligence trained directly on historical market data to learn how markets actually behave. The method employs a generative model — a type of AI that can create realistic market scenarios — conditioned on current market conditions.
By simulating many plausible futures, the system automatically chooses hedge positions that reduce risk across all scenarios (a minimal sketch of this scenario-based selection follows the list below). Importantly, it:
- accounts for transaction costs,
- selects the most effective hedging instruments, and
- adapts as market conditions change.
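Here is a minimal sketch of the scenario-based selection idea, assuming a generator already exists that produces joint P&L scenarios for the portfolio and for the candidate hedge instruments. The expected-shortfall-plus-costs objective, the proportional cost level, and the toy scenarios are illustrative choices, not the paper's specification.

```python
import numpy as np
from scipy.optimize import minimize

def hedge_weights(port_pnl, hedge_pnl, cost_per_unit=0.0005, alpha=0.95):
    """Choose hedge positions that reduce tail risk across simulated scenarios.

    port_pnl  : (n_scenarios,) P&L of the portfolio to be hedged
    hedge_pnl : (n_scenarios, n_instruments) P&L of candidate hedge instruments
    """
    n = hedge_pnl.shape[1]

    def objective(w):
        hedged = port_pnl + hedge_pnl @ w
        losses = -hedged
        var = np.quantile(losses, alpha)
        es = losses[losses >= var].mean()          # tail risk of the hedged book
        return es + cost_per_unit * np.abs(w).sum()  # plus proportional costs

    return minimize(objective, x0=np.zeros(n), method="Nelder-Mead").x

# Toy scenarios standing in for the output of a conditional generative model.
rng = np.random.default_rng(1)
hedge_pnl = rng.normal(size=(20_000, 2)) * np.array([0.01, 0.02])
port_pnl = -hedge_pnl @ np.array([0.8, 0.3]) + rng.normal(scale=0.002, size=20_000)
print(hedge_weights(port_pnl, hedge_pnl))
```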
The authors demonstrate the approach using option portfolios, where risk is notoriously difficult to manage. They rely on a specialized AI model that learns the behavior of entire volatility surfaces — a key object in options markets — rather than focusing on just a few parameters.
The results are striking. When tested on new market data, the AI-based hedging strategy performs as well as or better than standard hedging methods (such as delta or delta-vega hedging). Even more impressively, the model continues to work well for over four years without needing to be retrained, suggesting strong robustness and practical usability.
In short, this work shows that data-driven AI methods can hedge financial risk more reliably than traditional formulas, offering a flexible and realistic alternative for managing complex portfolios in real markets.
Market Generators: A Paradigm Shift in Financial Modeling
Blanka Horvath, Jonathan Plenk, Milena Vuletić, and Raeid Saqur
Financial markets are often modeled using mathematical formulas that try to describe how prices move. While useful, these traditional models struggle to capture the true complexity of real markets — especially during periods of stress or unusual behavior.
A new approach is emerging: Market Generators. These are AI systems based on neural networks that learn directly from historical market data. Instead of relying on fixed assumptions, they learn the patterns and probabilities underlying market behavior and can then generate realistic, synthetic market scenarios that resemble real-world outcomes.
Although the terms Market Generator and Market Simulator only became common around 2019, the field has grown rapidly. Today it represents a distinct and fast-moving area of financial modeling, driven by major advances in machine learning and computing power. Research activity and innovation in this space are accelerating at a remarkable pace.
This movement is part of a much broader technological shift. The same generative techniques used to simulate markets are closely related to those behind large AI systems such as modern language models. These systems have demonstrated an unprecedented ability to learn structure from data and create new, convincing outputs — whether text, images, or financial scenarios.
In essence, Market Generators signal a transition from hand-crafted financial theories to data-driven market simulation, mirroring the wider transformation underway across science, industry, and technology. As these tools mature, they are poised to reshape how risk is analyzed, strategies are tested, and financial systems are understood.
Reinforcement Learning for Asset and Portfolio Management
Petter N. Kolm and Gordon Ritter
Reinforcement learning is a branch of artificial intelligence that focuses on learning by experience. Instead of solving a problem in one step, it learns a strategy — called a policy — by repeatedly interacting with an environment, observing outcomes, and improving over time. This makes it especially suitable for financial problems where decisions unfold sequentially and today’s actions affect tomorrow’s opportunities.
This article explains how reinforcement learning can be applied to asset and portfolio management, including trading, hedging, and long-term investment decisions. Unlike traditional financial optimization methods, which typically assume static conditions, reinforcement learning naturally handles changing markets, transaction costs, delays, and feedback effects.
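To make the sequential framing concrete, here is a schematic of how such a problem is usually cast as an environment with states, actions, and rewards. The reward shaping (P&L minus proportional costs minus a quadratic risk penalty) is a common convention in this literature rather than a quotation from the article, and the rollout uses a naive hand-coded policy in place of a learned one.

```python
import numpy as np

class PortfolioEnv:
    """Minimal single-asset trading environment: state -> action -> reward.

    Reward = P&L - transaction cost - risk penalty; any RL algorithm could be
    plugged in on top of this interface.
    """
    def __init__(self, returns, cost=0.001, risk_aversion=5.0):
        self.returns, self.cost, self.risk_aversion = returns, cost, risk_aversion

    def reset(self):
        self.t, self.position = 0, 0.0
        return np.array([self.returns[self.t], self.position])   # the "state"

    def step(self, target_position):
        trade = target_position - self.position
        self.position = target_position
        r = self.returns[self.t + 1]                  # next-period return
        pnl = self.position * r
        reward = pnl - self.cost * abs(trade) - self.risk_aversion * pnl**2
        self.t += 1
        done = self.t >= len(self.returns) - 2
        return np.array([r, self.position]), reward, done

# Rollout with a naive momentum policy, standing in for a learned one.
rng = np.random.default_rng(2)
env = PortfolioEnv(rng.normal(0, 0.01, 500))
state, done, total = env.reset(), False, 0.0
while not done:
    action = float(np.sign(state[0]))     # chase the last observed return
    state, reward, done = env.step(action)
    total += reward
print(f"total reward of naive policy: {total:.4f}")
```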
The authors review a range of real-world use cases, showing where reinforcement learning has already demonstrated promising results — for example, in adaptive trading strategies and dynamic portfolio allocation. At the same time, they are clear about the challenges: reinforcement learning systems can be hard to train, sensitive to data quality, and difficult to validate in high-stakes financial environments.
Rather than presenting RL as a silver bullet, the article focuses on practical lessons for portfolio managers and traders. It explains what reinforcement learning is genuinely good at, where caution is required, and how it should be interpreted alongside existing financial intuition and models.
The article concludes by looking ahead. It suggests that the most effective future systems will likely combine reinforcement learning with traditional financial models, make use of richer data and realistic market simulators, and continuously learn and adapt as markets evolve.
Overall, the message is balanced and pragmatic: reinforcement learning has the potential to transform financial decision-making, but its real value lies in thoughtful integration with established practice, not wholesale replacement.
The Universal Law Behind Market Price Swings
Jean-Philippe Bouchaud
Many people believe that financial markets are too chaotic and driven by human psychology to obey real “laws,” unlike physics. This article argues the opposite — and presents strong evidence to support that claim.
Using an exceptionally detailed dataset covering every single trade on the Tokyo Stock Exchange over eight years, researchers found a simple and universal mathematical rule that governs how prices move when stocks are bought or sold.
The rule — known as the square-root law — says that when someone trades a certain amount of an asset, the resulting price change grows with the square root of the trade size, not proportionally. In plain terms, doubling the size of a trade does not double its impact on price; it increases it only by a factor of about 1.4 (the square root of two). This relationship holds across all stocks, regardless of their size, liquidity, or who is trading them.
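In symbols, with Q the quantity traded, V the daily traded volume, σ the daily volatility, and Y a constant of order one (standard notation in the impact literature, not reproduced from the article):

```latex
\Delta P \;\approx\; Y \, \sigma \, \sqrt{\frac{Q}{V}}
```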
This finding is important for several reasons:
- It confirms that universal patterns can emerge in markets, even though individual traders behave unpredictably.
- It validates earlier hints of this law using much smaller or proprietary datasets.
- It shows that market behavior can be studied with the same empirical rigor as physics — using large datasets, reproducible methods, and precise testing.
The researchers were also able to rule out several alternative explanations that suggested the effect might be a statistical illusion. Instead, the results point to a deeper mechanism related to how liquidity (the willingness to buy or sell at nearby prices) is distributed around the current market price. As trades push prices away from their starting point, available liquidity increasingly resists further movement — naturally producing the square-root effect.
Why does this matter? Because price impact is the main way trading decisions turn into market movements. Understanding it better helps explain why markets are usually stable — yet sometimes crash suddenly and violently.
Overall, this work shows that financial markets are not lawless. Beneath the noise of daily trading lies a robust, universal structure — one that emerges from the collective behavior of thousands of participants and may help us better understand both market efficiency and fragility.
Synthetic Data for Portfolios: A Throw of the Dice Will Never Abolish Chance
Adil Rengim Cetingoz, Charles-Albert Lehalle
Simulating financial markets has long been essential for managing portfolios and understanding risk. In recent years, generative AI models — systems that learn directly from data and create realistic new scenarios — have attracted a lot of excitement, especially after their success in areas like language and image generation. Yet in finance, their real-world adoption has lagged behind the hype.
This paper explains why.
The authors show that financial markets pose unique challenges that generic generative models often fail to handle properly. One key issue is data scarcity: if a model is trained on limited historical data, generating vast amounts of artificial data can actually amplify errors rather than reduce uncertainty. More simulated data does not automatically mean better insight.
Another important finding is a mismatch between what generative models learn and what portfolio managers actually care about. Standard models tend to focus on reproducing average statistical features, while many investment strategies — especially long–short portfolios — depend critically on subtle relationships and relative differences that these models often ignore. In other words, the model may look realistic, yet be useless for decision-making.
To address these problems, the paper proposes a carefully designed simulation pipeline for generating multi-asset returns. This approach:
- respects well-known empirical patterns seen in real financial data,
- performs well when tested on a large universe of US stocks,
- and avoids the most common pitfalls of naive data generation.
The authors also emphasize that evaluation matters as much as modeling. They argue that traditional validation methods can miss serious flaws. As an alternative, they propose a powerful stress test: retraining a model on the data it has generated itself. If performance collapses, the model is likely unsuitable for the intended application — a warning sign known in statistics as a lack of identifiability.
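The logic of that diagnostic can be illustrated with a deliberately simple stand-in generator, here a Gaussian fitted by maximum likelihood; the paper's pipeline is far richer, but the retrain-on-your-own-output test works the same way.

```python
import numpy as np

def fit(data):
    """'Train' a toy generator: fit mean and covariance by maximum likelihood."""
    return data.mean(axis=0), np.cov(data, rowvar=False)

def generate(params, n, rng):
    mu, cov = params
    return rng.multivariate_normal(mu, cov, size=n)

rng = np.random.default_rng(3)
real = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=250)

params = fit(real)                        # model trained on (scarce) real data
synthetic = generate(params, 100_000, rng)
params_refit = fit(synthetic)             # model retrained on its own output

# If the refit parameters (and any portfolio built from them) drift materially
# from the original fit, the generator is a poor basis for the application.
corr = lambda c: c[0, 1] / np.sqrt(c[0, 0] * c[1, 1])
print("original corr:", corr(params[1]), " refit corr:", corr(params_refit[1]))
```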
The paper delivers a sober message: generative models can be useful in finance, but only if they are designed, tested, and evaluated with the specific financial task in mind. Blindly applying fashionable AI techniques is not enough — realism must be judged by whether a model supports sound portfolio and risk decisions.
Optimal Liquidation With Signals: The General Propagator Case
Eduardo Abi Jaber, Eyal Neuman
When large investors buy or sell assets, their trades can move market prices — sometimes briefly, sometimes with effects that linger. Managing this price impact is a central challenge when liquidating large positions without causing unnecessary losses.
This article studies how an investor should optimally sell assets when their trades temporarily distort prices and when those distortions fade over time rather than disappearing instantly. The authors also allow the trader to make use of predictive information — signals that give hints about where prices may move next.
The problem is framed as a careful balancing act:
- selling too fast pushes prices against the trader,
- selling too slowly increases exposure to market risk.
Using advanced mathematical tools, the authors derive exact formulas for the best possible trading strategy under these conditions. Rather than relying on trial-and-error simulations, they obtain solutions that precisely describe when and how much to trade, taking into account both price impact and risk.
A key strength of the results is their practical usability. The final formulas can be implemented efficiently on a computer and work even for realistic market conditions where price impact behaves irregularly — including cases where market reactions follow long-memory or power-law patterns often observed in real trading data.
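The price-distortion mechanism behind these results, often called a propagator model, can be sketched in a few lines: past trading rates are convolved with a decay kernel, here a power-law kernel with illustrative parameters. The optimal strategies themselves come from the paper's closed-form solutions and are not reproduced here.

```python
import numpy as np

def impacted_price(trading_rate, dt, unaffected, beta=0.4, eps=1e-3, push=0.1):
    """Price under a linear propagator model (illustrative parameters):

        P_t = unaffected_t + push * sum_{s <= t} G(t - s) * rate_s * dt,
        G(tau) = (tau + eps) ** (-beta)   # slowly decaying power-law kernel
    """
    n = len(trading_rate)
    t = np.arange(n) * dt
    price = np.array(unaffected, dtype=float)
    for i in range(n):
        tau = t[i] - t[: i + 1]
        kernel = (tau + eps) ** (-beta)
        price[i] += push * np.sum(kernel * trading_rate[: i + 1]) * dt
    return price

# Selling at a constant rate depresses the price; the distortion decays only
# slowly after trading stops because of the kernel's long memory.
rate = np.concatenate([np.full(50, -1.0), np.zeros(50)])   # sell, then stop
print(impacted_price(rate, dt=0.01, unaffected=np.full(100, 100.0))[[0, 49, 99]])
```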
In short, the work provides a rigorous yet practical framework for how large trades should be executed in real markets, showing how traders can systematically reduce costs and risk while exploiting available market signals. It bridges the gap between highly abstract theory and actionable trading strategies used by professional investors.
Dynamic Portfolio Choice with Intertemporal Hedging and Transaction Costs
Johannes Muhle-Karbe, James Sefton, Xiaofei Shi
Investors don’t operate in a frictionless world. Trading costs money, prices move unpredictably, and portfolios can’t be reshaped instantly without consequences. This article explains how a rational investor should behave when returns are partly predictable but trading is expensive.
The central idea is intuitive: instead of constantly jumping to a new “best” portfolio, investors should move gradually toward a target portfolio at a steady pace.
That target portfolio is the one the investor would choose in an ideal world with no trading costs — but adjusted for reality. Expected returns are scaled down to reflect trading costs, and risks are adjusted to reflect execution risk, meaning the danger of holding assets that are costly to trade when markets become volatile.
How fast the investor moves toward this target is just as important as the target itself. Trading too quickly leads to high costs; trading too slowly leaves the investor exposed to unwanted risk. The optimal trading speed balances these forces and determines how the existing portfolio inherited from the past is transformed into the desired one over time.
Unlike simpler investment rules that rebalance independently each period, the target portfolio here also prepares for future changes in market opportunities. In other words, it hedges against shifts in how attractive investments may become. Crucially, the choice of the target portfolio and the trading speed are tightly linked and both depend on execution risk.
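The resulting trading rule fits in a couple of lines: at each step, close a fixed fraction of the gap between the current portfolio and an aim portfolio. The aim portfolios and the trading speed below are placeholders; in the paper both are derived from the cost and execution-risk adjustments described above.

```python
import numpy as np

def rebalance_path(x0, aims, speed=0.15):
    """Trade gradually toward a (possibly time-varying) aim portfolio.

    x_{t+1} = x_t + speed * (aim_t - x_t); the speed trades off transaction
    costs (favouring slow trading) against execution risk (favouring fast).
    """
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for aim in aims:
        x = x + speed * (np.asarray(aim) - x)
        path.append(x.copy())
    return np.array(path)

# Example: inherit a concentrated position and drift toward a 60/40 aim.
print(rebalance_path(x0=[1.0, 0.0], aims=[[0.6, 0.4]] * 10)[-1])
```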
The authors show that these ideas apply across two realistic perspectives:
- one focused on absolute gains and losses, and
- another where decisions naturally scale with wealth and price levels.
To make the ideas concrete, they illustrate the framework with a practical example in which return forecasts combine short-term momentum signals with long-term value signals.
Option market-making and vol arbitrage: the agent’s view is factored into a realised-vs-implied vol model
Vladimir Lucic and Alex Tse
Market makers play a crucial role in options markets by continuously quoting prices at which they are willing to buy and sell. To do this well, they must manage risk carefully — especially because option prices depend heavily on volatility, which can shift suddenly and unpredictably.
This work introduces a new way for an options market maker to trade based on their own view of volatility, compared with what the broader market is implying through option prices. If a trader believes volatility is mispriced, they can potentially profit from this difference — a practice known as volatility arbitrage.
The authors derive a practical trading strategy that tells the market maker where to place their buy (bid) and sell (ask) prices; a stylised illustration follows the list below. These prices automatically adjust depending on:
- how attractive the expected volatility-related profit is, and
- how much risk the trader is willing to take.
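The flavour of such a rule can be conveyed with a deliberately simplified quoting function, in which the centre of the quotes leans with the trader's volatility edge and against existing inventory, and the spread reflects risk appetite. The functional form and parameters are generic illustrations, not the optimal quotes derived in the paper.

```python
def quotes(mid, vega, vol_view, vol_implied, inventory,
           risk_aversion=0.5, half_spread=0.05):
    """Illustrative bid/ask rule for an options market maker.

    - A positive (realised - implied) vol view makes buying vega attractive,
      so the centre of the quotes shifts up, and vice versa.
    - Existing inventory pushes quotes the other way to attract offsetting flow.
    """
    edge = vega * (vol_view - vol_implied)            # expected vol-arbitrage P&L
    centre = mid + edge - risk_aversion * vega * inventory
    return centre - half_spread, centre + half_spread  # (bid, ask)

# Trader thinks realised vol (22%) will exceed implied (20%) and holds no inventory:
print(quotes(mid=1.50, vega=0.40, vol_view=0.22, vol_implied=0.20, inventory=0.0))
```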
A key strength of the approach is flexibility. The strategy allows the trader to control risk not just in aggregate, but with respect to specific risk factors they care about — for example, sensitivity to certain shapes or shifts in the volatility surface. By doing so, the market maker becomes more robust to unexpected market moves that could otherwise lead to large losses.
In simple terms, the paper shows how an options trader can systematically quote prices that reflect both opportunity and caution, using a clear mathematical framework that is practical to implement. The result is a more resilient market-making strategy that adapts intelligently to changing volatility conditions rather than reacting blindly after the fact.
Risk Budgeting Allocation for Dynamic Risk Measures
Silvana M. Pesenti, Sebastian Jaimungal, Yuri F. Saporito, Rodrigo S. Targino
Investors often want to spread risk evenly across a portfolio, rather than simply spreading money evenly across assets. This idea — known as risk budgeting — helps ensure that no single asset or strategy dominates overall risk. However, most traditional approaches measure risk in a static way, ignoring how risk evolves over time.
This work introduces a new framework for risk budgeting that adapts dynamically as markets change. Instead of using fixed, one-shot risk measures, it relies on time-consistent risk measures that update sensibly from one period to the next.
To make this possible, the authors extend the classic idea of “risk contributions” — which describe how much each asset adds to total risk — into a dynamic setting. These dynamic risk contributions can be computed step by step over time, allowing the portfolio to rebalance in a controlled and consistent way.
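For reference, here is the classical static version that the paper generalises: with weights w and covariance matrix Σ, asset i contributes w_i (Σw)_i / sqrt(wᵀΣw) to portfolio volatility, and the contributions sum to that volatility. The dynamic framework replaces volatility with a time-consistent risk measure, which this sketch does not attempt.

```python
import numpy as np

def risk_contributions(w, cov):
    """Static risk contributions: RC_i = w_i * (cov @ w)_i / portfolio volatility.

    They sum to the portfolio volatility; a risk-budgeting portfolio chooses w
    so that each RC_i matches its prescribed budget.
    """
    w = np.asarray(w, dtype=float)
    sigma = np.sqrt(w @ cov @ w)
    return w * (cov @ w) / sigma

cov = np.array([[0.04, 0.01], [0.01, 0.09]])
rc = risk_contributions([0.6, 0.4], cov)
print(rc, rc.sum())   # per-asset contributions and total portfolio volatility
```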
A major theoretical result shows that, for a broad and important class of risk measures, the complex dynamic allocation problem can be broken down into a sequence of well-behaved optimization problems. This guarantees stable, unique solutions and makes the strategy practical to implement. The authors also show that realistic, self-financing portfolios naturally emerge from this framework.
Finally, the paper demonstrates how modern AI techniques can be used to compute these strategies efficiently. By exploiting special statistical properties of the chosen risk measures, the authors design a deep-learning “actor–critic” algorithm that learns how to allocate risk over time directly from data.
In essence, the work bridges rigorous risk theory and modern machine learning, offering a principled way to build portfolios that manage risk dynamically rather than reactively — an approach well suited to today’s fast-moving financial markets.
Interpretability in deep learning for finance: A case study for the Heston model
Damiano Brigo, Xiaoshan Huang, Andrea Pallavicini, Haitz Sáez de Ocáriz Borde
Deep learning is increasingly used in finance because it can handle complex patterns that traditional models struggle with. However, neural networks are often criticized as “black boxes”: they produce results, but it is hard to understand why they do so. This lack of transparency creates risks, especially in fields like finance where decisions must be validated, explained, and trusted.
This study addresses that problem by focusing on a well-known financial model — the Heston model — which describes how market volatility behaves. Because the Heston model is thoroughly understood, it provides an ideal testing ground for studying how deep-learning systems learn and make predictions.
The authors examine whether tools borrowed from game theory can be used to explain neural network behavior. These tools treat each input variable as a “player” in a cooperative game and measure how much each one contributes to the final output. The study compares different explanation methods and finds that global explanation techniques, especially Shapley values, are both practical and effective at revealing how the network maps inputs to outputs.
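The general workflow looks roughly like the snippet below, which trains a small stand-in regressor on synthetic data and asks the shap package for model-agnostic Shapley-value attributions; it assumes shap and scikit-learn are installed and is not the paper's Heston calibration setup.

```python
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

# Stand-in task: a small network learns y = f(x1, x2, x3) from synthetic data.
rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(2000, 3))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

# Model-agnostic Shapley estimates: each input is treated as a "player" and its
# average marginal contribution to the prediction is measured.
background = shap.sample(X, 100)
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:50])
print(np.abs(shap_values).mean(axis=0))   # global importance of each input
```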
Beyond interpretation, the analysis delivers an additional insight: explanation tools can help guide model design choices. The authors show that, for this task, simple fully connected neural networks outperform more complex convolutional architectures — both in predictive accuracy and in interpretability.
Overall, the work demonstrates that deep-learning models in finance do not have to remain opaque. With the right interpretability tools, they can be understood, validated, and improved — making them safer and more trustworthy for real-world financial applications.
Numerical analysis of a particle system for the calibrated Heston-type local stochastic volatility model
Modern finance relies heavily on computer simulations to understand risk and price complex products like options. One widely used framework is the Heston model, which captures how both prices and volatility evolve over time. In practice, this model is often enhanced to better match real market data, resulting in more realistic — but also more mathematically challenging — versions.
This study examines a sophisticated simulation technique based on Monte Carlo “particle” methods, where many simulated market paths interact with each other. Because the model is calibrated using market data in real time, the simulated paths are no longer independent; instead, they influence one another through their collective behavior. This makes the mathematics much harder and raises questions about whether the simulations are reliable at all.
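The interaction mechanism can be sketched as follows: at every time step, the conditional expectation of the variance given the current spot is estimated across all particles (here by crude binning) and fed back into each particle's next move, so no path evolves independently. The dynamics, parameters, and flat target volatility below are schematic placeholders rather than the calibrated model analysed in the paper.

```python
import numpy as np

def conditional_mean(values, conditioning, n_bins=20):
    """Estimate E[values | conditioning] across particles by simple binning."""
    bins = np.quantile(conditioning, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(bins, conditioning) - 1, 0, n_bins - 1)
    means = np.array([values[idx == b].mean() if np.any(idx == b) else values.mean()
                      for b in range(n_bins)])
    return means[idx]

def particle_step(S, V, local_vol, dt, rng, kappa=2.0, theta=0.04, xi=0.3):
    """One Euler step of an interacting particle system (schematic).

    The leverage-style factor divides the target local volatility by the
    particle estimate of sqrt(E[V | S]), which couples all paths together.
    """
    cond = np.maximum(conditional_mean(V, S), 1e-8)
    leverage = local_vol(S) / np.sqrt(cond)
    dW1 = rng.normal(scale=np.sqrt(dt), size=len(S))
    dW2 = rng.normal(scale=np.sqrt(dt), size=len(S))
    S_new = S * np.exp(leverage * np.sqrt(V) * dW1 - 0.5 * leverage**2 * V * dt)
    V_new = np.maximum(V + kappa * (theta - V) * dt + xi * np.sqrt(V) * dW2, 0.0)
    return S_new, V_new

rng = np.random.default_rng(5)
S, V = np.full(20_000, 100.0), np.full(20_000, 0.04)
for _ in range(50):
    S, V = particle_step(S, V, local_vol=lambda s: 0.2 * np.ones_like(s),
                         dt=1 / 250, rng=rng)
print(S.mean(), S.std())
```

Whether such self-interacting simulations can be trusted as the number of particles grows is exactly the question the paper answers.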
The authors address these concerns head-on. They prove that, under realistic conditions, the underlying mathematical model is well defined and that the particle simulations behave as they should. In particular, as the number of simulated paths increases, the system converges toward the intended market model — a property known as propagation of chaos, which is essential for trusting large-scale simulations.
The paper also studies the numerical algorithms used to run these simulations on a computer. It shows that standard, efficient time-stepping methods do converge reliably, even in the presence of irregular volatility behavior, and quantifies how fast this convergence occurs. These theoretical results are backed up by numerical experiments that confirm the predictions in practice.
In simple terms, the work provides strong mathematical reassurance that a widely used and highly realistic market simulation approach is both stable and accurate. This makes it safer to use in applications such as option pricing, risk management, and stress testing — areas where unreliable simulations could lead to costly mistakes.
Robust Pricing and Hedging of American Options in Continuous Time
Ivan Guo, Jan Obłój
American options are financial contracts that give their holder the right to exercise at any time before expiration, making them more flexible — and harder to value and hedge — than standard European options. This difficulty becomes even greater when markets are uncertain and the true behavior of prices is not known precisely.
This work studies how to price and hedge American options robustly, meaning in a way that remains valid even when we do not fully trust any single market model. Instead of assuming one precise description of how prices move, the authors allow for a whole range of possible market behaviors, constrained only by reasonable bounds on volatility.
A central result of the paper is a strong theoretical guarantee: the price of an American option under this uncertainty matches exactly the cost of hedging it. This is known as a pricing–hedging duality, and it reassures practitioners that the prices they compute are neither too optimistic nor too conservative.
The authors go further by considering markets where simpler options (European options) are already traded and have known prices. They show that the same robust pricing principles still apply, even when these options can be traded dynamically alongside the American option — a much more realistic market setting.
To achieve these results, the paper uses advanced probabilistic techniques, including clever ways of randomizing exercise decisions and mathematically linking different market scenarios. One particularly useful insight is that American options can be reinterpreted as European options in a suitably expanded setting, which greatly simplifies analysis and computation.
In essence, the work shows that even in highly uncertain markets, there is a sound and consistent way to price and hedge flexible financial products. This strengthens the theoretical foundations of modern risk management and provides confidence that robust methods can be used safely when traditional assumptions about markets are unreliable.
Implementing AI Foundation Models in Asset Management: A Practical Guide
Francesco A. Fabozzi and Marcos López de Prado
Large language models — the AI systems behind today’s conversational assistants — are beginning to reshape how investment firms work. This article explains, in practical and accessible terms, how asset management organizations can use these tools effectively and responsibly, without requiring technical expertise.
Rather than focusing on how the technology works internally, the article concentrates on how to put it to use. It discusses where large language models can add value across an investment firm, including research, internal operations, and communication with clients. Examples include helping analysts explore ideas more efficiently, supporting documentation and reporting, and improving how insights are explained to stakeholders.
A major emphasis is placed on good implementation practices. This includes developing strong “prompt literacy” (knowing how to ask AI the right questions), setting clear internal policies, and ensuring coordination between investment teams, technology staff, compliance, and management. The article stresses that successful adoption is as much an organizational challenge as a technical one.
Importantly, the discussion is grounded in the realities of asset management, such as accountability, regulatory expectations, and the need for human oversight. The goal is not automation for its own sake, but augmenting professional judgment while managing risk and responsibility.
Overall, the article serves as a practical guide for professionals and educators who want to understand how large language models can be thoughtfully integrated into investment processes — helping firms work more efficiently, communicate more clearly, and innovate without losing control or trust.
FX Market Making with Internal Liquidity
Alexander Barzykin, Robert Boyce, Eyal Neuman
Foreign exchange (FX) markets are changing, and many large financial institutions now offer internal trading venues, where client orders can be matched inside the firm rather than sent straight to the wider market. These so-called internal liquidity pools can reduce costs and improve efficiency — but they also introduce new strategic questions for market makers.
Market makers in this setting act as principals: they may choose to fill client orders themselves as part of their own risk management, or instead adjust prices in the broader market to encourage trades that offset client flow. Deciding what to do is not straightforward. The best strategy depends on market conditions, the firm’s appetite for risk, and how clients’ trading algorithms behave.
This paper studies how a market maker should optimally manage internal liquidity when faced with multiple, sometimes competing objectives — such as managing risk, earning profits, and meeting clients’ expectations about execution speed and reliability. Importantly, the market maker’s decisions affect not only their own outcomes, but also other liquidity providers who rely on predictable execution behavior.
By analyzing this problem in a unified framework, the authors identify key qualitative insights that help explain how internal liquidity strategies should adapt as conditions change. These insights clarify when it makes sense for a market maker to absorb client trades, when to pass risk to the external market, and how internal pricing should respond.
Overall, the work sheds light on a behind-the-scenes aspect of modern FX markets, helping explain how internal trading venues function and how thoughtful strategy design can improve both market efficiency and execution quality in real-world trading environments.
Enhancing Markowitz’s portfolio selection paradigm with machine learning
Marcos López de Prado, Joseph Simonian, Francesco A. Fabozzi and Frank J. Fabozzi
Modern portfolio management is built on ideas introduced decades ago, most notably the principle of balancing risk and return first formalized by Harry Markowitz. While these foundations remain sound, today’s financial markets are far more complex, data-rich, and fast-moving than those early models were designed for.
This paper explains how machine learning can be integrated into the traditional portfolio-selection framework to make it more robust and better suited to modern markets. Rather than replacing classical financial theory, the approach combines it with advanced data-driven techniques that can detect patterns and relationships too complex for standard methods.
By blending econometric models with machine learning, portfolio managers can improve several key tasks (a sketch of one such pipeline follows the list below):
- identifying potential sources of return (alpha),
- managing and forecasting risk more accurately, and
- optimizing portfolios using sophisticated risk measures that focus on extreme losses, not just average volatility.
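One simple way to wire these pieces together is sketched below, with stand-in components rather than the authors' recipe: an off-the-shelf ML model supplies return forecasts, a shrinkage estimator supplies the covariance matrix, and a classical mean-variance step turns them into weights.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(6)
n_assets, T, lookback = 5, 1000, 5
returns = rng.normal(0.0003, 0.01, size=(T, n_assets))   # toy return history

# 1) Alpha: one small ML model per asset, predicting the next return from the
#    previous five periods of all assets (purely illustrative features).
X = np.stack([returns[t - lookback:t].ravel() for t in range(lookback, T - 1)])
mu = np.empty(n_assets)
for i in range(n_assets):
    y = returns[lookback:T - 1, i]
    model = GradientBoostingRegressor(max_depth=2, n_estimators=50).fit(X, y)
    mu[i] = model.predict(returns[T - lookback:T].ravel()[None, :])[0]

# 2) Risk: shrinkage covariance estimate, more stable than the sample covariance.
cov = LedoitWolf().fit(returns).covariance_

# 3) Allocation: classical unconstrained mean-variance weights w = (1/gamma) * inv(cov) @ mu.
risk_aversion = 10.0
w = np.linalg.solve(cov, mu) / risk_aversion
print(np.round(w, 3))
```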
Machine learning’s ability to process large and complex datasets allows portfolios to adapt more dynamically as market conditions change. However, the paper also takes a realistic view, discussing the practical challenges involved — including implementation complexity, data quality issues, and the need for careful validation and oversight.
Overall, the message is balanced and pragmatic:
machine learning can significantly enhance traditional portfolio management, but its real value comes from thoughtful integration with established financial principles, not from abandoning them.
Optimal control of the non-linear Fokker-Planck equation
Ben Hambly, Philipp Jettkant
Many complex systems — from financial markets to crowds or interacting particles — can be described not by tracking every individual element, but by studying how the overall distribution of behavior evolves over time. This paper focuses on such a description, using a mathematical framework that models how large groups influenced by randomness and mutual interaction change collectively.
The authors study how such a system can be steered or controlled. In their setting, a decision-maker (for example, a regulator or policymaker) can influence the system gradually, with the goal of reducing some overall cost — such as instability, risk, or inefficiency — while the system itself is constantly affected by random shocks.
They first show that the mathematical model is well behaved: it has meaningful solutions, and optimal control strategies actually exist. They then derive a set of precise conditions — a kind of rulebook — that tells us when a control strategy is truly optimal. This is done through a powerful principle that plays a role similar to “first-order conditions” in classical optimization, but adapted to uncertain, evolving systems.
A key contribution is showing how this population-level control problem connects directly to the behavior of a representative individual within the system. This link makes the theory both more intuitive and more practical, since it allows complex collective dynamics to be understood through simpler individual-based models.
The results go beyond what was previously known by handling nonlinear interactions and randomness simultaneously, even producing new insights in simpler, non-random cases. To make the ideas concrete, the paper illustrates how the framework can be applied to government interventions in financial systems, showing how policy actions can be designed to stabilize markets under uncertainty.
In short, the work provides a rigorous foundation for controlling complex, noisy systems at the population level, with implications for finance, economics, and beyond — wherever collective behavior must be guided rather than micromanaged.
Counterexamples for FX Options Interpolations – Part I
Jherek Healy
Although more of a research note than a full-length paper, this work has practical significance.
Foreign exchange (FX) options are widely used to manage currency risk, and a crucial part of valuing and managing them is interpolating prices or volatilities — that is, smoothly filling in missing values between known market quotes.
This article shows that some of the commonly used interpolation methods in FX options can fail in subtle but important ways. The author presents concrete counterexamples where popular techniques produce misleading or inconsistent results, even though they may appear reasonable on the surface.
Why does this matter? Because these interpolations are not just used for pricing simple (“vanilla”) options. They also form the backbone of more complex pricing models for exotic derivatives, especially those based on local or stochastic volatility. If the interpolation is flawed, errors can propagate through the entire risk management system.
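The counterexamples themselves are in the note; the snippet below only illustrates the kind of basic sanity check a surface builder should run on any interpolation, namely that interpolated call prices stay convex in strike, since a negative butterfly means the implied risk-neutral density has gone negative. The quotes and the choice of a plain cubic spline are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def butterfly_violations(strikes, call_prices, grid):
    """Check convexity of interpolated call prices in strike.

    An arbitrage-free curve satisfies C(K-h) - 2*C(K) + C(K+h) >= 0 for all K;
    a negative value means the implied risk-neutral density is negative there.
    """
    interp = CubicSpline(strikes, call_prices)
    c = interp(grid)
    butterfly = c[:-2] - 2 * c[1:-1] + c[2:]
    return grid[1:-1][butterfly < 0]

# Sparse market quotes (illustrative numbers only).
strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
calls = np.array([21.0, 12.5, 6.0, 2.4, 0.9])
print(butterfly_violations(strikes, calls, np.linspace(80, 120, 81)))  # strikes failing the check, if any
```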
The key message is cautionary:
methods that are widely accepted in practice may still break down under certain conditions, leading to incorrect risk estimates and potential mispricing.
By identifying these failures explicitly, the article helps practitioners understand where existing tools are unreliable and encourages more robust approaches when building volatility surfaces and managing FX option risk.
In short, the work highlights that small technical assumptions in financial models can have large real-world consequences, especially in markets where precision and consistency are critical.
Arithmetic Average RFR Cap and Floor Valuation With the SABR Model
Patrick Hagan and Georgios Skoufis
Interest rates are increasingly based on risk-free reference rates (RFRs), which are calculated from daily overnight rates. Financial products tied to these rates — such as caps and floors that protect against rates rising too high or falling too low — can be surprisingly complex to value because they depend on daily averages over time, not just a single future rate.
This article presents a clean and robust way to price such products without relying on a specific market model. The authors derive a general pricing formula for caps and floors based on the arithmetic average of daily RFRs. Instead of approximations, the method rewrites the problem in terms of simpler building blocks that traders already understand and actively trade.
The key insight is that the complex payoff can be decomposed into:
- standard (“vanilla”) RFR caps and floors, and
- additional terms that correctly account for the curvature (or convexity) created by daily compounding.
This decomposition makes the pricing transparent and practical, because it relies only on instruments already quoted in the market.
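To fix ideas, the payoff being priced looks as follows, valued here by brute-force simulation under a toy Gaussian random-walk model for the daily rate. This only makes the product concrete; it is not a substitute for the paper's decomposition or its SABR-based formulas.

```python
import numpy as np

def arithmetic_rfr_caplet_mc(strike, accrual=0.25, n_days=63, n_paths=200_000,
                             r0=0.03, vol=0.01, notional=1.0, seed=7):
    """Monte Carlo value of a caplet on the *arithmetic average* of daily RFRs.

    Toy dynamics: the daily rate follows a driftless Gaussian random walk.
    Payoff at period end: notional * accrual * max(average(daily rates) - K, 0).
    Discounting is omitted for simplicity.
    """
    rng = np.random.default_rng(seed)
    shocks = rng.normal(scale=vol * np.sqrt(1 / 252), size=(n_paths, n_days))
    rates = r0 + np.cumsum(shocks, axis=1)     # daily overnight fixings
    avg = rates.mean(axis=1)                   # arithmetic average over the period
    payoff = notional * accrual * np.maximum(avg - strike, 0.0)
    return payoff.mean()

print(arithmetic_rfr_caplet_mc(strike=0.03))
```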
The authors then show how the formula works in a widely used interest-rate model, where it becomes fully explicit and easy to compute using observable market prices. They also verify an important consistency check — known as call–put parity — which confirms that the pricing framework is internally sound.
In simple terms, the paper shows how a seemingly complicated interest-rate product can be priced accurately, transparently, and consistently using familiar market instruments. This is particularly valuable in modern interest-rate markets, where robustness and model independence are essential for risk management and regulatory confidence.
Learning with Expected Signatures: Theory and Applications
Lorenzo Lucchese, Mikko S. Pakkanen, and Almut E. D. Veraart
Many modern datasets come in the form of streams over time — such as financial prices, sensor readings, or user activity logs. A central challenge is how to turn these complex, wiggly time series into a compact set of features that still captures what really matters.
This work focuses on a powerful mathematical tool called the expected signature, which transforms time-series data into a much smaller set of numbers while preserving an extraordinary amount of information. In fact, in theory, this transformation is rich enough to fully describe the underlying process that generated the data — without assuming any specific model. That makes it especially attractive for machine-learning applications where flexibility and robustness are important.
The paper provides new theoretical guarantees showing that the way the expected signature is computed from real, discrete data does in fact converge to its ideal, continuous-time version. This closes an important gap between theory and practice and gives a stronger foundation for using expected signatures in machine-learning systems.
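A minimal version of the estimator in question: compute the truncated (level-two) signature of each observed path from its increments and average across paths. The paper's guarantees concern the convergence of precisely this kind of discrete-data estimator to its continuous-time limit; the truncation depth and the toy data below are illustrative.

```python
import numpy as np

def signature_level2(path):
    """Truncated signature (levels 1 and 2) of a piecewise-linear path.

    Level 1: total increment. Level 2: iterated integrals S[i, j] = double
    integral of dX^i dX^j over s < t, built up increment by increment via
    Chen's relation.
    """
    d = path.shape[1]
    s1, s2 = np.zeros(d), np.zeros((d, d))
    for dx in np.diff(path, axis=0):
        s2 += np.outer(s1, dx) + 0.5 * np.outer(dx, dx)
        s1 += dx
    return s1, s2

def expected_signature(paths):
    """Average the per-path signatures over a sample of observed paths."""
    sigs = [signature_level2(p) for p in paths]
    return (np.mean([s[0] for s in sigs], axis=0),
            np.mean([s[1] for s in sigs], axis=0))

# Illustration: 2-d random walks standing in for observed data streams.
rng = np.random.default_rng(8)
paths = rng.normal(size=(500, 100, 2)).cumsum(axis=1)
lvl1, lvl2 = expected_signature(paths)
print(lvl1, np.round(lvl2, 3))
```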
Finally, the authors show that in a common and important case — when the data behaves like a “fair game” with no built-in trend — a small tweak to the method can significantly improve accuracy. Tests on real data confirm that this improvement leads to better predictions. Overall, the work strengthens the case for expected signatures as a reliable, model-free way to learn from time-series data, with practical benefits for machine learning across many fields.
