ABSTRACT
The ability to construct a realistic simulator of financial exchanges, including reproducing the dynamics of the limit order book, can give insight into many counterfactual scenarios, such as a flash crash, a margin call, or changes in macroeconomic outlook. In recent years, agent-based models have been developed that reproduce many features of an exchange, as summarised by a set of stylised facts and statistics. However, the ability to calibrate simulators to a specific period of trading remains an open challenge. In this work, we develop a novel approach to the calibration of market simulators by leveraging recent advances in deep learning, specifically using neural density estimators and embedding networks. We demonstrate that our approach is able to correctly identify high-probability parameter sets when applied to both synthetic and historical data, without relying on manually selected or weighted ensembles of stylised facts.
1 INTRODUCTION
Most major financial markets for equities, commodities and currencies, as well as other asset classes, operate on public exchanges where individuals place orders to buy or sell an asset. These markets are typically hosted on a centralised exchange which provides a platform for traders to place different types of orders for a security. Limit orders are orders to buy or sell at a limit price and are stored in a record kept by the exchange, known as the limit order book (LOB), while market orders are orders to buy or sell a volume immediately from the LOB [23]. The LOB is updated each time an order is placed, amended, or cancelled. During a trading day, the market goes through a number of phases, e.g., the pre-open call auction or the continuous double auction (CDA). Each phase has protocols that define which orders can be placed, how the orders are handled, when and how the LOB clears, and how those trades are priced. The underlying principle of the LOB is shown in Figure 1.
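To make the matching mechanics concrete, the following is a minimal, illustrative sketch of a limit order book with price-time priority. The class and method names are hypothetical, and the sketch omits crossing logic, amendments, and auction phases; it is not the protocol of any particular exchange:

```python
import heapq

class LimitOrderBook:
    """Toy limit order book with price-time priority (illustrative only)."""

    def __init__(self):
        self._bids = []  # max-heap via negated price: (-price, seq, qty)
        self._asks = []  # min-heap: (price, seq, qty)
        self._seq = 0    # arrival counter; breaks ties at equal prices

    def add_limit(self, side, price, qty):
        """Rest a limit order on the book (no crossing logic, for brevity)."""
        self._seq += 1
        if side == "buy":
            heapq.heappush(self._bids, (-price, self._seq, qty))
        else:
            heapq.heappush(self._asks, (price, self._seq, qty))

    def market_order(self, side, qty):
        """Consume resting liquidity; returns a list of (price, qty) fills."""
        book = self._asks if side == "buy" else self._bids
        fills = []
        while qty > 0 and book:
            key, seq, avail = heapq.heappop(book)
            price = key if side == "buy" else -key
            traded = min(qty, avail)
            fills.append((price, traded))
            qty -= traded
            if avail > traded:  # partially filled resting order stays on the book
                heapq.heappush(book, (key, seq, avail - traded))
        return fills

    def best_bid(self):
        return -self._bids[0][0] if self._bids else None

    def best_ask(self):
        return self._asks[0][0] if self._asks else None


lob = LimitOrderBook()
lob.add_limit("buy", 99.0, 10)
lob.add_limit("buy", 98.5, 20)
lob.add_limit("sell", 100.0, 5)
lob.add_limit("sell", 100.0, 15)  # same price: time priority applies
print(lob.best_bid(), lob.best_ask())  # 99.0 100.0
print(lob.market_order("buy", 8))      # [(100.0, 5), (100.0, 3)]
```

The heap keys encode price priority, while the arrival counter enforces time priority among orders at the same price, mirroring the queueing behaviour described above.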
Securities traded on an exchange exhibit many of the characteristic signatures of financial time series, including being nonstationary, non-linear, and stochastic [34]. Given the complexity of price dynamics, there is a strong need to explore the effect of counterfactual scenarios in order to minimise risk at the level of both the investor and the exchange. These counterfactual scenarios include, for example, the market impact of an investor liquidating a very large position, a significant macroeconomic announcement such as labour market data, or a trading error such as a “fat-finger mistake” [19].
Market simulators, which seek to reproduce the behaviours of the exchange, have been developed to better evaluate the impact of these and many other scenarios. These simulators aim to reproduce the underlying data-generating process and, hence, can be thought of as a form of generative modelling. Traditional methods have used empirical observations and domain knowledge to build models that incorporate approximations to trader behaviours in closed-form equations. These methods are often built as agent-based models (ABMs), whereby a set of individual agents (traders) interact with one another according to a pre-defined set of rules (the exchange protocol) [20, 36]. We note that these models are typically structured such that the interaction network is many-to-one, i.e., all agents are connected to the exchange but not to one another. ABMs are appealing as they give an explicit means to both control and explain observations of market dynamics. For example, the strength of price fluctuations can be attributed to specific trader behaviours. Other methods, including those that rely on deep generative models, such as generative adversarial networks (GANs), are not as amenable to controlling for specific scenarios and typically rely on post-hoc analysis for explainability and control [11, 25, 26].
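As an illustration of the many-to-one structure, here is a toy sketch, not one of the models calibrated in this work: independent agents submit buy/sell decisions to a single exchange, and an impact parameter controls the strength of the resulting price fluctuations. The function name, the impact rule, and the parameter names are all assumptions made for illustration:

```python
import random

def simulate_abm(n_agents=100, n_steps=500, chi=0.05, sigma=0.01, seed=0):
    """Minimal many-to-one agent-based market (illustrative assumption).

    Each step, every agent independently submits +1 (buy) or -1 (sell) to
    the exchange; agents interact only through the exchange, never directly.
    The exchange moves the log-price in proportion to the net order
    imbalance, a stylised price-impact rule.

    chi   -- impact strength: how strongly imbalance moves the price
    sigma -- idiosyncratic noise in each agent's decision threshold
    """
    rng = random.Random(seed)
    log_price = 0.0
    prices = []
    for _ in range(n_steps):
        imbalance = sum(1 if rng.random() < 0.5 + rng.gauss(0, sigma) else -1
                        for _ in range(n_agents))
        log_price += chi * imbalance / n_agents
        prices.append(log_price)
    return prices

prices = simulate_abm()
print(len(prices))  # 500 simulated log-prices
```

Because the impact parameter `chi` scales every price increment, larger values produce visibly stronger fluctuations, which is the sense in which an observed market property can be attributed to a specific behavioural parameter.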
While ABMs are typically easier to control and interpret than equivalent deep generative methods, they suffer from the so-called “simulation-reality gap” of generative methods, which refers to the disconnect between simulated and historical data [15, 44]. This is typically due to the flexibility of the model output, which allows a large variety of different scenarios to be generated but requires that parameters are appropriately constrained to best reproduce observations. Selecting the parameter sets that best match observations, also known as calibration, remains a significant challenge to the deployment of market simulators, as well as for the field of agent-based models more broadly. In this work, we present a novel approach for calibrating market simulators using simulation-based inference, which combines Bayesian inference with deep learning [14]. We demonstrate that our method is able to infer parameters with high accuracy whilst also providing the entire posterior distribution over parameters, namely the probability distribution, P(θ|x), for a set of parameters, θ, conditioned on the set of observations, x. We use our method to calibrate two models of market simulations and infer parameters for historical data, demonstrating that we are able to reproduce many of the stylised facts observed in the data.
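Our approach uses neural density estimators with embedding networks; as a simpler stand-in that conveys the same idea of approximating P(θ|x) by comparing simulator output with observations, the sketch below uses rejection approximate Bayesian computation (ABC) on a toy Gaussian simulator. The function names, the toy model, and the summary statistic are illustrative assumptions, not the method used in this paper:

```python
import random
import statistics

def simulator(theta, n=200, rng=None):
    """Toy simulator: returns are Gaussian with volatility theta."""
    rng = rng or random.Random()
    return [rng.gauss(0.0, theta) for _ in range(n)]

def summary(x):
    """Hand-crafted summary statistic, standing in for a learned embedding."""
    return statistics.pstdev(x)

def abc_posterior(x_obs, n_draws=5000, eps=0.02, seed=0):
    """Rejection ABC: draw theta from the prior, simulate, and keep draws
    whose summary statistic lands within eps of the observed one. The kept
    draws are samples from an approximation of P(theta | x)."""
    rng = random.Random(seed)
    s_obs = summary(x_obs)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 1.0)  # uniform prior over theta
        s_sim = summary(simulator(theta, rng=rng))
        if abs(s_sim - s_obs) < eps:
            accepted.append(theta)
    return accepted

# "observed" data generated with a known ground-truth parameter
x_obs = simulator(0.3, rng=random.Random(42))
post = abc_posterior(x_obs)
print(statistics.mean(post))  # the posterior mean should sit near 0.3
```

The neural approach replaces the hand-crafted summary with a learned embedding network and the rejection step with a trained conditional density estimator, but the object recovered is the same: samples from, or an explicit form of, P(θ|x).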
This paper is available on arXiv under the CC BY 4.0 DEED license.
Authors:
(1) Namid R. Stillman, Simudyne Limited, United Kingdom ([email protected]);
(2) Rory Baggott, Simudyne Limited, United Kingdom ([email protected]);
(3) Justin Lyon, Simudyne Limited, United Kingdom ([email protected]);
(4) Jianfei Zhang, Hong Kong Exchanges and Clearing Limited, Hong Kong ([email protected]);
(5) Dingqiu Zhu, Hong Kong Exchanges and Clearing Limited, Hong Kong ([email protected]);
(6) Tao Chen, Hong Kong Exchanges and Clearing Limited, Hong Kong ([email protected]);
(7) Perukrishnen Vytelingum, Simudyne Limited, United Kingdom ([email protected]).