Table of Links
3.3 Calibration of Market Model Parameters
6. Significance, Acknowledgments, and References
3.3 Calibration of Market Model Parameters
We use simulation-based inference to infer the parameter sets that most accurately match the features constructed by the embedding network. To verify that our simulation output remains realistic, we also compare our results against the stylised facts, but we do not use these data to train our networks. Simulation-based inference is a means of inferring the posterior probability distribution of a set of parameters without computing the likelihood, which is typically analytically intractable, and includes methods that use neural networks to perform this inference. We describe this in more detail below.
Simulation-based, or likelihood-free, methods avoid the need to calculate the likelihood function in Equation 9 by instead sampling from the joint distribution of simulation output and parameters [14]. These methods include approximate Bayesian computation (ABC), which, in its simplest form, samples from the joint distribution, p(θ, x) = p(θ)p(x | θ), whilst keeping only those parameter values whose simulated output reproduces historical values within some tolerance, ε. More recently, approaches that leverage density estimation techniques in deep learning, such as mixture density networks and normalising flows, have been shown to be both more efficient and more accurate than ABC methods [14]. These methods include neural posterior estimation (NPE), neural likelihood estimation (NLE), and neural ratio estimation (NRE) [24, 27, 32]. In this work we focus only on NPE.
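The rejection-sampling form of ABC described above can be sketched in a few lines. This is an illustrative toy, not the paper's calibration pipeline: the one-parameter Gaussian simulator, the standard-deviation summary statistic, and the uniform prior are all hypothetical stand-ins for the market model, its stylised-fact features, and the actual prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy simulator standing in for the market model:
# draws n returns from a zero-mean normal with volatility theta.
def simulator(theta, n=500):
    return rng.normal(0.0, theta, size=n)

def summary(x):
    # Single summary statistic: the sample standard deviation.
    return np.std(x)

# "Observed" data generated at a known ground-truth volatility.
theta_true = 0.8
x_obs = simulator(theta_true)
s_obs = summary(x_obs)

# Rejection ABC: sample theta from the prior p(theta), simulate
# x ~ p(x | theta), and keep theta only when the summary of the
# simulated data matches the observed summary within tolerance epsilon.
def rejection_abc(n_draws=20000, epsilon=0.02):
    thetas = rng.uniform(0.1, 2.0, size=n_draws)  # prior p(theta)
    accepted = [t for t in thetas
                if abs(summary(simulator(t)) - s_obs) < epsilon]
    return np.array(accepted)

posterior_samples = rejection_abc()
```

The accepted samples approximate the posterior p(θ | x_obs); shrinking ε tightens the approximation at the cost of a lower acceptance rate, which is exactly the inefficiency the neural methods below are designed to avoid.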
3.3.2 Neural Density Estimators. Simulation-based inference estimates the posterior distribution of parameters by sampling from the joint distribution of simulation output and parameter sets. To do so, we use amortised variational inference, which converts the problem of approximating a probability density into a more tractable optimisation problem. Namely, we use neural density estimators, where the posterior is the target density that we seek to estimate and the simulator is the source of the training data for the network. In this work we use normalising flows, specifically neural spline flows (NSF) and masked autoregressive flows (MAFs) [16, 29, 33]. For further details on neural density estimation, we refer the interested reader to the discussions in [31].
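To make the flow machinery concrete, the sketch below implements a single masked-autoregressive-style transform in two dimensions. The "conditioners" here are hand-coded functions rather than the masked neural networks a real MAF uses, so this only illustrates the autoregressive structure: each output coordinate is an affine transform of the base noise whose shift and log-scale depend only on the preceding coordinates, which makes the map invertible with a cheap log-determinant.

```python
import numpy as np

# Hypothetical hand-coded "conditioners" standing in for the masked neural
# network of a real MAF layer: each returns a shift m and log-scale a that
# may depend only on *preceding* coordinates (the autoregressive property).
def conditioner_1(_):
    return 0.5, 0.1                  # m1, a1: constants (no preceding inputs)

def conditioner_2(x1):
    return np.tanh(x1), 0.3 * x1     # m2(x1), a2(x1)

def maf_forward(u):
    """Map base noise u to data x; also return log|det(dx/du)|."""
    m1, a1 = conditioner_1(None)
    x1 = u[0] * np.exp(a1) + m1
    m2, a2 = conditioner_2(x1)
    x2 = u[1] * np.exp(a2) + m2
    return np.array([x1, x2]), a1 + a2   # log-det is the sum of log-scales

def maf_inverse(x):
    """Recover u from x, one coordinate at a time (sequential inversion)."""
    m1, a1 = conditioner_1(None)
    u1 = (x[0] - m1) * np.exp(-a1)
    m2, a2 = conditioner_2(x[0])
    u2 = (x[1] - m2) * np.exp(-a2)
    return np.array([u1, u2])

u = np.array([0.7, -1.2])
x, logdet = maf_forward(u)
u_rec = maf_inverse(x)   # matches u up to floating-point error
```

The change-of-variables formula then gives the model density, log p(x) = log p_base(u) − logdet; stacking several such layers (with permuted coordinate orderings) and learning the conditioners by maximum likelihood on (θ, x) pairs from the simulator is, in outline, how NPE fits the posterior.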
Authors:
(1) Namid R. Stillman, Simudyne Limited, United Kingdom ([email protected]);
(2) Rory Baggott, Simudyne Limited, United Kingdom ([email protected]);
(3) Justin Lyon, Simudyne Limited, United Kingdom ([email protected]);
(4) Jianfei Zhang, Hong Kong Exchanges and Clearing Limited, Hong Kong ([email protected]);
(5) Dingqiu Zhu, Hong Kong Exchanges and Clearing Limited, Hong Kong ([email protected]);
(6) Tao Chen, Hong Kong Exchanges and Clearing Limited, Hong Kong ([email protected]);
(7) Perukrishnen Vytelingum, Simudyne Limited, United Kingdom ([email protected]).
This paper is available on arxiv under CC BY 4.0 DEED license.