
Calibration of Market Model Parameters


Table of Links

Abstract and 1. Introduction

2. Relevant Work

3. Methods

3.1 Models

3.2 Summarising Features

3.3 Calibration of Market Model Parameters

4. Experiments

4.1 Zero Intelligence Trader

4.2 Extended Chiarella

4.3 Historical Data

5. Discussion & Future Work

6. Significance, Acknowledgments, and References

3.3 Calibration of Market Model Parameters

We use simulation-based inference to infer the parameter sets that most closely match the features constructed by the embedding network. To verify that our simulation output remains realistic, we also compare our results against the stylised facts, although we do not use these data to train our networks. Simulation-based inference is a means of inferring the posterior probability distribution of a set of parameters without computing the likelihood, which is typically analytically intractable; it includes methods that use neural networks to perform this inference. We describe this in more detail below.



3.3.1 Likelihood-Free Inference. Simulation-based, or likelihood-free, methods avoid the need to calculate the likelihood function in Equation 9 by instead sampling from the joint distribution of simulation outputs and parameters [14]. These methods include ABC, which, in its simplest form, samples from the joint distribution 𝑃(πœƒ, x) = 𝑃(πœƒ)𝑃(x | πœƒ) and keeps only those parameter values whose simulated output reproduces the historical values within some tolerance πœ–. More recently, approaches that leverage density-estimation techniques from deep learning, such as mixture density networks and normalising flows, have been shown to be both more efficient and more accurate than ABC methods [14]. These methods include neural posterior estimation (NPE), neural likelihood estimation (NLE), and neural ratio estimation (NRE) [24, 27, 32]. In this work we focus only on NPE.
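The rejection-ABC scheme described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's calibration pipeline: the simulator, the uniform prior, the observed summary x_obs, and the tolerance πœ– are all stand-in assumptions chosen so the example runs quickly.

```python
import random
import statistics

def simulator(theta, n=50, rng=random):
    # Toy simulator: the sample mean of n Gaussian draws with unknown mean theta.
    return statistics.fmean(rng.gauss(theta, 1.0) for _ in range(n))

def abc_rejection(x_obs, n_samples=20000, eps=0.05, seed=0):
    """Simplest ABC: draw theta from the prior P(theta), simulate x ~ P(x | theta),
    and keep theta whenever the simulated summary lies within eps of the observed one."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_samples):
        theta = rng.uniform(-2.0, 2.0)      # sample from the prior P(theta)
        x_sim = simulator(theta, rng=rng)   # sample from the simulator P(x | theta)
        if abs(x_sim - x_obs) < eps:        # tolerance epsilon
            accepted.append(theta)
    return accepted

x_obs = 0.5                                 # hypothetical observed summary statistic
posterior = abc_rejection(x_obs)
print(len(posterior), statistics.fmean(posterior))
```

The accepted values approximate draws from the posterior; shrinking πœ– tightens the approximation at the cost of a lower acceptance rate, which is exactly the inefficiency the neural methods below are designed to avoid.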


3.3.2 Neural Density Estimators. Simulation-based inference estimates the posterior distribution of parameters by sampling from the joint distribution of simulation outputs and parameter sets. To do so, we use amortised variational inference, which converts the problem of approximating a probability density into a more tractable optimisation problem. Namely, we use neural density estimators, where the posterior is the target density that we seek to estimate and the simulator is the source of the training data for the network. In this work we use normalising flows, specifically neural spline flows (NSF) and masked autoregressive flows (MAFs) [16, 29, 33]. For further details on neural density estimation, we refer the interested reader to the discussions in [31].
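The amortisation idea can be illustrated with a deliberately simple stand-in for a flow: fit a conditional Gaussian q(πœƒ | x) = N(a·x + b, s²) to simulated (πœƒ, x) pairs by closed-form least squares. The linear simulator, the Gaussian prior, and the linear-Gaussian posterior family are all toy assumptions; NPE as used in the paper replaces this with a normalising flow (NSF or MAF) trained by gradient descent, but the workflow — simulate pairs from the joint, fit a conditional density of πœƒ given x, then query it for any observation — is the same.

```python
import random
import statistics

def simulate(n, seed=0):
    """Draw (theta, x) pairs from the joint distribution: theta from the
    prior, then x from a toy simulator conditioned on theta."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        theta = rng.gauss(0.0, 1.0)          # prior: theta ~ N(0, 1)
        x = theta + rng.gauss(0.0, 0.5)      # toy simulator: x ~ N(theta, 0.25)
        pairs.append((theta, x))
    return pairs

def fit_amortised_gaussian(pairs):
    """Fit q(theta | x) = N(a*x + b, s^2) by least squares -- a closed-form
    stand-in for training a conditional neural density estimator."""
    thetas = [t for t, _ in pairs]
    xs = [x for _, x in pairs]
    mx, mt = statistics.fmean(xs), statistics.fmean(thetas)
    cov = sum((x - mx) * (t - mt) for t, x in pairs) / len(pairs)
    a = cov / statistics.pvariance(xs)
    b = mt - a * mx
    s = statistics.pstdev([t - (a * x + b) for t, x in pairs])
    return a, b, s

a, b, s = fit_amortised_gaussian(simulate(50000))
# Once fitted, the posterior for ANY observation is available without
# further simulation -- this is what "amortised" buys us.
x_obs = 1.0
print(a * x_obs + b, s)   # approximate posterior mean and std for x_obs
```

For this linear-Gaussian toy the exact posterior mean is 0.8·x, so the fitted slope can be checked against the analytic answer; a flow-based estimator offers the same amortised query but with a flexible, non-Gaussian posterior family.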


Authors:

(1) Namid R. Stillman, Simudyne Limited, United Kingdom ([email protected]);

(2) Rory Baggott, Simudyne Limited, United Kingdom ([email protected]);

(3) Justin Lyon, Simudyne Limited, United Kingdom ([email protected]);

(4) Jianfei Zhang, Hong Kong Exchanges and Clearing Limited, Hong Kong ([email protected]);

(5) Dingqiu Zhu, Hong Kong Exchanges and Clearing Limited, Hong Kong ([email protected]);

(6) Tao Chen, Hong Kong Exchanges and Clearing Limited, Hong Kong ([email protected]);

(7) Perukrishnen Vytelingum, Simudyne Limited, United Kingdom ([email protected]).


This paper is available on arxiv under CC BY 4.0 DEED license.

