4 EXPERIMENTS
We use the same neural network architecture for both the ZI trader model and the extended Chiarella model: the embedding network has four layers with a hidden size of 64 and an output dimension of 256, and the neural density estimator has three layers with a hidden size of 128. We trained with a batch size of 128, a learning rate of 0.001 with a plateau scheduler, a dropout rate of 0.1, and early stopping; on average, training converged after approximately 80 epochs. We performed hyperparameter optimisation with a coarse grid search and fixed the model architectures to those with the lowest average test loss for both models. We found that the choice of normalising flow was the only factor that significantly differentiated performance across the architectures used on either simulation model.
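As a rough illustration of this setup, the sketch below configures the embedding network and the training hyperparameters in PyTorch. The layer sizes, dropout rate, batch size, learning rate, scheduler, and early-stopping choice follow the text; the input dimension, function names, and the choice of library for the flow-based density estimator are assumptions, not details taken from the paper.

```python
# Minimal sketch of the shared architecture, assuming a PyTorch implementation.
import torch
import torch.nn as nn


def build_embedding_net(input_dim: int, hidden: int = 64, out_dim: int = 256,
                        n_layers: int = 4, dropout: float = 0.1) -> nn.Module:
    """Four-layer MLP embedding network: hidden size 64, output dimension 256."""
    layers, width = [], input_dim
    for _ in range(n_layers):
        layers += [nn.Linear(width, hidden), nn.ReLU(), nn.Dropout(dropout)]
        width = hidden
    layers.append(nn.Linear(width, out_dim))
    return nn.Sequential(*layers)


embedding_net = build_embedding_net(input_dim=100)  # input_dim is illustrative

# The 256-dimensional embedding would feed the neural density estimator
# (three layers, hidden size 128), e.g. a normalising flow built with a
# package such as sbi or nflows -- an assumption about tooling, not stated above.

# Training setup as described: batch size 128, learning rate 0.001 with a
# plateau scheduler, and early stopping on the validation loss.
optimizer = torch.optim.Adam(embedding_net.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min")
batch_size = 128
```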
Authors:
(1) Namid R. Stillman, Simudyne Limited, United Kingdom ([email protected]);
(2) Rory Baggott, Simudyne Limited, United Kingdom ([email protected]);
(3) Justin Lyon, Simudyne Limited, United Kingdom ([email protected]);
(4) Jianfei Zhang, Hong Kong Exchanges and Clearing Limited, Hong Kong ([email protected]);
(5) Dingqiu Zhu, Hong Kong Exchanges and Clearing Limited, Hong Kong ([email protected]);
(6) Tao Chen, Hong Kong Exchanges and Clearing Limited, Hong Kong ([email protected]);
(7) Perukrishnen Vytelingum, Simudyne Limited, United Kingdom ([email protected]).
This paper is available on arxiv under CC BY 4.0 DEED license.