Configuring Reinforcement Learning Simulation: Agent Settings, Hyper-Parameters, & Market Insights

Written by reinforcement | Published 2025/01/01
Tech Story Tags: reinforcement-learning | simulation-configuration | agent-based-market-simulation | financial-market-modeling | continuous-double-auction | stylized-facts-in-finance | machine-learning-in-finance | rl-based-agents

TL;DR: This section outlines the simulation setups, detailing agent configurations for the training, testing, and untrained groups. It also covers the setups for the flash-sale and informed-LT simulations and the resulting market characteristics, giving a comprehensive view of hyper-parameter impacts and experimental design.

This is the last part of the research paper “Reinforcement Learning In Agent-based Market Simulation: Unveiling Realistic Stylized Facts And Behavior”. Use the table of links below to navigate to the other parts.

Table of Links

Part 1: Abstract & Introduction

Part 2: Important Concepts

Part 3: System Description

Part 4: Agents & Simulation Details

Part 5: Experiment Design

Part 6: Continual Learning

Part 7: Experiment Results

Part 8: Market and Agent Responsiveness to External Events

Part 9: Conclusion & References

Part 10: Additional Simulation Results

Part 11: Simulation Configuration

7.2 Simulation Configuration

Table 2 lists the configurations of all 14 agents across the training, testing, and untrained groups. The hyper-parameters are described in Section 3.2.
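To illustrate how such a grouping might be organized in code, the sketch below uses a hypothetical Python layout; the agent names, group sizes, and hyper-parameter values are placeholders for illustration and are not the actual entries of Table 2.

```python
# Hypothetical sketch of an agent-group configuration (not the actual Table 2 values).
# Agents are split into training, testing, and untrained groups; each agent carries
# example hyper-parameters in the spirit of Section 3.2 (placeholder names and values).

from dataclasses import dataclass

@dataclass
class AgentConfig:
    name: str                    # agent identifier (placeholder)
    group: str                   # "training", "testing", or "untrained"
    learning_rate: float = 1e-4  # assumed hyper-parameter, not taken from the paper
    discount: float = 0.99       # assumed hyper-parameter, not taken from the paper

# Example layout: 14 agents split across the three groups (counts are illustrative).
agents = (
    [AgentConfig(f"train_agent_{i}", "training") for i in range(6)]
    + [AgentConfig(f"test_agent_{i}", "testing") for i in range(4)]
    + [AgentConfig(f"untrained_agent_{i}", "untrained") for i in range(4)]
)

if __name__ == "__main__":
    for agent in agents:
        print(agent.group, agent.name, agent.learning_rate, agent.discount)
```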

Table 3 describes the detailed setups for the special simulations mentioned in Section 5.2 (Flash Sale and Informed LTs).
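As a rough sketch of how such special scenarios could be parameterized, the snippet below defines hypothetical settings for a flash-sale run and an informed-LT run; every field name and value here is an assumption for illustration only, not an entry from Table 3.

```python
# Hypothetical scenario parameters for the special simulations (illustrative only).
# A flash sale is sketched as a burst of sell orders starting at a given step; an
# informed-LT run gives liquidity takers a directional signal. All values are placeholders.

flash_sale_setup = {
    "scenario": "flash_sale",
    "trigger_step": 5_000,    # step at which the sell burst begins (assumed)
    "burst_size": 10_000,     # total shares sold during the burst (assumed)
    "burst_duration": 50,     # number of steps over which the burst is spread (assumed)
}

informed_lt_setup = {
    "scenario": "informed_lt",
    "num_informed_agents": 2,  # number of informed liquidity takers (assumed)
    "signal_strength": 0.8,    # probability the signal points in the true direction (assumed)
}

print(flash_sale_setup)
print(informed_lt_setup)
```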

Table 4 shows the market characteristics of the simulations generated from different sets of hyper-parameters.

Table 5 details the different simulation setups used to generate the reported results.

Authors:

(1) Zhiyuan Yao, Stevens Institute of Technology, Hoboken, New Jersey, USA ([email protected]);

(2) Zheng Li, Stevens Institute of Technology, Hoboken, New Jersey, USA ([email protected]);

(3) Matthew Thomas, Stevens Institute of Technology, Hoboken, New Jersey, USA ([email protected]);

(4) Ionut Florescu, Stevens Institute of Technology, Hoboken, New Jersey, USA ([email protected]).


This paper is available on arXiv under the CC BY-NC-SA 4.0 DEED license.

