
How FinRL's Pipeline Enhances Trading Performance in Real-time Markets

by Reinforcement Technology Advancements
June 8th, 2024
Too Long; Didn't Read

FinRL's Training-Testing-Trading Pipeline revolutionizes financial training by integrating backtesting and live trading APIs, reducing the simulation-to-reality gap for more effective trading strategies in real-world markets.
STORY’S CREDIBILITY

Academic Research Paper

Part of HackerNoon's growing list of open-source research papers, promoting free access to academic material.

Authors:

(1) Xiao-Yang Liu, Hongyang Yang, Columbia University (xl2427,hy2500@columbia.edu);

(2) Jiechao Gao, University of Virginia (jg5ycn@virginia.edu);

(3) Christina Dan Wang (Corresponding Author), New York University Shanghai (christina.wang@nyu.edu).

Abstract and 1 Introduction

2 Related Works and 2.1 Deep Reinforcement Learning Algorithms

2.2 Deep Reinforcement Learning Libraries and 2.3 Deep Reinforcement Learning in Finance

3 The Proposed FinRL Framework and 3.1 Overview of FinRL Framework

3.2 Application Layer

3.3 Agent Layer

3.4 Environment Layer

3.5 Training-Testing-Trading Pipeline

4 Hands-on Tutorials and Benchmark Performance and 4.1 Backtesting Module

4.2 Baseline Strategies and Trading Metrics

4.3 Hands-on Tutorials

4.4 Use Case I: Stock Trading

4.5 Use Case II: Portfolio Allocation and 4.6 Use Case III: Cryptocurrencies Trading

5 Ecosystem of FinRL and Conclusions, and References

3.5 Training-Testing-Trading Pipeline

The "training-testing" workflow used by conventional machine learning methods falls short for financial tasks. It splits the data into a training set and a testing set: users select features and tune parameters on the training data, then evaluate on the testing data. However, financial tasks suffer a simulation-to-reality gap between testing performance and live-market performance, because the testing here is offline backtesting, while the user's goal is to place orders in a real-world market.
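The key constraint of the "training-testing" split for time series is that the testing set must come strictly after the training set in time, since backtesting simulates trading on unseen future data. A minimal sketch (the function name and data layout are illustrative, not FinRL's actual API):

```python
from datetime import date, timedelta

def chronological_split(rows, test_ratio=0.2):
    """Split date-sorted (date, price) rows into past (train) and future (test).

    Unlike a shuffled split, no future observation may leak into training.
    """
    rows = sorted(rows, key=lambda r: r[0])
    cut = int(len(rows) * (1 - test_ratio))
    return rows[:cut], rows[cut:]

# Ten days of toy closing prices.
prices = [(date(2020, 1, 1) + timedelta(days=i), p)
          for i, p in enumerate([100, 101, 99, 102, 103, 105, 104, 106, 108, 107])]
train, test = chronological_split(prices)
# Every training date strictly precedes every testing date.
assert train[-1][0] < test[0][0]
```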


FinRL employs a "training-testing-trading" pipeline to reduce the simulation-to-reality gap. We use historical data (time series) for the "training-testing" part, just as in conventional machine learning tasks; this testing period serves the backtesting purpose. For the "trading" part, we use live trading APIs, such as CCXT, Alpaca, or Interactive Brokers, allowing users to carry out trades directly in a trading system. FinRL thus connects directly with live trading APIs: (1) it downloads live data, (2) feeds the data to the trained DRL model to obtain trading positions, and (3) allows users to place trades.
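The three-step trade loop can be sketched as follows. The `Broker` class and `policy` function below are hypothetical stand-ins, not FinRL's or any broker's real API; in practice the client would wrap a live API such as Alpaca or CCXT, and the policy would be the trained DRL agent:

```python
class Broker:
    """Hypothetical stand-in for a live trading API client (e.g. Alpaca, CCXT)."""
    def latest_prices(self, tickers):
        # (1) download live data; stubbed with fixed quotes here
        return {t: 100.0 for t in tickers}
    def submit_order(self, ticker, shares):
        print(f"order: {ticker} {shares:+d} shares")

def policy(state):
    # (2) the trained DRL model maps market state to target positions;
    # stubbed to "hold 10 shares of everything"
    return {t: 10 for t in state}

def trade_once(broker, tickers, holdings):
    state = broker.latest_prices(tickers)   # (1) live data
    targets = policy(state)                 # (2) model -> positions
    for t in tickers:                       # (3) place trades
        delta = targets[t] - holdings.get(t, 0)
        if delta != 0:
            broker.submit_order(t, delta)
            holdings[t] = targets[t]
    return holdings

holdings = trade_once(Broker(), ["AAPL", "MSFT"], {"AAPL": 4})
```

The same loop runs against historical data during backtesting and against the live API during trading, which is what lets one trained agent move between the two regimes.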


Fig. 4 illustrates the “training-testing-trading” pipeline:


Step 1). Retrain the agent on a training window.


Step 2). Evaluate the trained agent on a testing window, tuning hyperparameters iteratively.


Step 3). Use the trained agent to trade in a trading window.


A rolling window is used in the training-testing-trading pipeline because investors and portfolio managers need to retrain the model periodically as time moves forward. FinRL provides flexible choices of rolling window, such as monthly, quarterly, or yearly windows, or windows of the user's own specification.
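The rolling scheme can be sketched as a generator of consecutive (train, test, trade) date ranges, each shifted forward by one trading period. The window lengths below (12-month training, 3-month testing and trading) are illustrative defaults, not values prescribed by FinRL:

```python
from datetime import date

def add_months(d, n):
    """Shift a date forward by n months (day-of-month 1 assumed for simplicity)."""
    m = d.month - 1 + n
    return date(d.year + m // 12, m % 12 + 1, d.day)

def rolling_windows(start, end, train_m=12, test_m=3, trade_m=3):
    """Yield (train, test, trade) half-open date ranges, rolling by trade_m months."""
    windows = []
    t0 = start
    while True:
        t1 = add_months(t0, train_m)   # train: [t0, t1)
        t2 = add_months(t1, test_m)    # test:  [t1, t2)
        t3 = add_months(t2, trade_m)   # trade: [t2, t3)
        if t3 > end:
            break
        windows.append(((t0, t1), (t1, t2), (t2, t3)))
        t0 = add_months(t0, trade_m)   # roll the whole pipeline forward
    return windows

wins = rolling_windows(date(2020, 1, 1), date(2022, 1, 1))
```

Each iteration retrains on the most recent year, backtests on the following quarter, then trades the quarter after that, matching the periodic-retraining motivation above.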


This paper is available on arxiv under CC BY 4.0 DEED license.

