
End-to-End Solutions for Cryptocurrency Trading


Too Long; Didn't Read

FinRL's Use Case II demonstrates its prowess in optimizing portfolio allocation with DRL agents, achieving superior performance metrics like a Sharpe ratio of 2.36 and an annual return of 42.57%. Use Case III highlights FinRL's end-to-end solution for cryptocurrency trading, with the PPO algorithm showing a cumulative return of 103%.

Authors:

(1) Xiao-Yang Liu, Hongyang Yang, Columbia University (xl2427, [email protected]);

(2) Jiechao Gao, University of Virginia ([email protected]);

(3) Christina Dan Wang (Corresponding Author), New York University Shanghai ([email protected]).

Abstract and 1 Introduction

2 Related Works and 2.1 Deep Reinforcement Learning Algorithms

2.2 Deep Reinforcement Learning Libraries and 2.3 Deep Reinforcement Learning in Finance

3 The Proposed FinRL Framework and 3.1 Overview of FinRL Framework

3.2 Application Layer

3.3 Agent Layer

3.4 Environment Layer

3.5 Training-Testing-Trading Pipeline

4 Hands-on Tutorials and Benchmark Performance and 4.1 Backtesting Module

4.2 Baseline Strategies and Trading Metrics

4.3 Hands-on Tutorials

4.4 Use Case I: Stock Trading

4.5 Use Case II: Portfolio Allocation and 4.6 Use Case III: Cryptocurrencies Trading

5 Ecosystem of FinRL and Conclusions, and References

4.5 Use Case II: Portfolio Allocation

We reproduce a portfolio allocation strategy [21] that uses a DRL agent to allocate capital across a set of stocks and reallocate it periodically.
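
At each rebalancing step, the agent emits an action vector that must be mapped to portfolio weights. A softmax normalization is one common convention; the sketch below is illustrative and not necessarily the exact scheme of [21].

```python
import numpy as np

def actions_to_weights(actions: np.ndarray) -> np.ndarray:
    """Map a raw action vector to non-negative portfolio weights summing to
    one via softmax (a common convention in DRL portfolio allocation)."""
    shifted = actions - actions.max()  # shift for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

# Example: three assets, raw actions -> weights that sum to 1.
print(actions_to_weights(np.array([0.2, -1.0, 0.5])))
```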


FinRL improves reproducibility by allowing users to easily compare results across different settings, such as the pool of stocks to trade, the initial capital, and the model hyperparameters. The agent layer provides access to state-of-the-art DRL libraries, so users do not need to re-implement the neural networks; they can simply plug and play with any supported DRL algorithm, as the sketch below illustrates.
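
For instance, comparing algorithms reduces to changing a single string. The sketch below uses FinRL's Stable-Baselines3 agent wrapper; the module path follows recent FinRL releases and may differ by version, and `env_train` stands for a trading environment built in the earlier pipeline steps.

```python
# Hedged sketch: swapping DRL algorithms through FinRL's agent layer.
# `env_train` is assumed to come from FinRL's environment layer; the
# module path may differ across FinRL versions.
from finrl.agents.stablebaselines3.models import DRLAgent

agent = DRLAgent(env=env_train)

trained = {}
for algo in ["a2c", "td3", "ppo", "ddpg"]:
    model = agent.get_model(algo)  # same call, different algorithm name
    trained[algo] = agent.train_model(model=model,
                                      tb_log_name=algo,
                                      total_timesteps=50_000)
```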


Fig. 6 and Table 3 depict the backtesting performance on the Dow 30 constituent stocks. The training and testing periods are the same as in Use Case I. Each DRL agent, namely A2C [32], TD3 [14], PPO [42], and DDPG [26], outperforms the DJIA index and the min-variance strategy. A2C performs best, with a Sharpe ratio of 2.36 and an annual return of 42.57%; TD3 is second, with a Sharpe ratio of 2.28 and an annual return of 39.38%; PPO achieves a Sharpe ratio of 2.11 and an annual return of 36.17%, and DDPG a Sharpe ratio of 2.21 and an annual return of 36.01%. Using FinRL, users can therefore easily compare the agents' performance with each other and with the baselines.
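
For readers who want to sanity-check such numbers, both headline metrics can be recomputed from a daily account-value series; FinRL's backtesting module reports them automatically, so the standalone sketch below is purely illustrative (it assumes a zero risk-free rate, a common simplification).

```python
import numpy as np
import pandas as pd

def sharpe_and_annual_return(account_value: pd.Series,
                             periods_per_year: int = 252):
    """Annualized Sharpe ratio (zero risk-free rate assumed) and annualized
    return, computed from a daily account-value series."""
    returns = account_value.pct_change().dropna()
    sharpe = np.sqrt(periods_per_year) * returns.mean() / returns.std()
    years = len(returns) / periods_per_year
    annual_return = (account_value.iloc[-1] / account_value.iloc[0]) ** (1 / years) - 1
    return sharpe, annual_return
```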

4.6 Use Case III: Cryptocurrencies Trading

We use FinRL to reproduce [20] on the top-10 market-cap cryptocurrencies [1]. FinRL provides a full-stack development pipeline, giving users an end-to-end walk-through of how to download market data via APIs, preprocess the data, pick and fine-tune DRL algorithms, and obtain automated backtesting results.
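
The sketch below outlines those stages with FinRL's DataProcessor. Module paths, method names, and signatures follow recent FinRL releases and can differ across versions, so treat it as an outline rather than the exact API.

```python
# Hedged outline of the pipeline; FinRL's module paths and signatures
# vary across releases, so adapt to your installed version.
from finrl.meta.data_processor import DataProcessor

TICKERS = ["BTCUSDT", "ETHUSDT", "ADAUSDT"]  # a subset of the ten, for brevity

# 1. Download 5-minute market data through the Binance connector.
dp = DataProcessor(data_source="binance")
df = dp.download_data(ticker_list=TICKERS,
                      start_date="2021-10-01",
                      end_date="2021-10-30",
                      time_interval="5m")

# 2. Preprocess: fill missing bars and add technical indicators.
df = dp.clean_data(df)
df = dp.add_technical_indicator(df, ["macd", "rsi_30"])

# 3. Training a DRL agent (e.g. ElegantRL's PPO) and 4. automated
# backtesting then follow the agent-layer pattern shown earlier.
```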


Fig. 7 shows the backtesting performance on the ten cryptocurrencies, with transaction costs included. The training period is from 2021/10/01 to 2021/10/20 on a 5-minute basis, and the testing period is from 2021/10/21 to 2021/10/30. The portfolio trained with the PPO algorithm from the ElegantRL library achieves the highest cumulative return, 103%; the equally weighted portfolio strategy is second, with a cumulative return of 99%; and the BTC buy-and-hold strategy returns 93%. Therefore, the backtesting performance shows that FinRL successfully reproduces [20] with completeness and simplicity.
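
For reference, both baselines can be computed in a few lines. The sketch below assumes a (time × asset) close-price DataFrame named `prices` with a "BTC" column (hypothetical names) and uses fixed initial weights with no rebalancing, one common definition of these baselines.

```python
import numpy as np
import pandas as pd

def cumulative_return(prices: pd.DataFrame, weights: np.ndarray) -> float:
    """Cumulative return of a buy-and-hold portfolio with fixed initial
    weights, given a (time x asset) close-price DataFrame."""
    growth = prices / prices.iloc[0]         # growth of $1 invested per asset
    portfolio = growth.to_numpy() @ weights  # portfolio value over time
    return float(portfolio[-1] - 1.0)

# Hypothetical usage with a `prices` frame holding the ten coins:
# n = prices.shape[1]
# equal_weight = cumulative_return(prices, np.full(n, 1 / n))
# btc_hold = cumulative_return(prices, np.eye(n)[prices.columns.get_loc("BTC")])
```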


This paper is available on arxiv under CC BY 4.0 DEED license.


[1] The top 10 market cap cryptocurrencies as of Oct 2021 are: Bitcoin (BTC), Ethereum (ETH), Cardano (ADA), Binance Coin (BNB), Ripple (XRP), Solana (SOL), Polkadot (DOT), Dogecoin (DOGE), Avalanche (AVAX), Uniswap (UNI).