Decreasing Latency for High Frequency Crypto Arbitrage Trading

by Petr De-Monderik (@petrufel), March 23rd, 2023



There are many code optimizations that can improve the speed of data processing in high-frequency trading (HFT), such as switching from Python to C++ or using specialized libraries. In TradFi HFT, this also extends to FPGA-based network adapters with the trading logic written in Verilog.

In crypto trading, there is usually no option to install your own network hardware in a cloud server, even if you have a dedicated/bare-metal instance. But other optimizations are available through network/telecom service providers that run their own autonomous systems (AS) connected to cloud providers. Their routes can shave off milliseconds, or even tens of milliseconds, of latency between a cloud provider’s regions, letting you move data from one remote crypto exchange to another faster than your competitors.

I don’t work for any network/telecom service provider and have no active contracts with the network provider I use in my example below. This article is my own initiative, and I hope it can be helpful for people in crypto trading and for junior network engineers.

I wanted to interconnect the Binance and Bitmex locations to get market data from a remote exchange with predictable latency and then process it with a local trading system. Amazon allows inter-region VPC peering, which provides connectivity between your own private IPs in different regions. When I first connected two AWS regions with VPC peering, I saw some latency fluctuations. The volatility was not huge, but the latency shifted every couple of hours because AWS has multiple channels between regions and balances traffic over those links. In the end, though, my Tokyo VM, located in the same region as Binance, gained a 49ms latency advantage for getting Bitmex market data!
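For reference, inter-region VPC peering can also be set up from the AWS CLI. A rough sketch (all IDs here are placeholders, not values from my setup):

# Request peering from the Dublin VPC to the Tokyo VPC
aws ec2 create-vpc-peering-connection --region eu-west-1 --vpc-id vpc-11111111 --peer-vpc-id vpc-22222222 --peer-region ap-northeast-1

# Accept the request on the Tokyo side
aws ec2 accept-vpc-peering-connection --region ap-northeast-1 --vpc-peering-connection-id pcx-33333333

You also need to add routes for the peer CIDR to each VPC’s route table before the private IPs can reach each other.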

AWS VPC Peering gave me 201.5ms latency:

--- 10.10.4.28 ping statistics ---

255 packets transmitted, 255 received, 0% packet loss, time 254274ms

rtt min/avg/max/mdev = 199.858/201.542/301.538/10.553 ms

So I decided to try an evaluation period for a line from the telecom provider Avelacom to get better latency and predictability. To interconnect the VPCs, I started from scratch:

1. Create new VPCs in the regions you want to interconnect. You don’t want to use the default subnets in each region, since the default CIDR used by AWS is the same for every region.

AWS Console → Services → VPC → Your VPCs → Create VPC

For my example I use the 10.10.4.0/24 CIDR for my VPC in the eu-west-1 AWS region (Ireland, Dublin) and 10.10.5.0/24 in the ap-northeast-1 AWS region (Japan, Tokyo).
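If you prefer scripting this step, the AWS CLI equivalent is roughly:

aws ec2 create-vpc --region eu-west-1 --cidr-block 10.10.4.0/24
aws ec2 create-vpc --region ap-northeast-1 --cidr-block 10.10.5.0/24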

2. Create subnets for each of the three availability zones (AZs) in each region:

10.10.5.0/26 for ap-northeast-1a, 10.10.5.64/26 for ap-northeast-1c, and 10.10.5.128/26 for ap-northeast-1d.

For Dublin I use 10.10.4.0/26 for eu-west-1a, 10.10.4.64/26 for eu-west-1b, and 10.10.4.128/26 for eu-west-1c.

AWS Console → Services → VPC → Subnets → Create Subnet

When creating a subnet, choose the VPC you created in the first step, choose the AZ in which the subnet will reside, and give your subnets descriptive names.
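From the CLI this would look roughly like the following (the VPC ID is a placeholder; repeat with the corresponding CIDRs and AZs for Dublin):

aws ec2 create-subnet --region ap-northeast-1 --vpc-id vpc-22222222 --cidr-block 10.10.5.0/26 --availability-zone ap-northeast-1a
aws ec2 create-subnet --region ap-northeast-1 --vpc-id vpc-22222222 --cidr-block 10.10.5.64/26 --availability-zone ap-northeast-1c
aws ec2 create-subnet --region ap-northeast-1 --vpc-id vpc-22222222 --cidr-block 10.10.5.128/26 --availability-zone ap-northeast-1d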

3. To let traffic from your future EC2 instances reach the internet, you have to create an Internet Gateway in

AWS Console → Services → VPC → Internet gateways → Create Internet Gateway.

After you have named and created the new IGW, you can immediately press Attach to VPC and choose the fresh VPC you created in step 1. An IGW attaches to exactly one VPC, so repeat this in each region.
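The CLI equivalent, with placeholder IDs, would be roughly:

aws ec2 create-internet-gateway --region eu-west-1
aws ec2 attach-internet-gateway --region eu-west-1 --internet-gateway-id igw-44444444 --vpc-id vpc-11111111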

4. Add a route to the route table that was automatically created for your VPC in

AWS Console → Services → VPC → Route tables

Choose the route table attached to the new VPC and press Edit Routes in the Routes section. Add a 0.0.0.0/0 route targeting the IGW you just created, so that traffic from your future VMs can be routed to the internet. Now you are ready to create some EC2 instances.
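The same default route can be added from the CLI (route table and IGW IDs are placeholders):

aws ec2 create-route --region eu-west-1 --route-table-id rtb-55555555 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-44444444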

5. Create your instances in

AWS Console → Services → EC2 → Instances → Launch Instances

When creating them, make sure you use the new VPCs and one of the recently created subnets. It can be handy to use an Elastic IP to fix the public IP address, but simply enabling auto-assignment of a public IP works if your connection to the exchange does not involve any IP whitelisting. You might want to experiment with placing your servers in different AZs depending on the best latency you can get to the resource (crypto exchange) of interest in the region, but here I just use the 1a AZ in each region for demo purposes.
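A minimal CLI sketch of launching an instance into one of the new subnets (the AMI, instance type, key name, and subnet ID are all placeholders):

aws ec2 run-instances --region ap-northeast-1 --image-id ami-00000000 --instance-type t3.micro --subnet-id subnet-66666666 --associate-public-ip-address --key-name my-key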

6. Create a Virtual Private Gateway in

AWS Console → Services → VPC → Virtual private network (VPN) → Virtual Private Gateway → Create virtual private gateway

When creating them, use the default ASN for each gateway.

7. Attach the newly created Virtual Private Gateways to your VPCs.
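Steps 6 and 7 from the CLI would look roughly like this (IDs are placeholders; omitting --amazon-side-asn keeps the default ASN):

aws ec2 create-vpn-gateway --region eu-west-1 --type ipsec.1
aws ec2 attach-vpn-gateway --region eu-west-1 --vpn-gateway-id vgw-77777777 --vpc-id vpc-11111111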

8. Enable route propagation in:

AWS Console → Services → VPC → Route Tables (select the route table attached to your VPC) → Route Propagation → Enable route propagation for the created Virtual Private Gateway
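Or from the CLI (placeholder IDs):

aws ec2 enable-vgw-route-propagation --region eu-west-1 --route-table-id rtb-55555555 --gateway-id vgw-77777777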

9. Interconnect your VPGs with the telecom provider’s line. In my case that was Avelacom, which I had provided beforehand with my Amazon Account ID and the names of the regions I wanted to interconnect:

Go to AWS Console → Services → Direct Connect → Virtual interfaces (press on each) → Accept, and choose which VPG you want to attach the line to.
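If you prefer the CLI, you can list the virtual interfaces the provider has shared with you and accept each one (the VIF ID is a placeholder):

aws directconnect describe-virtual-interfaces --region eu-west-1
aws directconnect confirm-private-virtual-interface --region eu-west-1 --virtual-interface-id dxvif-88888888 --virtual-gateway-id vgw-77777777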

10. Inform the provider after you have accepted the requests in the Virtual interfaces section. It then takes some time for the provider to finish the configuration on their side.

Once you get confirmation from the provider that everything is done, you should be able to test the new latency between your virtual machines:

--- 10.10.4.28 ping statistics ---

3600 packets transmitted, 3600 received, 0% packet loss, time 3601277ms

rtt min/avg/max/mdev = 150.942/151.120/166.567/0.402 ms

Not bad, 151.1ms average latency! So how can we use that? Let’s check whether it gives any benefit. First, the public IP of Bitmex in Ireland:

# ping bitmex.com

PING bitmex.com (18.66.171.111) 56(84) bytes of data.

64 bytes from server-18-66-171-111.dub56.r.cloudfront.net (18.66.171.111): icmp_seq=1 ttl=241 time=0.788 ms

OK, it looks like 18.66.171.111 is really close, just 0.788 ms away. This is in line with what Bitmex states on their support page:

“Our servers are located at AWS EU-West-1.”

Checking ping from Tokyo VM to the same address:

# ping 18.66.171.111

PING 18.66.171.111 (18.66.171.111) 56(84) bytes of data.

64 bytes from 18.66.171.111: icmp_seq=1 ttl=217 time=201 ms

So we are getting 201ms to the same Bitmex address over the internet from Tokyo.

AWS VPC peering gave us ~201.5ms average latency, so it would hardly give us any benefit. Let’s check Avelacom’s test line instead.

I configured a WireGuard tunnel between my two VMs in Dublin and Tokyo to give them a directly connected route to each other. I won’t describe the process of setting up WireGuard tunnels as services here, as it’s out of the scope of this article, but the following configuration was used on the endpoints (a short sketch of bringing the tunnels up follows the configs):

Tokyo side:

[Interface]

Address = 192.168.20.1/32

PrivateKey = %specify_Tokyo_PrivateKey_here%

ListenPort = 51820

[Peer]

PublicKey = %specify_Dublin_PublicKey_here%

AllowedIPs = 192.168.20.2/32,18.66.171.111/32

Endpoint = 10.10.4.28:51820

PersistentKeepalive = 25

Dublin side:

[Interface]

Address = 192.168.20.2/32

PrivateKey = %specify_Dublin_PrivateKey_here%

ListenPort = 51820

[Peer]

PublicKey = %specify_Tokyo_PublicKey_here%

AllowedIPs = 192.168.20.1/32

Endpoint = 10.10.5.41:51820

PersistentKeepalive = 25
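To bring the tunnels up, one typical approach (a sketch; the file names are my choice, matching the vpn-dublin interface name that appears in the routing output below) is to generate a key pair on each VM, save the configs under /etc/wireguard/, and start them with wg-quick:

# On each VM: generate keys; the public key goes into the peer’s config
wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey

# Tokyo: save its config as /etc/wireguard/vpn-dublin.conf, then:
wg-quick up vpn-dublin

# Dublin: save its config as /etc/wireguard/vpn-tokyo.conf, then:
wg-quick up vpn-tokyo

# Verify the handshake and transfer counters
wg show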

Adding the Bitmex IP (18.66.171.111/32) to AllowedIPs on the Tokyo side alters the Tokyo routing table, so traffic to 18.66.171.111 now goes via Dublin:

root@ip-10-10-5-41:/home/ubuntu# ip route get 18.66.171.111

18.66.171.111 dev vpn-dublin src 192.168.20.1 uid 0

    cache

To make the Dublin VM route my traffic onward, I need to enable IPv4 forwarding, disable the RP filter, and enable NAT for outgoing traffic via the default network interface:

sysctl -w net.ipv4.ip_forward=1  # enable IPv4 forwarding

for i in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 0 > "$i"; done  # 0 disables reverse-path filtering

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE  # NAT outgoing traffic via the default interface
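Note that these settings don’t survive a reboot by default. One common way to persist them (a sketch, assuming Ubuntu with the iptables-persistent package available):

echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.d/99-forwarding.conf
echo 'net.ipv4.conf.all.rp_filter=0' >> /etc/sysctl.d/99-forwarding.conf
sysctl --system

apt-get install -y iptables-persistent
netfilter-persistent save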

Now my Tokyo machine should be able to reach Bitmex via Avelacom’s channel and my Dublin VM:

root@ip-10-10-5-41:/home/ubuntu# ping 18.66.171.111

PING 18.66.171.111 (18.66.171.111) 56(84) bytes of data.

64 bytes from 18.66.171.111: icmp_seq=1 ttl=240 time=152 ms

64 bytes from 18.66.171.111: icmp_seq=2 ttl=240 time=152 ms

Nice! Almost 25% lower latency!

Crypto trading newcomers may be misled by the fact that all the resources they use are hosted in the cloud: it isn’t obvious that alternative, more optimal paths to their destinations exist. So if your activity is sensitive to financial risks, consider more resilient solutions adopted from the traditional HFT world, such as dedicated telecom lines. These interconnect regions more efficiently: latency improves not only because of the alternative paths, but also because connectivity with the cloud provider’s termination points is optimized via AWS Direct Connect. Even if AWS uses similar paths, you will still get better latency. You can also be confident that your data will arrive without packet loss, because these lines come with guaranteed bandwidth.