I have always liked the emotion of 1x1 online voting duels like the ones in Big Brother Brasil and The Voice battles. The way they reach millions of people is incredible. So I decided to build a simple poll application able to handle one day of voting in a large country like Brazil or the USA, with a challenge: I wanted to do it from scratch, in 2 hours, for a few cents.
As a goal, I imagined a country of 300 million people where 70% of the population votes. That is 210 million votes in one day, 8.75 million per hour, or approximately 2,500 req/sec.
To do that, I chose the simple and powerful combination of Golang and Redis (with AOF persistence), running on a plain Digital Ocean droplet with Ubuntu, without any OS fine-tuning. I didn't worry about auth, crawlers or rate limiting in this project.
For this goal, I created a project with 3 files: docker-compose.yml, Dockerfile and main.go. (https://github.com/danhenriquesc/go-poll)
The docker-compose.yml and Dockerfile basically build up a Redis server and a Golang 1.11 API running on localhost:8000. I just need to run a single command: docker-compose up
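I haven't reproduced the exact files from the repository here, but a minimal docker-compose.yml along these lines wires the two pieces together; the service names, the Redis image tag and the AOF flag are my assumptions, not necessarily what the repo uses:

version: "3"

services:
  redis:
    image: redis:4-alpine
    # persist votes to disk with the append-only file (AOF) mentioned above
    command: ["redis-server", "--appendonly", "yes"]
    volumes:
      - ./data:/data

  api:
    build: .          # built from the Dockerfile with the Go 1.11 toolchain
    ports:
      - "8000:8000"   # API exposed on localhost:8000
    depends_on:
      - redis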
The main.go is a 96-line file that creates an API with 2 endpoints:
Two Golang packages were used in this API:
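As a rough sketch of what a main.go like this can look like, here is a minimal version using the standard library net/http and the go-redis client; these two packages, the /results route and the votes:<id> key format are my assumptions, not necessarily what the repository uses (the wrk commands below only confirm a POST /vote/<id> endpoint):

package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"

	"github.com/go-redis/redis" // assumed client; check the repo for the real packages
)

var client = redis.NewClient(&redis.Options{
	Addr: "redis:6379", // service name from docker-compose; adjust if running locally
})

// vote increments the Redis counter for the candidate in the URL, e.g. POST /vote/2.
func vote(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}
	id := strings.TrimPrefix(r.URL.Path, "/vote/")
	if id == "" {
		http.Error(w, "missing candidate id", http.StatusBadRequest)
		return
	}
	total, err := client.Incr("votes:" + id).Result()
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Fprintf(w, "candidate %s has %d votes\n", id, total)
}

// results returns the current totals for the two candidates.
func results(w http.ResponseWriter, r *http.Request) {
	for _, id := range []string{"1", "2"} {
		total, _ := client.Get("votes:" + id).Int64()
		fmt.Fprintf(w, "candidate %s: %d\n", id, total)
	}
}

func main() {
	http.HandleFunc("/vote/", vote)
	http.HandleFunc("/results", results)
	log.Fatal(http.ListenAndServe(":8000", nil))
}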
After creating the project, I decided to check how many requests it could handle using the wrk tool. I tested it in a cheap and realistic way, on Digital Ocean droplets.
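The wrk runs below use a small post.lua script to turn each request into a POST. wrk scripts are written in Lua, and a minimal version looks something like this (the body and header values here are placeholders, since the vote endpoint doesn't need them):

wrk.method = "POST"
wrk.body   = ""
wrk.headers["Content-Type"] = "application/x-www-form-urlencoded"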
I chose 3 Standard Droplet options to test: $5/month (1 vCPU, 1 GB RAM), $15/month (3 vCPU, 1 GB RAM) and $80/month (6 vCPU, 16 GB RAM).
All three ran Ubuntu 16.04.4 x64, without any fine-tuning.
You can see the benchmark sets below:
$5/month droplet
Execution 1
root@ubuntu-s-1vcpu-1gb-nyc1-01:~# wrk -t 1000 -c 10000 -d 1m -s ./post.lua http://68.183.137.138:8000/vote/2
Running 1m test @ http://68.183.137.138:8000/vote/2
  1000 threads and 10000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   607.99ms  157.48ms   1.99s    82.22%
    Req/Sec    24.63     22.40    292.00    75.02%
  345301 requests in 1.00m, 52.72MB read
Requests/sec:   5755.02
Transfer/sec:      0.88MB
Execution 2
root@ubuntu-s-1vcpu-1gb-nyc1-01:~# wrk -t 1000 -c 20000 -d 1m -s ./post.lua http://68.183.137.138:8000/vote/2
Running 1m test @ http://68.183.137.138:8000/vote/2
  1000 threads and 20000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   860.65ms  296.29ms   2.00s    65.63%
    Req/Sec    39.72     45.10    686.00    83.99%
  358947 requests in 1.00m, 54.94MB read
Requests/sec:   5982.46
Transfer/sec:      0.91MB
$15/month droplet
Execution 1
root@ubuntu-s-3vcpu-1gb-nyc1-01:~# wrk -t 7500 -c 10000 -d 1m -s ./post.lua http://204.48.20.138:8000/vote/2
Running 1m test @ http://204.48.20.138:8000/vote/2
  7500 threads and 10000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   631.39ms  107.29ms  798.00ms   91.50%
    Req/Sec     1.25      2.28    272.00    97.79%
  735058 requests in 1.01m, 112.16MB read
Requests/sec:  12091.66
Transfer/sec:      1.85MB
Execution 2
root@ubuntu-s-3vcpu-1gb-nyc1-01:~# wrk -t 1000 -c 13000 -d 1m -s ./post.lua http://204.48.20.138:8000/vote/1
Running 1m test @ http://204.48.20.138:8000/vote/1
  1000 threads and 13000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.08s    89.31ms   1.91s    80.95%
    Req/Sec    22.67     29.20    650.00    87.31%
  714698 requests in 1.00m, 113.13MB read
Requests/sec:  11884.94
Transfer/sec:      1.88MB
Execution 3
root@ubuntu-s-3vcpu-1gb-nyc1-01:~# wrk -t 1000 -c 15000 -d 1m -s ./post.lua http://204.48.20.138:8000/vote/1
Running 1m test @ http://204.48.20.138:8000/vote/1
  1000 threads and 15000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.32s   128.68ms   1.99s    77.60%
    Req/Sec    24.45     33.76      1.09k   85.97%
  671964 requests in 1.00m, 102.71MB read
Requests/sec:  11199.40
Transfer/sec:      1.71MB
$80/month droplet
Execution 1
root@ubuntu-s-6vcpu-16gb-nyc1-01:~# wrk -t 1000 -c 70000 -d 1m -s ./post.lua http://204.48.20.138:8000/vote/1
Running 1m test @ http://204.48.20.138:8000/vote/1
  1000 threads and 70000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   902.54ms  103.71ms   2.00s    78.21%
    Req/Sec    79.04     71.04      3.75k   84.72%
  4117831 requests in 1.01m, 637.73MB read
Requests/sec:  68630.52
Transfer/sec:     10.54MB
Execution 2
root@ubuntu-s-6vcpu-16gb-nyc1-01:~# wrk -t 1000 -c 30000 -d 1m -s ./post.lua http://204.48.20.138:8000/vote/1
Running 1m test @ http://204.48.20.138:8000/vote/1
  1000 threads and 30000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   942.14ms  127.26ms   2.00s    85.55%
    Req/Sec    47.32     55.14      1.33k   88.25%
  1830829 requests in 1.00m, 281.99MB read
Requests/sec:  30513.82
Transfer/sec:      4.68MB
Execution 3
root@ubuntu-s-6vcpu-16gb-nyc1-01:~# wrk -t 1000 -c 20000 -d 1m -s ./post.lua http://204.48.20.138:8000/vote/2
Running 1m test @ http://204.48.20.138:8000/vote/2
  1000 threads and 20000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   656.35ms   89.49ms   1.61s    84.31%
    Req/Sec    40.59     41.40      1.13k   87.53%
  1828994 requests in 1.00m, 292.16MB read
Requests/sec:  30483.21
Transfer/sec:      4.85MB
$5/month droplet
First, I tested with the cheapest Digital Ocean droplet, which costs $5/month. Since I only need to handle one day of voting, that works out to a one-day cost of approximately $0.16.
On this droplet, we got about 5,800 req/sec, which can handle roughly 20.9 million votes per hour, or around 500 million votes in a day.
With this droplet, I can easily achieve the initial goal of handling 210 million votes in one day, for $0.16.
$15/month droplet
So, with the mission accomplished, why not stress a more powerful droplet? I switched to the $15/month droplet, which costs $0.02/hr, or $0.53/day.
On this droplet, we got approximately 12,000 req/sec, basically twice the $5/mo droplet. That wasn't great: paying three times more didn't give me three times the performance.
$80/month droplet
Jumping to a new level, I decided to try the $40/mo server. But because of a misclick, I selected the $80/mo server instead, and there I got a big surprise. This droplet costs $0.12/hr ($2.85/day) and reached about 68,000 req/sec.
At this rate, we can handle about 245 million votes per hour, or nearly 5.9 billion votes in a day.
From this benchmark, we can draw some interesting insights:
With those numbers, we can see that some applications look harder and more expensive than they actually are. Even though some business rules were left out, the cost would remain low for any small business.
Besides that, other architectures like serverless functions on AWS or GCP can provide even better performance and scalability at low cost, which gives new businesses a lot of opportunities to build highly scalable applications.
The possibilities are there, just try it!