Raphael Deem

@raphael.deem

Fixing bugs and handling 186k requests/second using Python

Sanic is a Python 3 framework built on the recently introduced async/await coroutines, powered by uvloop and modeled on Flask. However, it had an issue preventing it from utilizing multiple processes correctly: the previous code tried to spawn new server processes naively, without explicitly inheriting the listening socket:

processes = []
for _ in range(workers):
    process = Process(target=serve, kwargs=server_settings)
    process.start()
    ...

This resulted in only one process actually handling requests. With the help of Guido van Rossum and the creator of uvloop, Yury Selivanov, I was able to implement a solution by creating a socket in the parent process and passing it to the child processes:

from multiprocessing import Process
from os import set_inheritable
from socket import socket, SOL_SOCKET, SO_REUSEADDR

# Create and bind the socket once, in the parent process
sock = socket()
sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
sock.bind((server_settings['host'], server_settings['port']))
set_inheritable(sock.fileno(), True)

# Hand the bound socket to the workers instead of a host/port pair
server_settings['sock'] = sock
server_settings['host'] = None
server_settings['port'] = None

processes = []
for _ in range(workers):
    process = Process(target=serve, kwargs=server_settings)
    process.daemon = True
    process.start()
    processes.append(process)
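On the child side, each worker then builds its server on the inherited socket rather than binding a host and port itself. A minimal sketch of what such a serve function might look like (this is an illustration using plain asyncio, not Sanic's actual internals; the signature mirrors the server_settings keys above):

```python
import asyncio

def serve(sock=None, host=None, port=None, **server_settings):
    # Each worker creates its own event loop and attaches a server to
    # the already-bound socket it inherited from the parent; the kernel
    # then distributes accepted connections across the workers.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    server = loop.run_until_complete(
        loop.create_server(asyncio.Protocol, sock=sock)
    )
    try:
        loop.run_forever()
    finally:
        server.close()
        loop.run_until_complete(server.wait_closed())
        loop.close()
```

The key detail is passing sock= to create_server instead of host/port, which is why the parent nulls out those settings before spawning.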

I verified locally that the change improved throughput (I was able to achieve double the number of requests per second using 4 processes), and that multiple processes were actually responding to requests (by having each respond with its process id). Finally, I provisioned a 20-core Digital Ocean droplet and benchmarked the server using wrk -t250 -c500 http://localhost:8000 with 20 worker processes:

from sanic.response import text
from sanic import Sanic

app = Sanic()

@app.route('/')
async def hello(request):
    return text("OK")

if __name__ == '__main__':
    app.run(workers=20)
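For reference, the process-id check mentioned above can be reproduced without Sanic at all; the following pure-stdlib sketch (my own illustration, not code from the project) uses the same parent-binds-then-inherits trick and shows several distinct pids answering on one socket:

```python
import os
from multiprocessing import Process
from socket import create_connection, socket, SOL_SOCKET, SO_REUSEADDR

def serve_pid(sock):
    # Minimal blocking worker: answers every connection with its own
    # pid, mimicking the verification described above.
    while True:
        conn, _ = sock.accept()
        with conn:
            conn.sendall(str(os.getpid()).encode())

def collect_pids(workers=4, requests=20):
    # The parent creates and binds the socket once, marks it
    # inheritable, and hands it to every worker -- the same pattern
    # as the Sanic fix.
    sock = socket()
    sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
    sock.bind(('127.0.0.1', 0))  # port 0: let the OS pick a free port
    sock.listen(100)
    os.set_inheritable(sock.fileno(), True)
    for _ in range(workers):
        Process(target=serve_pid, args=(sock,), daemon=True).start()
    addr = sock.getsockname()
    pids = set()
    for _ in range(requests):
        with create_connection(addr) as conn:
            pids.add(conn.recv(64).decode())
    return pids

if __name__ == '__main__':
    # Seeing more than one pid confirms multiple workers are serving.
    print(collect_pids())
```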

and was able to achieve 186k requests/second on the best run. This should continue to scale with the number of CPUs. The change hadn’t been merged at the time of writing (EDIT: the change has since been merged, so you can do this from the master branch now). In the meantime, you can reproduce it yourself by cloning the sanic repository, checking out the workers branch, and saving the above file as hello.py:

git clone https://github.com/channelcat/sanic
cd sanic
git checkout workers
pip3 install -r requirements.txt
python3 hello.py

Of course, you’ll need a 20-core DO droplet too. I only ran mine for 6 minutes, and they cost less than $1 per hour, so if you’re interested in replicating this, let me know how it goes! I was going to try it with 36 cores on AWS, but apparently my ~2 year old account is still “pending” for some reason. Anyway, thanks for reading! Also, thanks to Yury and the BDFL!
