Fixing bugs and handling 186k requests/second using Python

by Raphael Deem, January 8th, 2017
Sanic is a Python3 framework built using the somewhat newly introduced coroutines, harnessing uvloop and based on Flask. However, it had an issue preventing it from utilizing multiple processes correctly. The previous code tried to spawn new servers naively, without explicitly inheriting the socket connection:

processes = []
for _ in range(workers):
    process = Process(target=serve, kwargs=server_settings)
    process.start()
...

This resulted in only one process actually handling requests. With the help of Guido van Rossum and the creator of uvloop, Yury Selivanov (relevant link here), I was able to implement a solution by creating a socket in the parent process and passing it to the child processes:

sock = socket()
sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
sock.bind((server_settings['host'], server_settings['port']))
set_inheritable(sock.fileno(), True)
server_settings['sock'] = sock
server_settings['host'] = None
server_settings['port'] = None

processes = []
for _ in range(workers):
    process = Process(target=serve, kwargs=server_settings)
    process.daemon = True
    process.start()
    processes.append(process)

I verified locally that the changes had an impact on throughput (I was able to double the number of requests per second using 4 processes), and that multiple processes were actually responding to requests (by having each response include the process id). Finally, I provisioned a 20-core Digital Ocean droplet and benchmarked the server with wrk -t250 -c500 http://localhost:8000, using 20 worker processes:


from sanic.response import text
from sanic import Sanic

app = Sanic()

@app.route('/')
async def hello(request):
    return text("OK")

if __name__ == '__main__':
    app.run(workers=20)

and was able to achieve 186k requests/second on the best run. This should continue to scale with the number of CPUs. The change hasn't been merged yet (EDIT: the change has since been merged, so you can do this from the master branch now), though it likely will be soon. In the meantime, you should be able to demonstrate it yourself by cloning the sanic repository, checking out the workers branch, and saving the above file as hello.py:

git clone https://github.com/channelcat/sanic
cd sanic
git checkout workers
pip3 install -r requirements.txt
python3 hello.py

Of course, you'll need a 20-core DO droplet too. I only ran mine for 6 minutes, and they cost less than $1 per hour, so if you're interested in replicating the result, let me know how it goes! I was going to try it with 36 cores on AWS, but apparently my ~2-year-old account is still "pending" for some reason. Anyway, thanks for reading! Also, thanks to Yury and the BDFL!
