It’s that time of the month again. I’ve been away for a while, chasing after doctors in the hospital after my mother’s emergency heart surgery. I’m thankful she’s recovering well. She’s the light of my life, and now that she’s safe, I can work on Nexus.js once more!
If you’re reading this but you don’t know what Nexus.js is, please read the introduction here:
Now that that’s out of the way…
Let’s see how it works.
First, you have an Acceptor. The Acceptor’s job is to bind to an interface (or multiple interfaces) and listen for connections. When there’s a new connection, it creates a TCPSocket and emits a ‘connection’ event.
The ‘connection’ event is where you should handle your traffic; the second argument contains the endpoint information (incoming address and port).
The TCPSocket passed to your callback is a bidirectional I/O device, and just like any other I/O device, you can construct streams on top of it to handle input and output and manipulate the data.
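Putting those pieces together, a server built on this API might look something like the sketch below. Note that this is illustrative pseudocode: the namespace (`Nexus.Net.TCP.Acceptor`), the `bind`/`listen` method names, and the stream classes are all assumptions based on the description above, not verified API.

```javascript
// Hypothetical sketch of the Acceptor API described above.
// Class and method names are assumptions, not verified API.
const acceptor = new Nexus.Net.TCP.Acceptor();

// Bind one or more interfaces, then start listening for connections.
acceptor.bind('0.0.0.0', 10000);

// Each new connection yields a TCPSocket plus the remote endpoint info.
acceptor.on('connection', (socket, endpoint) => {
  console.log(`connection from ${endpoint.address}:${endpoint.port}`);
  // The socket is a bidirectional I/O device: construct streams on top
  // of it to handle input and output and manipulate the data.
  const input = new Nexus.IO.ReadableStream(socket);   // assumed stream class
  const output = new Nexus.IO.WritableStream(socket);  // assumed stream class
  input.pipe(output); // echo everything back
});

acceptor.listen();
```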
So, let’s stress-test it a little. How much memory would it consume for 1024 connections?
18 MB for 1000 connections? Nice. But then again, those are just connections. No data was sent to the server by the clients.
So let’s try again with data. This time, each client will send one message and keep the line open until they all disconnect.
38.7 MB. Now let’s run this last test against Node.js, with the following code:
So Node.js wins on memory, right?
Not at all. If you look closely at the connections, you’ll notice that they’re all handled sequentially, while Nexus.js was grinding all of the cores in parallel to handle the load. To get the real memory Node.js would consume if you scaled this use case with the cluster API, multiply its per-process usage by the number of cores (4 * 15 MB = 60 MB), and even then it would not utilize logical CPUs optimally.
This is just the start. I’m already formulating plans for the HTTP (and HTTP/2) server APIs.