When developers get tired of saying Y (yes) to things like user agreements, installations, or confirmation prompts in the terminal, they write programs such as yes. Here is the story of achieving high throughput of “y\n” in Node.
The inspiration for this story comes from the “How fast is yes in Go” article, which was itself inspired by this discussion. In our case we’ll also rely on pv to measure throughput and record peak results, replicating the default behavior of yes by outputting the character “y” followed by an appropriate end-of-line character. We will use the 8.01 GiB/s measured from yes | pv > /dev/null as our baseline. The hardware setup is detailed below.
Perhaps the first thing that comes to mind is using console.log, our good old friend that hardly ever fails us, inside an (almost) infinite while loop to keep the application going. And since console.log automatically adds an EOL after each call, that saves us a step:
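A minimal sketch of that naive version could look like this:

```js
// Naive yes: console.log appends the EOL for us, so we just loop forever.
while (true) {
  console.log('y');
}
```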
As you can see, the performance is terrible: this is almost 10,000 times slower than our baseline. What if we ditch the console abstraction and write straight to stdout? All we need is to add the EOL constant imported from the os module and measure:
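Writing directly to stdout might look roughly like this:

```js
const { EOL } = require('os');

// Skip the console abstraction and write "y" plus the platform EOL directly.
while (true) {
  process.stdout.write('y' + EOL);
}
```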
That’s relatively good, but we can do a lot better. Following the previous article, let’s buffer our output. We’ll use a buffer matching the smallest page size, 4096 bytes, to gain some efficiency when writing from memory to the underlying file descriptor (fd):
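A sketch of that buffered attempt, relying on the fill parameter of Buffer.alloc:

```js
const { EOL } = require('os');

// Pre-fill a 4096-byte buffer with "y\n" pairs and write it in one call per iteration.
const buffer = Buffer.alloc(4096, 'y' + EOL);

while (true) {
  process.stdout.write(buffer);
}
```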
For a short period we get some throughput, and then nothing. process.stdout is a stream under the hood: it writes to a file descriptor whenever it can push data through, and otherwise buffers the data in memory until it can be pushed again. Unfortunately, our synchronous loop is overwhelming the asynchronous stream (some exceptions apply), especially with that amount of data being buffered back into the stream. We can patch this up by writing a semi-async loop:
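One common way to write such a semi-async loop is to keep writing until the stream signals backpressure and then wait for its 'drain' event; a sketch, under those assumptions:

```js
const { EOL } = require('os');

const buffer = Buffer.alloc(4096, 'y' + EOL);

// write() returns false once the stream's internal buffer is full;
// pause until 'drain' fires instead of piling more data into memory.
function writeLoop() {
  while (process.stdout.write(buffer)) {}
  process.stdout.once('drain', writeLoop);
}

writeLoop();
```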
That’s much better! It puts us in GiB territory, but I reckon we can push Node even further.
Note that the second parameter of _Buffer.alloc_ fills the entire buffer with the given string.
I mentioned before that process.stdout is a somewhat exotic stream, so why not pipe a stream into it? Streams in essence work by writing/reading data based on available throughput and buffering the rest for later. We can get clever with our yes.js program and only send what is needed.
We’ll start by creating a Readable stream that pushes an even-sized buffer, since it is important that each chunk always holds complete “y\n” pairs and never splits a line break:
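A sketch of such a stream, assuming the two-byte “y\n” pair used on Linux (the class name is just illustrative):

```js
const { Readable } = require('stream');
const { EOL } = require('os');

class YesStream extends Readable {
  _read(size) {
    // Round the requested size down to an even number so each chunk
    // contains only whole "y\n" pairs and never splits a line break.
    const evenSize = size - (size % 2);
    this.push(Buffer.alloc(evenSize, 'y' + EOL));
  }
}
```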
The _read method of our stream will be called with a size argument telling us how much data to read, or in our case produce. We can then create a new instance of this stream and pipe it to process.stdout, which results in 3.25 GiB/s of throughput. That’s not bad at all! But we are allocating and filling a brand new buffer every time the _read method is called. Let’s cache our buffers and create a final version:
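A possible final version, caching one pre-filled buffer per requested size (the details of the published package may differ):

```js
const { Readable } = require('stream');
const { EOL } = require('os');

// Cache pre-filled buffers by size so _read never allocates and fills
// twice for the same chunk length.
const cache = new Map();

class YesStream extends Readable {
  _read(size) {
    const evenSize = size - (size % 2);
    let buffer = cache.get(evenSize);
    if (!buffer) {
      buffer = Buffer.alloc(evenSize, 'y' + EOL);
      cache.set(evenSize, buffer);
    }
    this.push(buffer);
  }
}

new YesStream().pipe(process.stdout);
```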
The final peak throughput is 4.67 GiB/s, a whopping 50,000 times faster than where we started and better than half of our baseline. On the same machine, Go was capable of achieving 7.75 GiB/s with Mat Evans’ source.
As a side note, console.log isn’t a performant logging solution if you depend on it heavily, but it has a whole bunch of built-in features that a simple logger won’t have.
To measure peak throughput, all the tests were run separately on the same remote machine, which packed an Intel Xeon E3-1240 v3 (3.4 GHz) with 32 GB of DDR3 memory running Ubuntu 17.0. On the application side, Node.js versions 8.3.0 and 7.10.0 and Go 1.7.4 were used.
You can install and run the final code from npm. There are many great ways of showcasing a language or framework’s capabilities, but yes is becoming one of my personal favorites: it showcases raw, single-threaded throughput without sacrificing simplicity.