Playing around with this method, I noticed that if you do not give it a starting value, it starts iterating from the second element of the array. For the first iteration, a (the accumulator) is assigned the first element of the array: 1; b (the current value) is assigned the second element: 2. Thus, there are only 3 iterations for this example. This makes sense: first you add 1+2, then 3+3, and finally 6+4.
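The behaviour described above can be made visible by counting how many times the callback runs (this is a small illustrative sketch, not code from the original post):

```javascript
const arr = [1, 2, 3, 4];

// Without an initial value, the first element seeds the accumulator,
// so the callback only runs 3 times (for elements 2, 3 and 4).
let calls = 0;
const sum = arr.reduce((a, b) => {
  calls++;
  console.log(`iteration ${calls}: ${a} + ${b} = ${a + b}`);
  return a + b;
});

console.log(sum, calls); // 10 3
```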
Now, if you give a starting value to the reducer, a will be assigned that value and the iteration will start with the first element, so it will iterate 4 times:
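Since the original embed is not shown here, a minimal sketch of that behaviour:

```javascript
const arr = [1, 2, 3, 4];

let calls = 0;
// With 0 as the initial value, the accumulator starts at 0 and the
// callback now runs once per element, i.e. 4 times.
const sum = arr.reduce((a, b) => {
  calls++;
  return a + b;
}, 0);

console.log(sum, calls); // 10 4
```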
This stirred my curiosity: I wondered whether the implementation doing fewer iterations was actually faster. I have no experience in testing the performance of native code, but I decided to use a tool that I had seen being used: JsPerf.
On JsPerf the test with no start value runs about 10% faster (with big variations depending on the browser). These results are for small arrays.
Testing the same code with a larger array, say 10k items, does not show a significant difference. There is only a difference of a few operations per second, not a lot. This is most likely due to the dispatching (starting) time, which differs between the two implementations; that difference becomes less significant after many iterations. Purely hypothetically, if the dispatching took 2 nanoseconds and each iteration 4 nanoseconds, then for 4 iterations those 2 nanoseconds would represent 2 / (4 × 4 + 2) ≈ 11.11% of the total time, but for 100 iterations only 2 / (4 × 100 + 2) ≈ 0.5%.
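The arithmetic above (with those same hypothetical 2 ns and 4 ns figures) can be checked in a couple of lines:

```javascript
// Hypothetical numbers from the text: dispatching costs 2 ns,
// each iteration 4 ns. Dispatch overhead as a share of total time:
const overheadPercent = (iterations) => {
  const dispatch = 2;     // ns, assumed
  const perIteration = 4; // ns, assumed
  return (dispatch / (perIteration * iterations + dispatch)) * 100;
};

console.log(overheadPercent(4).toFixed(2));   // "11.11"
console.log(overheadPercent(100).toFixed(2)); // "0.50"
```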
As you can see, this big difference is valid only for small arrays. That could be interesting for an array.map(i => i.reduce()) where you know that each reduction will happen on a small array.
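For example, with hypothetical data made of many small rows, omitting the initial value saves one callback call per row:

```javascript
// Hypothetical data: many small arrays, each reduced independently.
const rows = [[1, 2, 3], [4, 5, 6], [7, 8, 9]];

// No initial value passed to reduce: each row's first element
// seeds the accumulator, so there is one fewer call per row.
const sums = rows.map(row => row.reduce((a, b) => a + b));

console.log(sums); // [ 6, 15, 24 ]
```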
But wait, JsPerf tests in the browser; what about Node.js? Surprise, surprise… it's the other way around. I wrote a little script to test that:
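The original script is not reproduced here; a minimal sketch of such a benchmark could look like this (process.hrtime.bigint() gives nanosecond timestamps; the array size and run count are arbitrary):

```javascript
// Benchmark sketch: time the same reduction with and without a start value.
const arr = Array.from({ length: 10 }, (_, i) => i + 1);
const RUNS = 1e6;

const time = (fn) => {
  const start = process.hrtime.bigint();
  for (let i = 0; i < RUNS; i++) fn();
  // Convert nanoseconds to milliseconds.
  return Number(process.hrtime.bigint() - start) / 1e6;
};

const noStart = time(() => arr.reduce((a, b) => a + b));
const withStart = time(() => arr.reduce((a, b) => a + b, 0));

console.log(`no start value:   ${noStart.toFixed(1)} ms`);
console.log(`with start value: ${withStart.toFixed(1)} ms`);
console.log(`difference: ${((noStart - withStart) / noStart * 100).toFixed(1)}%`);
```

Timings from a loop like this are noisy; averaging several runs, as the post does, gives a fairer picture.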
I tested different versions of Node.js. I calculated the time difference in percent between the implementation with no start value and the one with a start value. A negative number indicates that the variant with a start value was actually faster. I have given you the fastest and the slowest tests; I have done many other tests that range between those numbers.
We can see that Node is actually faster with a start value on small arrays, but that on big arrays the implementation with no start value is faster, except on Node 8, where the implementation with a start value performed many times faster than the other implementation.
We are talking about milli- and nanoseconds here… For example, the big array test that performed best on Node 8 was only about 0.076 ms faster…
These numbers are small, so I guess it does not make sense to worry about them unless you handle tons of requests. But this was a fun experiment. Let me know what you think!