Last week @nateberkopec wrote “Is Ruby Too Slow For Web-Scale?”, which discusses the old rumor that Rails is too slow to be Web-Scale. Guess what!? It is not. Nate goes into detail about why, and the whole Rails-Bubble Twittersphere follows along. I agree with the general conclusion, but in my opinion the post falls short on a couple of very important points (e.g. the price you have to pay. Literally!). I’d like to use this post to clear up some of the fog and to bring the whole Rails ecosystem against me. ;-)
Nate writes his posts to get more visibility, which translates into more consulting contracts. So do I. We both make a living by pushing the speed limits of Rails applications. This confrontation is not personal; it is just to point out some flaws which are important to realize. I’m happy to buy him, or any other person I mention in this post or any follow-up tweet, a drink any time!
At one point Nate says that a user will not feel the difference between a 1ms (fast) and a 100ms (slow) answer from a Rails webserver (meaning the time it takes Rails to render the HTML on the server, which then still has to be delivered to the client). He even uses a study by Jakob Nielsen to make his point. Nate’s argument is wrong and misleading. It does not account for typical WebPerformance problems, and what is more important: he misreads the work of Jakob Nielsen. Nielsen’s study says that everything below 100 milliseconds is perceived as “reacting instantaneously”, and that “1.0 second is about the limit for the user’s flow of thought to stay uninterrupted”. Because of that, the Gold Standard for WebPerformance is everything below 1 second. Visit google.com anywhere in the US and it will fire up (most of the time) in less than 1 second. Which is one of the reasons it is so popular. You wouldn’t use Google if it took 5 seconds to load.
So if you want to play in that WebPerformance premium league, then there is a huge difference between 100ms and 1ms. Companies hire people like Nate and me because of that difference!
It would take forever to walk you through the implications of that 100ms delay in the context of WebPerformance. If you really want to understand them, I highly recommend reading High Performance Browser Networking by Ilya Grigorik, who is the god of WebPerformance.
But if the WebPerformance of the given page is bad anyway (meaning everything above 2-3 seconds), then Nate’s argument becomes more valid. And once you hit the 5 second threshold, it doesn’t matter at all.
Bottom line: If you want a really fast webpage, everything counts, and 100ms on the server makes a big difference.
Nate writes:
First of all: I disagree with those numbers. From my experience as a Ruby on Rails consultant, the interaction with the database is the least of all problems. Of course every once in a while a company forgets to set an index or does some very crazy joins which take forever, but in 9 out of 10 cases the SQL database is not the biggest bottleneck.
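To make the “forgot to set an index” case concrete, here is a minimal sketch of how such a fix typically looks in a Rails migration (the table and column names are made up for illustration, not taken from any real client):

```ruby
# Hypothetical migration: the orders table is constantly filtered by
# user_id, but the column was never indexed, so every lookup turns into
# a sequential scan once the table grows.
class AddIndexToOrdersOnUserId < ActiveRecord::Migration[5.0]
  def change
    add_index :orders, :user_id
  end
end
```

Run the migration afterwards; a missing index like this is usually the cheapest database win you will ever get.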
Don’t misunderstand me: Every millisecond counts and there is always room for improvement.
My point is: people don’t care where they lose time. They use Rails as a one-stop shop and they use the gems and everything in the Rails ecosystem. They use it because it is so easy to use. They don’t want to analyse which data storage solution might become a bottleneck.
Over the years Rails became very easy to use and it does have a great ecosystem, but it falls short performance-wise. People don’t think about the costs of a gem. They just include it and fire up bundle.
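One low-effort way to make those costs visible (my suggestion, not something from Nate’s post) is the derailed_benchmarks gem, which reports roughly how much memory each entry in your Gemfile adds at boot time. A minimal sketch:

```ruby
# Gemfile
group :development do
  gem "derailed_benchmarks"
end

# Then, from the shell:
#
#   bundle exec derailed bundle:mem
#
# This prints the memory each required gem adds to the process, which is
# usually the first surprise when you audit a grown Rails application.
```

It won’t tell you everything, but it is a quick reality check before the next gem lands in your Gemfile.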
At one point Nate talks about the benefit of rewriting a slow Rails application as a fast (e.g. Phoenix) application not being big enough. He says: “So, congratulations: you rewrote your application (or chose your framework) to save $3,000/month.”
Here’s where he is wrong:
VC-funded companies which burn money like there is no tomorrow are the exception. Most of us, and most of my clients, have to save money wherever possible.
For many low-cost or free services this becomes an even bigger problem. Please have a look at https://www.vutuv.de, which is a business network (think LinkedIn, but free, secure and open-source). Vutuv was initially written as a Ruby on Rails application, but it was rewritten as a Phoenix/Elixir application for WebPerformance and budget reasons.
While I’m ranting, let’s throw DHH into this discussion too. He added this tweet to the discussion:
Interesting numbers. But I’d like to ask everybody: what do these numbers mean? Nothing, without knowledge of their server infrastructure and their code base. How much money does 37signals burn for their servers?
BTW: I’m not impressed by a 90th percentile of 180ms. The peaks are responsible for most headaches.
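To illustrate why I care more about the peaks than about the 90th percentile, here is a tiny Ruby sketch with made-up response times (these are illustrative numbers, not Basecamp’s data):

```ruby
# Illustration only: 95 "good" responses plus a handful of slow outliers.
def percentile(values, pct)
  sorted = values.sort
  sorted[((pct / 100.0) * (sorted.length - 1)).round]
end

response_times_ms = Array.new(95) { rand(80..180) } + [950, 1200, 2300, 3100, 4000]

puts "p90: #{percentile(response_times_ms, 90)} ms" # looks great on a dashboard
puts "p99: #{percentile(response_times_ms, 99)} ms" # much less great
puts "max: #{response_times_ms.max} ms"             # the support ticket
```

A 90th percentile of 180ms tells you nothing about the 1 in 100 requests that take seconds, and those are the ones users remember.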
In my opinion Ruby on Rails is relatively slow. Compared with the Phoenix Framework, which is the current new savior for many Rails developers, it is 10 times slower. Does that matter? Is it bad? Most times not.
Knowing that alternatives like the Phoenix Framework are faster is as relevant as knowing that Assembler is faster than Rust or C. Of course Assembler is fast, but it is so much more painful to use! You only use it when speed and code size are paramount.
So Rails is most times slower and more expensive (server-wise), but that doesn’t mean it is a bad choice. My point is just to make everybody aware of this. I have seen many companies running into trouble because of too-high Heroku bills.
Don’t use Rails just because Shopify and Basecamp do. Use it because you like the framework. Because you feel that it gives you the best development environment. Use it when you understand its implications for your budget. Please take the time to read “Phoenix is better but Rails is more popular”.
And please contact me by email ([email protected]) or DM on Twitter (@wintermeyer) in case you run into any performance trouble or need Ruby on Rails or Phoenix consulting/training.