Ruby on Rails is a tremendous framework when you want development speed for your project or startup. It’s useful right out of the box and comes with a plethora of behind-the-scenes magic to make your life easier. However, it’s not considered the fastest framework out there in terms of performance, and you will find examples of individuals and companies drifting away from Rails in favour of something else. Despite this, many companies have succeeded in scaling Rails: just take a look at Airbnb, GitHub, GitLab & Shopify.
So before you jump ship, you should consider keeping performance at the forefront of your mind when working with Rails, and you can succeed too. This article aims to list the most important tips & tricks I’ve learned over the years to make Rails run at blazing fast speeds and scale to millions of requests per minute.
First off, there are some general tips to implement in your Rails project to set yourself up for success.
You can’t improve performance if you don’t measure it first, so it’s essential to have the right metrics tracked and monitored. You should be tracking load times, request times and database query timings, amongst other things. Personally, I’ve found New Relic to be one of the best APM tools for Rails, though it is a tad on the pricey side. Skylight is a more affordable alternative, and it offers a free trial.
I know personally how horrifying updating the Rails version for your project can be if you’ve left it stale for too long, and I sympathize with anyone who has had to go through this ordeal. So do yourself a favour and try to keep in sync with the newer versions of Ruby and Rails. This helps you skip the pain of jumping multiple versions and ensures you have all the latest performance enhancements.
Pareto Principle (the 80/20 rule)
This is a well-known rule when it comes to developing software — The Pareto Principle. It’s also known as the ‘law of vital few’ and states that for most events, 80% of the effects are generated from 20% of causes. The idea behind this is to not waste time on micro-optimizations while you have bigger issues that need resolution. You won’t accomplish much by shaving off milliseconds in serialization if your database queries are extremely slow. So pick your battles carefully.
Rails is backed by an amazing community behind it and a library of gems that can easily help you accomplish complicated tasks. But it’s easy to get carried away adding gems to your project, causing bloat. Be careful while selecting what gems to add to your project and try keeping your dependencies lean.
Whenever you need to do anything complicated or long-running, consider throwing it into a background worker: sending emails, pushing notifications, uploading pictures and the like. Minimizing work in the main thread ensures a snappy response for users. The good news is that Rails has multiple options to achieve this easily, such as Sidekiq, Resque or Active Job.
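For instance, here is a minimal Active Job sketch (WelcomeEmailJob and UserMailer.welcome are hypothetical names for illustration):
class WelcomeEmailJob < ApplicationJob
  queue_as :default

  def perform(user)
    # The slow work happens here, off the request thread
    UserMailer.welcome(user).deliver_now
  end
end

# Enqueue from a controller and respond to the user immediately
WelcomeEmailJob.perform_later(current_user)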
If your project has an API, you are most likely using the ActiveModelSerializers gem to serialize your data. I would strongly suggest switching over to fast_jsonapi by Netflix. This gem is a whopping 25 times faster than ActiveModelSerializers, and I can personally vouch for that from experience.
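As a rough sketch, a fast_jsonapi serializer looks like this (MovieSerializer and its attributes are made-up names):
class MovieSerializer
  include FastJsonapi::ObjectSerializer
  attributes :name, :year
end

# Produces a JSON:API compliant payload
MovieSerializer.new(movie).serialized_json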
Sometimes, if you have a lot of static data or you can’t make things faster, another option is to use a cache. Rails makes this incredibly easy right out of the box. Here is an example of caching with an expiry time:
Rails.cache.fetch("categories", expires_in: 5.minutes) do
  Category.all.map(&:name)
end
ActiveRecord is the magical ORM (one of the best ever to exist) offered by Rails. But it’s easy to get caught up in the ease of use without understanding the details, which can lead to bottlenecks down the road.
This goes unnoticed or unimplemented by a lot of developers, either because they rush to do everything in Ruby code or because they are intimidated by writing raw SQL. But whenever possible or convenient, you should use thy database. Processing and sorting data structures in Ruby chews up CPU time, while your database can do the same work without breaking a sweat.
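A quick sketch of the difference:
# Sorting in Ruby loads every row into memory and burns app CPU
User.all.sort_by(&:created_at)

# Letting the database sort uses its indexes and query planner instead
User.order(:created_at)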
Don’t just get enamoured by the magic of Active Record without worrying about what’s happening behind the scenes. To sweep out performance bottlenecks, you need to look at the actual queries being triggered and understand them. In the development environment, Rails prints out every query it runs, which lets you spot any unnecessary ones. Here are a few tricks that will help you dig a little deeper:
# Log every SQL query to STDOUT
ActiveRecord::Base.logger = Logger.new(STDOUT)
# Get Rails to show you the SQL that will run
User.where(id: 1).to_sql
=> "SELECT \"users\".* FROM \"users\" WHERE \"users\".\"id\" = 1"
# Get Rails to explain the SQL query for more detail
User.where(id: 1).explain
# The explain tells you which index is used and what work happens behind the scenes.
=> EXPLAIN for: SELECT "users".* FROM "users" WHERE "users"."id" = $1 [["id", 1]]
QUERY PLAN
--------------------------------------------------------------------------
Index Scan using users_pkey on users (cost=0.14..8.16 rows=1 width=962)
Index Cond: (id = '1'::bigint)
(2 rows)
Don’t worry, I am not about to suggest you write everything in raw SQL queries. But learning the basics of SQL and database design will help you better understand what’s happening under the covers and allow you to optimize the queries as needed. It’s a valuable skill to have as a software developer and will go a long way in your career.
One way to make your queries more efficient is to select only what you really need. Instead of doing a SELECT *, specify which columns you need the database to retrieve. By default ActiveRecord selects everything, but you can leverage select or pluck to fix this problem.
# Benchmark both methods, because in some cases pluck
# is faster than select & map.
# select returns Category objects with only the type column loaded
Category.select(:type)
# pluck skips model instantiation and returns a plain array of values
Category.pluck(:type)
This is a classic problem. If you load a list of blogs and then fetch the comments for each one while looping over the records, you force Rails to run a separate query per blog. This can be eliminated by preloading the comments with Blog.includes(:comments), which avoids the N+1 query problem.
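A quick sketch of the problem and the fix:
# N+1: one query for the blogs, plus one query per blog for its comments
Blog.all.each { |blog| puts blog.comments.size }

# Eager loaded: two queries in total, no matter how many blogs there are
Blog.includes(:comments).each { |blog| puts blog.comments.size }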
Pro Tip: take a look at the Bullet gem to help you find any N+1 query problems.
When working on bigger teams with multiple developers on a project, you will sometimes notice that everyone touching the codebase adds their own queries along the code path. More often than not, these can be combined into fewer queries at the top of the code path. This avoids duplicated queries and lets the database strategically do the heavy lifting.
This is a database best practice that shouldn’t be ignored. If you don’t index properly, you will hurt database performance and cause unnecessary table scans. When building a new feature, think ahead to what will be queried and add the right indexes. On an existing project, you can always use Active Record Doctor or lol_dba to sniff out missing indexes.
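Adding an index is a one-line migration; the table, column and migration version here are illustrative:
class AddIndexToCommentsOnBlogId < ActiveRecord::Migration[7.0]
  def change
    add_index :comments, :blog_id
  end
end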
When you are running at scale, and you have millions of records in your tables the normal way of running migrations in Rails will, unfortunately, fail you. They will either error out or lock tables for extended periods of time, taking your site down. Having dealt with this in the past there are two tools that will solve this pain point for you:
gh-ost: GitHub’s triggerless online schema migration solution for MySQL, and the only tool that worked for me at scale.
LHM (Large Hadron Migrator): migrates your tables while they remain online, without locking them.
If there is an endpoint where you desperately need performance, consider combining all the Active Record queries scattered through the code path into a single massive SQL query at the top that pulls out every record you need for the use case.
Disclaimer: Yes, it’s hard to maintain a massive raw SQL query, and it’s also less readable. But I did mention this is only in case of desperation.
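Here is a hedged sketch of the idea using find_by_sql; the tables and columns are made up:
# One hand-written query replacing several separate Active Record calls
posts = Post.find_by_sql([<<~SQL, user_id])
  SELECT posts.*, COUNT(comments.id) AS comments_count
  FROM posts
  LEFT JOIN comments ON comments.post_id = posts.id
  WHERE posts.user_id = ?
  GROUP BY posts.id
SQL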
When deploying a Rails project there are a few things to remember regarding the underlying infrastructure and architecture.
With cloud infrastructure having come such a long way, gone are the days of having to build bare metal servers in order to scale your application. When deciding on your underlying architecture for a Rails project you should utilize a scalable cloud-based system. As an example, you can use AWS Fargate or Kubernetes to automatically scale your dockerized Rails application as needed.
Brotli is a newer compression algorithm that improves on gzip with a better compression ratio. It is now supported by most web servers, and enabling it is an easy way to shrink response sizes. And well, who doesn’t want a free web performance improvement? (Reference: Brotli vs Gzip)
Rails is notorious for using a lot of memory, especially if you have multiple puma workers running. So don’t let your application go thirsty, and give it some juice from the start. You can utilize memory-optimized instances on your cloud provider to set you up for success. And don’t forget to keep an eye on the swap usage on the server.
Since Rails 5, Puma has been the default web server, and out of the box it runs only one worker. As soon as you set up your Rails deployment, make sure to increase the number of workers to match the cores available on your machine, or another reasonable value.
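A minimal config/puma.rb sketch along those lines:
# config/puma.rb
workers ENV.fetch("WEB_CONCURRENCY") { 4 } # roughly one per core

max_threads = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads max_threads, max_threads

# Load the app before forking so workers share memory via copy-on-write
preload_app!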
If you are using a reverse proxy like Nginx, make sure the HTTP/2 option is turned on so you get all the performance advantages it offers.
We are almost at the finish line — this section is all about some extra tips for performance.
Rails magic comes with a price: a few of these methods consume a lot of resources, and it is better to avoid them. For example, dynamic finders like find_by_email() and the old find_all_by_*() variants are slow because they run through method_missing and parse the method name against the list of columns in the database.
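For illustration:
# Dynamic finder: resolved through method_missing and column-name parsing
User.find_by_email("jane@example.com")

# Regular finder API: a plain method call, no method_missing involved
User.find_by(email: "jane@example.com")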
As you scale you are bound to run into malicious attempts on various endpoints that can’t be cached and will cause expensive operations to chew up resources. To get around this, I would recommend adding a gem like rack-attack to implement throttling on endpoints like login, reset password or sign up.
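A minimal sketch of a throttle rule (the path and limits are assumptions you should tune for your app):
# config/initializers/rack_attack.rb
class Rack::Attack
  # Allow at most 5 POSTs to /login per IP in any 20 second window
  throttle("logins/ip", limit: 5, period: 20.seconds) do |req|
    req.ip if req.path == "/login" && req.post?
  end
end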
As your data set grows, you need to heed the time complexity of your code. In scenarios where lots of data needs to be processed or looped over, consider using a hash instead of defaulting to arrays.
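For example, when matching records across two collections:
# Array scan: O(n) per lookup, O(n * m) overall
orders.select { |order| users.any? { |u| u.id == order.user_id } }

# Hash lookup: build the index once, then each lookup is O(1)
users_by_id = users.index_by(&:id)
orders.select { |order| users_by_id.key?(order.user_id) }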
All your static assets should be fronted by a CDN, with sensible cache policies. If you want granular control over cache invalidation, look into ETags and Cache-Control.
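In a controller this can look like the following sketch (the action and model are hypothetical):
def show
  @post = Post.find(params[:id])

  # Sends Cache-Control: public, max-age=3600 so the CDN can cache it
  expires_in 1.hour, public: true

  # Sets ETag / Last-Modified and returns 304 Not Modified when fresh
  fresh_when @post
end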
In extreme cases, you can consider some advanced optimizations. I would argue if you’ve reached this point it might be time to consider other languages for certain parts of your project that are computationally intensive, so I will keep this section brief.
The main implementation of Ruby is written in C, which lets you rewrite the slow parts of your code as C extensions, for example if you are running cryptographic algorithms to generate certificates. If you aren’t excited about the idea of delving into C code, you can leverage third-party gems written in C by the Rails community.
With an increasing background workload, you can switch your background workers to a more performant language. By using queues like Redis or Amazon SQS, the workers can be decoupled into their own microservice. Check out this Sidekiq-compatible go-workers library, or a version of Sidekiq written in Crystal, as two options.
There are a few implementations of Ruby aiming to increase performance. If you are interested in these variations, take a quick look at TruffleRuby or JRuby.
If we look at companies using Rails we see that they were able to utilize the amazing development speed of Rails to put their customers first, and also managed to improve performance as they scaled. In this article, we talked about multiple tips & tricks for increasing performance. I hope you found this guide helpful. Till next time.
If you want to chat about cutting-edge technology, entrepreneurship or the perils of being a startup founder, find me on Twitter or on LinkedIn.