Data Loaders in a GraphQL Server

by Uroš Anđelić, December 28th, 2022

Too Long; Didn't Read

Using data loaders is the best approach for optimizing a GraphQL server. Queries and mutations are straightforward to handle, while subscriptions require extra work. Each HTTP request needs its own set of data loader instances, and each published event needs a fresh set to avoid serving stale data.

What do GraphQL and OOP design patterns have in common? They seem pretty cool at first, but then you realise they’re overkill most of the time. Other times, they’re a lifesaver.

For GraphQL to be a lifesaver, you really need to understand how to use it.

Let’s take an example of a GraphQL query that fetches N = 10 posts and the author of each post:

```graphql
query {
  posts(limit: 10) {
    id
    title
    author {
      id
      name
    }
  }
}
```
In a naive implementation of this query, the order of operations would be something like:

  • Fetch N posts from DB
  • For each post:
    • Resolve id → post.id
    • Resolve title → post.title
    • Resolve author → fetch the author from DB by its id

This would mean 1 query for N posts and N queries for N authors: the classic N+1 problem. Ideally, instead of N findById(id) queries, there would be just one findByIds(ids) query. Then, each resolver could take the author it needs, by id. Batching queries like this is done differently in different programming languages, and the pattern usually goes by names like data loader or batch loader.
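As a sketch of the pattern, here is a toy version of what libraries like `dataloader` do; `BatchLoader` and the stubbed author store are invented for illustration, not a specific library's API:

```typescript
// Minimal sketch of the batching pattern: load() calls made in the same tick
// are queued and flushed together through one findByIds-style batch function.
class BatchLoader<K, V> {
  private queue: { key: K; resolve: (value: V) => void }[] = [];
  private scheduled = false;

  constructor(private batchFn: (keys: K[]) => Promise<V[]>) {}

  load(key: K): Promise<V> {
    const promise = new Promise<V>((resolve) => {
      this.queue.push({ key, resolve });
    });
    if (!this.scheduled) {
      this.scheduled = true;
      // Flush after the current tick, once all resolvers have enqueued keys.
      queueMicrotask(() => this.flush());
    }
    return promise;
  }

  private async flush(): Promise<void> {
    const batch = this.queue;
    this.queue = [];
    this.scheduled = false;
    const values = await this.batchFn(batch.map((item) => item.key));
    batch.forEach((item, i) => item.resolve(values[i]));
  }
}

// Several author lookups collapse into a single findByIds-style call:
let dbQueries = 0;
const authorLoader = new BatchLoader<number, string>(async (ids) => {
  dbQueries += 1; // stands in for one `SELECT ... WHERE id IN (...)` query
  return ids.map((id) => `author-${id}`);
});
Promise.all([1, 2, 3].map((id) => authorLoader.load(id))).then(() => {
  console.log(`batched into ${dbQueries} query`); // → 1
});
```

Each resolver still calls `load(id)` as if it were fetching alone; the coalescing into one query happens behind the scenes.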

Each type of data requires its own data loader. One data loader would be required for resolving an author by id, a different one for resolving the number of comments on a post, a third for resolving its tags, a fourth for resolving the number of likes on each comment, and so on. All of those data loaders are usually grouped together for easier use.

Besides batching I/O operations, data loaders can and usually do cache the fetched data, so it can be served from the cache if it is needed again within the lifetime of the data loader instance. Optionally, the cache can be turned off, in which case the data loader performs only batching.
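The cache layer can be sketched as a memo map in front of the fetch, with a `cache: false` option that skips it so only batching remains; the names here are illustrative, loosely mirroring the `dataloader` API:

```typescript
// Sketch of the cache layer: repeated loads of the same key reuse the same
// promise instead of refetching for the lifetime of the loader instance.
class CachedLoader<K, V> {
  private cache = new Map<K, Promise<V>>();

  constructor(
    private fetchFn: (key: K) => Promise<V>,
    private options: { cache: boolean } = { cache: true },
  ) {}

  load(key: K): Promise<V> {
    if (!this.options.cache) return this.fetchFn(key); // batching-only mode
    let hit = this.cache.get(key);
    if (!hit) {
      hit = this.fetchFn(key);
      this.cache.set(key, hit); // lives as long as this loader instance
    }
    return hit;
  }
}

let fetches = 0;
const loader = new CachedLoader<number, string>(async (id) => {
  fetches += 1;
  return `tag-${id}`;
});
loader.load(7);
loader.load(7); // served from cache, no second fetch
console.log(fetches); // → 1
```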

GraphQL has 3 types of operations: query, mutation, and subscription. Queries and mutations are regular HTTP requests, while subscriptions are long-lived connections, usually implemented with WebSockets.

Data loaders in GraphQL queries and mutations

Data loaders are not meant to be shared across HTTP requests. That’s why, when a client makes an HTTP request to a GraphQL server, instances of all data loaders (a “bag of loaders”) are created and attached to the request’s context object. Each resolver can then pull the loader it needs from the context and use it. Once the response is sent, the context object is garbage collected, along with all the data loaders and the values they cached.

Data loaders for a single HTTP request
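The per-request wiring might look like this; `createLoaders`, `createContext`, and the resolver shape are assumptions for illustration, not a specific server library's API:

```typescript
// Illustrative per-request setup: a fresh bag of loaders is created for each
// HTTP request and attached to the context, so nothing leaks across requests.
type Author = { id: number; name: string };

function createLoaders() {
  return {
    authorById: {
      // A real loader would batch and cache; stubbed here for brevity.
      load: async (id: number): Promise<Author> => ({ id, name: `author-${id}` }),
    },
  };
}

type Context = { loaders: ReturnType<typeof createLoaders> };

// Called by the server once per incoming HTTP request:
function createContext(): Context {
  return { loaders: createLoaders() };
}

// A resolver pulls the loader it needs from the context:
const Post = {
  author: (post: { authorId: number }, _args: unknown, ctx: Context) =>
    ctx.loaders.authorById.load(post.authorId),
};

// Two requests get two independent bags:
console.log(createContext().loaders !== createContext().loaders); // → true
```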

Data loaders in subscriptions

Using data loaders in subscriptions is more complicated than in queries and mutations. The connection between the client (subscriber) and the server (publisher) is made once and stays open, and there is one context object per connection, living as long as the connection does. But data loaders should not be shared between different published events, because two events can be separated by any amount of time. Besides, keeping that much data in application memory for long periods is almost never a good idea.

1. Single event, all subscribers

For a single event, data loaders can be shared between all subscribers, but not for all resolvers. Some resolvers have auth restrictions (e.g. admin-only fields) and some depend on the authenticated user (e.g. an unread messages count). The loaders behind those resolvers often cannot be shared between users.

The way to achieve shared data loaders is for the event payload to contain a unique id. Each subscriber can then use this id to get a specific bag of loaders from a hash map of bags. This bag must contain only the common, shared loaders that are valid for every user.

Since it is very hard to know when a given event has been resolved for all subscribers, the shared bag of data loaders should have a TTL, after which it is automatically cleared. The TTL could be 0.5 s, 1 s, or 10 s; it is a balance between waiting long enough for all subscribers to be resolved and not holding on to memory for too long.

Data loaders for a single event and all subscribers
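Putting the event-id lookup and the TTL together, a sketch; all names and the 1 s TTL are illustrative assumptions:

```typescript
// Shared bags of loaders keyed by a unique event id, cleared after a TTL.
type Bag = Record<string, unknown>;

const bagsByEventId = new Map<string, Bag>();
const TTL_MS = 1000;

function createSharedLoaders(): Bag {
  return {}; // only common, user-independent loaders belong in a shared bag
}

function getSharedBag(eventId: string): Bag {
  let bag = bagsByEventId.get(eventId);
  if (!bag) {
    bag = createSharedLoaders();
    bagsByEventId.set(eventId, bag);
    // Clear the bag after the TTL, since we cannot know when every
    // subscriber has finished resolving this event.
    setTimeout(() => bagsByEventId.delete(eventId), TTL_MS);
  }
  return bag;
}

// Every subscriber resolving event "ev-42" shares one bag:
console.log(getSharedBag("ev-42") === getSharedBag("ev-42")); // → true
```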

2. Single event, single subscriber

Data loaders can also be scoped to a single event and a single subscriber, without too much trouble. Each time an event is published, a fresh bag of loaders is set on each subscriber’s context object. The catch: when the subscriber receives the resolved real-time data, the connection stays open, so the context object, the bag of loaders, and all the data it cached stay in memory. There are several ways to clear that cached data.
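A minimal sketch of the per-event refresh; the subscriber list and context shape are invented for illustration:

```typescript
// On each published event, attach a fresh, empty bag to every subscriber's
// context before resolving, so no data from a previous event can go stale.
type SubscriberContext = { loaders?: Record<string, unknown> };

const subscribers: { context: SubscriberContext }[] = [
  { context: {} },
  { context: {} },
];

function publish(): void {
  for (const subscriber of subscribers) {
    subscriber.context.loaders = {}; // new bag per subscriber per event
  }
}

publish();
console.log(subscribers[0].context.loaders !== subscribers[1].context.loaders); // → true
```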

The simplest way to free the memory is to disable the cache option on the data loaders. Queries are still batched, but as soon as the data is handed to the resolvers, it is dropped from the data loader. The main downside is a potential loss of performance: if the same data is requested again within the lifetime of the data loader instance, it has to be refetched.

Another option is available if the server library provides a hook that fires when data is pushed to the subscriber. That hook can be used to remove or clear the bag of loaders. This is the best option because it clears the bag exactly when it is no longer needed.
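If such a hook exists, the cleanup is a one-liner; here the "pushed" event and context shape are invented to illustrate the idea, not taken from a real library:

```typescript
import { EventEmitter } from "node:events";

// Hypothetical hook-based cleanup: an event fires after a payload has been
// pushed to the subscriber, and the handler drops that subscriber's bag so
// it can be garbage collected.
type SubscriberContext = { loaders?: Record<string, unknown> };

const hooks = new EventEmitter();
hooks.on("pushed", (context: SubscriberContext) => {
  delete context.loaders; // freed exactly when it is no longer needed
});

const context: SubscriberContext = { loaders: { authorById: {} } };
// The server library would emit this after sending the resolved event:
hooks.emit("pushed", context);
console.log(context.loaders); // → undefined
```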

The last option is the TTL approach on the bag of loaders. Again, striking the right balance with the TTL is key.

Data loaders for a single event and a single subscriber

Lazy instantiation

Creating every data loader on each request/event wastes memory, and the waste grows with the number of loaders, because not all of them are needed on every request. Instead, loaders should be instantiated lazily, only when needed. The bag of loaders starts empty; as resolvers request loaders, they are created and saved into the bag, each keyed by its own name in a hash map. By the end of the request, the bag contains only the loaders that were actually needed. When the response is sent, all of them are garbage collected.
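A lazy bag can be sketched with a factory map and memoization; the factory names are illustrative:

```typescript
// Lazy instantiation: the bag starts empty and each loader is built on first
// access, then memoized under its name in a hash map.
type Loader = { load: (id: number) => Promise<unknown> };

const factories: Record<string, () => Loader> = {
  authorById: () => ({ load: async (id) => ({ id }) }),
  commentCountByPostId: () => ({ load: async () => 0 }),
  tagsByPostId: () => ({ load: async () => [] as string[] }),
};

function createLazyBag() {
  const created = new Map<string, Loader>();
  return {
    get(name: string): Loader {
      let loader = created.get(name);
      if (!loader) {
        loader = factories[name]();
        created.set(name, loader);
      }
      return loader;
    },
    // Only loaders that were actually requested end up in the bag.
    size: () => created.size,
  };
}

const bag = createLazyBag();
bag.get("authorById");
bag.get("authorById"); // memoized, not re-created
console.log(bag.size()); // → 1
```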

Final thoughts

For a proof of concept or a small app, the N+1 problem in your server may be a non-issue. But if the performance or the load of a GraphQL server is a concern at all, data loaders are the best approach for optimization. For subscriptions, both approaches listed here are valid, but the first one (single event, all subscribers) gives the maximum possible performance.