
These GraphQL Directives Are Overkill

by Stefan Avram (@wunderstef), February 24th, 2023

Too Long; Didn't Read

I was a big fan and advocate of GraphQL's `@defer` and `@stream` directives. Recently, I added support for TypeScript Operations to WunderGraph, including support for Subscriptions through Async Generators. After implementing this feature and playing around with it, I realized that incrementally loading data can be done in a much simpler way.


For a very long time, I was a big fan and advocate of GraphQL's @defer and @stream directives. In fact, I implemented them in my own GraphQL server implementation in Go almost three years ago.


At that time, I was using a stream of JSON-Patch operations to continuously update the client.


Recently, I've added support for TypeScript Operations to WunderGraph, including support for Subscriptions through Async Generators.


After implementing this feature and playing around with it, I realized that @defer and @stream are complete overkill and that the whole idea of incrementally loading data can be achieved in a much simpler way.


Before we can compare the two approaches, we need to understand how @defer and @stream work.

How the @defer and @stream Directives Work

I'm using the example from the DeferStream RFC.


query {
  person(id: "cGVvcGxlOjE=") {
    ...HomeWorldFragment @defer(label: "homeWorldDefer")
    name
    films @stream(initialCount: 2, label: "filmsStream") {
      title
    }
  }
}
fragment HomeWorldFragment on Person {
  homeworld {
    name
  }
}


We're loading the person with the ID cGVvcGxlOjE=.


For this person, we're loading only the name and the first two films synchronously.


The HomeWorldFragment is loaded asynchronously, and all films beyond the first two are loaded asynchronously.


From the example RFC, we get back the following responses:

{
  "data": {
    "person": {
      "name": "Luke Skywalker",
      "films": [
        { "title": "A New Hope" },
        { "title": "The Empire Strikes Back" }
      ]
    }
  },
  "hasNext": true
}


The first response is as expected; the only notable thing is the hasNext field, which indicates that there are more responses to come.


{
  "label": "homeWorldDefer",
  "path": ["person"],
  "data": {
    "homeworld": {
      "name": "Tatooine"
    }
  },
  "hasNext": true
}


The second response is the HomeWorldFragment, which is loaded asynchronously.


The label field is used to identify the response. The path field is used to identify the location in the response where the data should be inserted.


The hasNext field indicates that there are still more responses to come.

{
  "label": "filmsStream",
  "path": ["person", "films", 2],
  "data": {
    "title": "Return of the Jedi"
  },
  "hasNext": false
}


The third response contains the first film beyond the initial two. Again, the path field identifies the location in the response where the data should be inserted, and the hasNext field indicates that there are no more responses to come.
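
To make the role of the path and hasNext fields concrete, here is a minimal, hand-rolled sketch of how a client could merge these incremental payloads into the result it has received so far. Real clients (Relay, Apollo, urql) ship their own, more complete implementations; the IncrementalPayload shape below simply mirrors the responses above.

type IncrementalPayload = {
  label?: string;
  path: (string | number)[];
  data: Record<string, unknown>;
  hasNext: boolean;
};

function mergePayload(result: { data: Record<string, unknown> }, payload: IncrementalPayload) {
  // walk to the parent of the location the path points at
  let parent: any = result.data;
  for (const segment of payload.path.slice(0, -1)) {
    parent = parent[segment];
  }
  const last = payload.path[payload.path.length - 1];
  if (typeof last === 'number') {
    // a streamed list item: place it at the given index
    parent[last] = payload.data;
  } else {
    // a deferred fragment: merge its fields into the object at the path
    Object.assign(parent[last], payload.data);
  }
  // payload.hasNext tells the client whether to keep waiting for more payloads
  // or to consider the response complete
}

Applying the two payloads above to the initial response produces the same document a plain, non-incremental query would have returned.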


According to the RFC, this approach has the following benefits:


  • Make GraphQL a great choice for applications that demand responsiveness.


  • Enable interoperability between different GraphQL clients and servers without restricting implementation.


  • Enable a strong tooling ecosystem (including GraphiQL).


  • Provide concrete guidance to implementers.


  • Provide guidance to developers evaluating whether to adopt incremental delivery.


At the same time, the RFC also lists the following caveats: “Type Generation - Supporting @defer can add complexity to type-generating clients. Separate types will need to be generated for the different deferred fragments.


These clients will need to use the label field to determine which fragments have been fulfilled to ensure the application is using the correct types.


Object Consistency - The GraphQL spec does not currently support object identification or consistency. It is currently possible for the same object to be returned in multiple places in a query.


If that object changes while the resolvers are running, the query could return inconsistent results. @defer/@stream does not increase the likelihood of this, as the server still attempts to resolve everything as fast as it can. The only difference is some results can be returned to the client sooner. This proposal does not attempt to address this issue.


Can @defer/@stream increase risk of a denial of service attack?


This is currently a risk in GraphQL servers that do not implement any kind of query limiting as arbitrarily complex queries can be sent.


Adding @defer may add some overhead as the server will now send parts of the query earlier than it would have without @defer, but it does not allow for any additional resolving that was not previously possible.”

Why the @defer and @stream Directives Are Overkill

First of all, I want to say that I love the idea of incrementally loading data, and I have a lot of respect for the people who came up with this idea and participated in this or any other RFC that has something to do with this topic.


So please, don't take this as a personal attack on anyone. I'm not criticizing your work; I just want to point out that there is a much simpler way to achieve the same goal.


What follows is my personal opinion, based on years of experience working with and implementing GraphQL, as well as building WunderGraph. To start with, this RFC has been in the making for many years without getting merged into the GraphQL spec.


As many library and framework authors were quite excited about this feature, they implemented it in their own way. So there are now many implementations of @defer and @stream out there.


This is a nightmare, because how is a client supposed to know which implementation a given server is using?

Additionally, the RFC is quite complex and requires a lot of work to implement. All client and server implementations have to implement handling of @defer and @stream directives.


It's a lot of work to implement this feature in a reasonable way.


Let me give you an example from GraphQL Yoga. Again, not criticizing the work of the GraphQL Yoga team, just trying to point out the complexity of this feature.


import { createYoga, createSchema, Repeater } from 'graphql-yoga'
import { createServer } from 'node:http'
import { useDeferStream } from '@graphql-yoga/plugin-defer-stream'
 
const yoga = createYoga({
  schema: createSchema({
    typeDefs: /* GraphQL */ `
      type Query {
        alphabet: [String!]!
      }
    `,
    resolvers: {
      Query: {
        alphabet: () =>
          new Repeater<string>(async (push, stop) => {
            const values = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
            const publish = () => {
              const value = values.shift()
              console.log('publish', value)
 
              if (value) {
                push(value)
              }
 
              if (values.length === 0) {
                stop()
              }
            }
 
            let interval = setInterval(publish, 1000)
            publish()
 
            await stop.then(() => {
              console.log('cancel')
              clearInterval(interval)
            })
          })
      }
    }
  }),
  plugins: [useDeferStream()]
})
 
const server = createServer(yoga)
 
server.listen(4000, () => {
  console.info('Server is running on http://localhost:4000/graphql')
})


This server allows us to stream the alphabet. We set up a repeater that publishes a letter every second. Let's assume we actually have a data source that provides us with one letter every second.


If we go back to the RFC example, it allowed us to use the initialCount argument to specify how many items we want to receive in the first response.


First, this argument is missing in the Yoga implementation, but they could probably add it. But even if they added it, we'd still have the same problem.


How does the user of the API know how many items to request in the first response? If we don't have fine-grained control over the data flow, how can we implement this feature efficiently?


Imagine you're building a public GraphQL API, and someone asks for ridiculous initialCount values.


You now have to add extra logic to validate the initialCount argument and protect your server.
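
For illustration, the guard itself is small; the point is that every public API now has to own this kind of policy. Here is a sketch with made-up names and limits (nothing below is part of any real library):

// Hypothetical server-side guard for the initialCount argument of @stream.
const MAX_INITIAL_COUNT = 20; // arbitrary example limit

function clampInitialCount(requested: number | undefined): number {
  if (requested === undefined) {
    return 0;
  }
  if (!Number.isInteger(requested) || requested < 0) {
    throw new Error('initialCount must be a non-negative integer');
  }
  // silently cap ridiculous values instead of resolving thousands of items upfront
  return Math.min(requested, MAX_INITIAL_COUNT);
}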

What if a user applies the @defer directive to fields that are not supposed to be deferred?


E.g., when loading a user profile from the database, we will always load the user ID, name, and email address using a single database query.


If we now apply the @defer directive to the email address, what is supposed to happen?


Should we load the email address in a separate database query? Should we load the email address in the same database query but defer it? Or should we just ignore the @defer directive and return the email address in the first response?


What happens if an attacker intentionally sends queries with endless numbers of @defer directives?


Should we now rate limit the use of @defer directives?


Technically possible, but it just adds more complexity to the implementation.
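
For illustration, such a limit could be enforced with graphql-js before execution: parse the operation and count the incremental-delivery directives. The limit below is an arbitrary example value:

import { parse, visit, DocumentNode } from 'graphql';

const MAX_INCREMENTAL_DIRECTIVES = 10; // arbitrary example limit

function countIncrementalDirectives(document: DocumentNode): number {
  let count = 0;
  visit(document, {
    Directive(node) {
      if (node.name.value === 'defer' || node.name.value === 'stream') {
        count += 1;
      }
    },
  });
  return count;
}

function assertWithinLimit(query: string): void {
  const document = parse(query);
  if (countIncrementalDirectives(document) > MAX_INCREMENTAL_DIRECTIVES) {
    throw new Error('Too many @defer/@stream directives in this operation');
  }
}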


Keep in mind that every single implementation of @defer and @stream has to implement all of this logic. Last but not least, and this is what I'm most afraid of: we could just hide all of this complexity from the user and swallow the performance penalty inside the framework.


This would mean that resolvers would not be aware of the @defer and @stream directives.


However, this would lead to very inefficient implementations. If we load all the data from the database first, then apply the @defer and @stream directives, and only then return the data to the client, what's the point?


Let's put an end to this rant and talk about a much simpler solution.


What problem are we trying to solve here?

What Problem Are We Solving With the @defer and @stream Directives in GraphQL?

We need a blob of data in the user interface, but loading it all at once would be too slow. This can have multiple reasons:


  • The data is too big to load all at once (e.g., a list of 1000 items)
  • We need to load the data from multiple sources, one of which is slow

How Does WunderGraph Solve This Problem?

WunderGraph compiles GraphQL Operations into JSON-RPC requests. E.g., a GraphQL Subscription is turned into a stream of newline-delimited JSON objects.
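
Consuming such a stream does not require any special client: it's plain HTTP, so a few lines of fetch are enough. A minimal sketch (the URL shape matches the curl examples further down; error handling and reconnects are omitted):

async function consumeJsonStream(url: string, onUpdate: (value: unknown) => void) {
  const response = await fetch(url);
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    // each complete line is one JSON object
    let newline = buffer.indexOf('\n');
    while (newline !== -1) {
      const line = buffer.slice(0, newline).trim();
      buffer = buffer.slice(newline + 1);
      if (line) onUpdate(JSON.parse(line));
      newline = buffer.indexOf('\n');
    }
  }
}

// e.g. consumeJsonStream('http://localhost:9991/operations/user', console.log);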


Up until recently, you had to implement Subscriptions in your GraphQL server and plug this server into WunderGraph as a data source, which allowed you to expose the Subscription as a JSON-RPC stream.


As I've said above, we've recently added support to implement JSON-RPC Operations directly in WunderGraph using TypeScript, supporting Queries, Mutations, and Subscriptions.


Let's take a look at how we could implement the example from the RFC using a WunderGraph Subscription / JSON-RPC stream.


// src/operations/user.ts
import { createOperation, z } from '../../generated/wundergraph.factory';

// `db` is assumed to be your own database client (e.g. an ORM), imported from your data layer.

export default createOperation.subscription({
    input: z.object({
        id: z.string(),
    }),
    response: z.object({
        name: z.string(),
        films: z.array(z.object({
            title: z.string(),
        })),
        homePlanet: z.object({
            name: z.string(),
        }).nullable(),
    }),
    handler: async function* (ctx) {
        try {
            // the films and home planet queries depend on the user, so we load the user first
            const user = await db.user.findUnique({
                where: {
                    id: ctx.input.id,
                },
            });
            // kick off the remaining queries in parallel, but don't wait for all of them
            const filmsPromise = db.film.findMany({
                where: {
                    id: {
                        in: user.filmIds,
                    },
                },
                limit: 2,
            });
            const homePlanetPromise = db.planet.findUnique({
                where: {
                    id: user.homePlanetId,
                },
            });
            // we only wait for the first two films to be loaded,
            // then yield the first chunk of data to the client
            const films = await filmsPromise;
            yield {
                data: {
                    person: {
                        name: user.name,
                        homePlanet: null, // we don't have the home planet yet, so we set it to null
                        films,
                    },
                },
            };
            // Now we wait for the home planet to be loaded.
            // Once it's there, we combine it with the data we already have
            // and yield the second chunk of data to the client
            const homePlanet = await homePlanetPromise;
            yield {
                data: {
                    person: {
                        name: user.name,
                        homePlanet,
                        films,
                    },
                },
            };
            let skip = 2;
            const allFilms = Array.from(films);
            while (true) {
                // We now load the remaining films in chunks of 1
                const nextFilms = await db.film.findMany({
                    where: {
                        id: {
                            in: user.filmIds,
                        },
                    },
                    skip,
                    limit: 1,
                });
                if (nextFilms.length === 0) {
                    break;
                }
                // We append the additional films to the list of films we've already loaded
                // and yield the updated response to the client
                allFilms.push(...nextFilms);
                yield {
                    data: {
                        person: {
                            name: user.name,
                            homePlanet,
                            films: allFilms,
                        },
                    },
                };
                skip += 1;
            }
        } finally {
            console.log('Client disconnected');
        }
    },
});


The createOperation.subscription function allows us to implement a JSON-RPC stream using an async generator function. Unlike a regular async function, which resolves to a single value, an async generator can yield multiple values over time.


Most importantly, we have exact control over the data flow.

We're not limited to implementing a resolver that returns a stream of values.


As we can yield multiple times, we can easily achieve the same result as the @defer and @stream directives.
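
If you haven't worked with async generators before, here is a standalone illustration (deliberately unrelated to the WunderGraph API above) of the difference:

// An async function resolves exactly once, with the complete result.
async function loadOnce(): Promise<string[]> {
  return ['a', 'b', 'c'];
}

// An async generator can yield as often as it likes before it returns,
// which is exactly the shape an incrementally loaded response needs.
async function* loadIncrementally(): AsyncGenerator<string[]> {
  yield ['a'];
  await new Promise((resolve) => setTimeout(resolve, 1000));
  yield ['a', 'b'];
  yield ['a', 'b', 'c'];
}

(async () => {
  // the loop ends when the generator returns, i.e. when the stream is closed
  for await (const chunk of loadIncrementally()) {
    console.log(chunk);
  }
})();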


On the client side, you can use the generated type-safe client to subscribe to the stream.


When there are no more yield statements in the generator function (when it returns), the stream will be closed.


Here's what the client code would look like:

import { useSubscription, withWunderGraph } from '../../components/generated/nextjs';

const Functions = () => {
    const { data } = useSubscription({
        operationName: 'user',
        input: {
            id: '1',
        },
    });
    // nothing to render until the first chunk of the stream has arrived
    if (!data) {
        return <div>Loading...</div>;
    }
    return (
        <div>
            <h1>User</h1>
            <div>
                <div>name: {data.person.name}</div>
                {data.person.homePlanet && <div>homePlanet: {data.person.homePlanet.name}</div>}
                <div>films: {data.person.films.map((film) => film.title).join(', ')}</div>
            </div>
        </div>
    );
};

export default withWunderGraph(Functions);


The client code is quite simple, and it's type-safe.


This is because WunderGraph infers the response type from the operation definition.


We can define the response shape in the operation definition by creating a zod schema.


The generated client will use this schema to infer the response type. However, defining the response shape is optional.


If we don't define it, the response type is inferred from the values the operation handler yields, so it's up to the developer to either define the shape explicitly or let it be inferred.
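
For context, this is standard zod behavior: z.infer derives a TypeScript type from a schema, and the generated client builds on exactly that. A small illustration using the shape from the operation above:

import { z } from 'zod';

const response = z.object({
  name: z.string(),
  films: z.array(z.object({ title: z.string() })),
  homePlanet: z.object({ name: z.string() }).nullable(),
});

// Equivalent to:
// type Response = {
//   name: string;
//   films: { title: string }[];
//   homePlanet: { name: string } | null;
// };
type Response = z.infer<typeof response>;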

How WunderGraph Reduces the Amount of Data That Needs to Be Transferred

As you can see in the example above, we've successfully replicated the behavior of the @defer and @stream directives with a simple TypeScript function.


So, from a developer's perspective, the problem seems to be solved.


However, there's still one problem left: with every yield statement, we send the entire response object to the client. This means we're sending the same data over and over again, only to add a single field or append an item to an array.


GraphQL Subscriptions suffer from the same problem.


If you're using Subscriptions in your GraphQL server, and only a single field changes between two updates, you're still sending the entire response object to the client.


WunderGraph solves this problem elegantly by using a technique called JSON Patch. Instead of reinventing the wheel and building our own implementation of incrementally loading data, we're using the JSON Patch standard to describe the changes between two updates.


JSON Patch (RFC 6902) is a standard for describing changes to a JSON document.


It's a format for expressing a sequence of operations to apply to a JSON document. Here's an example:


[
  { "op": "replace", "path": "/homePlanet", "value": { "name": "Tatooine" } },
  { "op": "add", "path": "/films/-", "value": { "title": "A New Hope" } },
  { "op": "add", "path": "/films/-", "value": { "title": "The Empire Strikes Back" } },
  { "op": "add", "path": "/films/-", "value": { "title": "Return of the Jedi" } }
]


The JSON Patch above describes the following changes:


  • Replace the homePlanet field with a new object


  • Add three new films to the films array
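
Applying these operations on the client needs nothing GraphQL-specific. Here is a minimal sketch that handles only the two operation kinds used above; a real client would use a JSON Patch library that implements the full RFC:

type PatchOperation =
  | { op: 'replace'; path: string; value: unknown }
  | { op: 'add'; path: string; value: unknown };

function applyJsonPatch(document: any, patch: PatchOperation[]): void {
  for (const operation of patch) {
    // "/films/-" -> ["films", "-"]
    const segments = operation.path.split('/').slice(1);
    const last = segments.pop()!;
    let parent = document;
    for (const segment of segments) {
      parent = parent[segment];
    }
    if (operation.op === 'add' && last === '-') {
      // "-" means "append to the end of the array" in JSON Patch
      parent.push(operation.value);
    } else {
      parent[last] = operation.value;
    }
  }
}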


The WunderGraph Server calculates the list of JSON Patch operations for each update. If the JSON Patch is smaller than sending the entire response object, the WunderGraph Server will send the JSON Patch to the client.


In some cases, especially when the initial response is small, the JSON Patch will be larger than the entire response object. In this case, the WunderGraph Server will send the entire response object to the client instead of the JSON Patch.
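
Conceptually, the decision per update boils down to a size comparison, something along these lines (an assumption made for illustration, not WunderGraph's actual code):

// Sketch only: send whichever serialization is smaller for this update.
function chooseWirePayload(nextResponse: unknown, patch: unknown[]): string {
  const full = JSON.stringify(nextResponse);
  const asPatch = JSON.stringify(patch);
  return asPatch.length < full.length ? asPatch : full;
}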


All WunderGraph clients support handling JSON Patch operations out of the box, but it's opt-in, so a client can choose if they want to use JSON Patch or not.


Here's a curl command that shows how to subscribe to a Subscription Operation without using JSON Patch:


curl http://localhost:9991/operations/user


If you want to use JSON Patch, use the following command:


curl http://localhost:9991/operations/user?wg_json_patch


Additionally, you can combine this with SSE (Server-Sent Events) by using the following command:


curl "http://localhost:9991/operations/user?wg_json_patch&wg_sse"
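
Assuming the endpoint behaves like a standard Server-Sent Events stream when wg_sse is set (which is what the curl example suggests), a browser client could consume it with the built-in EventSource; the exact payload format of each event is WunderGraph-specific:

const source = new EventSource('http://localhost:9991/operations/user?wg_json_patch&wg_sse');

source.onmessage = (event) => {
  // each event carries either a full response object or a JSON Patch,
  // depending on which one was smaller for this update (see above)
  console.log(JSON.parse(event.data));
};

source.onerror = () => {
  source.close();
};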

Conclusion

I think that in 90% of the cases, you don't need @defer and @stream at all. For those rare cases where you want to micro-optimize a page, WunderGraph gives you all the tools you need to efficiently load data incrementally.


More importantly, we're building on top of existing standards. As I said earlier, I was a big fan of the @defer and @stream directives.


However, I think that there's a much simpler solution to the problem without having to introduce new directives to the GraphQL spec and without having to implement a new protocol.


Streaming JSON Patch is a simple solution that works beyond just GraphQL. That said, WunderGraph also applies the same technique to GraphQL Subscriptions and Live Queries.


So, if you're already using GraphQL Subscriptions in your application, you can easily switch to WunderGraph, and the generated client and server will transparently use JSON Patch to reduce the amount of data that needs to be transferred.


The @defer and @stream directives put control over the data flow in the hands of the API consumer, who does not necessarily have the knowledge required to apply them in the most efficient way.


Micro-optimizations should be avoided in most cases, but if you really need them, control over the data flow should be in the hands of the API provider.

Try It Out for Yourself

If you'd like to try out this feature for yourself, check out the WunderGraph Mono Repo on GitHub and try out one of the examples.

