When I wrote Meatier a little over a year ago, I was pretty early in adopting GraphQL to replace my REST endpoints, my ORM, and my imperative client-side data requests. Shortly after, when I was hired to build an open-source realtime app from the ground up, I jumped at the chance to use GraphQL both on the server and as the basis for my own client cache. A year later, I learned that building a GraphQL app for production is a lot different than one of those demo apps you see on GitHub. Go figure. After all the mistakes I made, here are my lessons learned.
GraphQL is flexible. When starting out, you often wonder if you’re doing it the “right” way. Should GraphQL talk to an ORM like mongoose, or does it go straight to the database? Do you create a library of custom scalars for passwords, paragraphs, and titles, or do you use external validation? Do you favor many smaller mutations (e.g., updateName, updateTitle) or larger, more powerful ones (e.g., updateUserDoc)? Should you use it for Auth? (Yes) How about as an intermediary for an external API? (Yes!) Can it do subscriptions yet? (Kinda). Let’s dig in.
Yawn fest. But seriously, keeping your GraphQL schema modular is critical. I keep a discrete folder for each DB entity (Post, Comment, etc.), and inside it a discrete file for every operation (PostQuery, PostMutation, PostSchema). Explaining folder structure in words sucks. Check it out here, or see the rough sketch below.
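As a sketch only (the root-level file and exact casing are my illustration, not necessarily how the repo lays it out):

graphql/
  models/
    Post/
      PostQuery.js
      PostMutation.js
      PostSchema.js
    Comment/
      CommentQuery.js
      CommentMutation.js
      CommentSchema.js
  rootSchema.js  // stitches the per-entity pieces into one GraphQLSchema

Now on to the interesting stuff…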
In production, you use a driver to connect to your database. All the demos import their DB driver at the top of the file. This is crap for one major reason: it isn’t lazy! When you start your server, your GraphQL endpoint resolves your GraphQL schema file, which in turn resolves your DB driver and starts that connection. By eagerly instantiating the DB connection, you increase the time it takes to start up your server. More importantly, your connection is now stateful, so any webpack magic that uses your schema (think creating a webpack bundle for server-side rendering) will need to manage that connection and end it when necessary. Imagine building a universal app that concurrently builds the client and server bundles & then maybe starts up the server. It’s like asking the last person out to close the door, but you can’t see or hear anyone. Spaghetti code becomes inevitable. To avoid the headache, I only establish the connection when necessary:
// getDBDriver.js
let driver;

export default function getDBDriver() {
  // startDriver() stands in for whatever connect call your DB driver exposes;
  // the closure memoizes it, so the connection only starts the first time it's needed
  if (!driver) driver = startDriver();
  return driver;
}
Now, instead of importing the driver, I import my wrapper & just call that in the resolve method:
// in userSchema.js
resolve(source, {id}) {
  // the connection starts here, on the first query, not at import time
  const db = getDBDriver();
  return db.get(id);
}
Now my imports don’t have side effects. Yay closures!
In every GraphQL demo ever, the resolve function does nothing but resolve. Must be nice. In production, you have attackers calling your GraphQL endpoints, trying to query documents that don’t belong to them. Even if someone owns the document, they may try to pass in values that shouldn’t be allowed (e.g., userCredits = $1,000,000). So how can you protect yourself while keeping your code readable? My answer is something I call the Auth/Validation/Resolution pattern. To describe it, let’s assume we have an app with a form that lets a person change their team name:
async resolve(source, {updatedTeam}, {authToken}) {
  const db = getDBDriver();

  // AUTH
  requireUserOnTeam(authToken, updatedTeam.id);

  // VALIDATION
  const validate = makeTeamSchema();
  const {errors, data: {id, name}} = validate(updatedTeam);
  handleSchemaErrors(errors);

  // RESOLUTION
  return await db.table('Team').get(id).update({name});
}
In 5 lines of code, I’ve established authentication and authorization. I’ve validated and normalized the arguments, and I’ve given back the result. Let’s break it down.
In an entire enterprise SaaS, I found that each mutation only requires 1 of about 5 unique auth checks. (Are they signed in? Are they editing something that belongs to them or their team? Do they have an active websocket connection?) These are inexpensive checks to make sure the user can do what they want to do. Ideally, they’re synchronous. To make them synchronous, I sometimes cheat & sneak extra info into the primary key. For example, if a user has a to-do item, and that to-do item can’t change users, I’ll make the id something like user123::xYz8yUo. When a user tries to update it, we can compare the to-do’s id with the id from their authToken/cookie. This sort of thing becomes critical when you have a realtime app and are editing the same document at sub-second intervals. I don’t care if you can hack an election, you still can’t change the primary key in a DB.
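As a minimal sketch of that check (the helper name and the `sub` claim on the authToken are my own illustration, not the app’s actual code):

// the owner's user id is baked into the to-do's primary key (user123::xYz8yUo),
// so authorization is just a synchronous string comparison against the authToken
const requireTodoOwner = (authToken, todoId) => {
  const [ownerId] = todoId.split('::');
  if (ownerId !== authToken.sub) {
    throw new Error('Unauthorized: that to-do belongs to someone else');
  }
};

// inside a resolve function:
// requireTodoOwner(authToken, updatedTodo.id);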
Now that we know the user is good, we need to see if the data is good. Sure, you probably already validated this on the client, but rule #1 of a secure web app is to never trust the client. To do this, I use a function that both validates and normalizes (e.g., String.trim()) the data. There are a bunch of popular libraries out there, like the bloated Joi and the async Yup, but I roll my own 100-LOC solution that plugs directly into redux-form so both client and server can use the same validation schema.
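To make the resolver above concrete, here’s a sketch of the shape such a schema implies (this is not the actual 100-LOC library, and `handleSchemaErrors` is just one reasonable implementation):

// a schema factory returns a validator that normalizes the input (trim, etc.)
// and collects errors, so the resolver can destructure {errors, data}
const makeTeamSchema = () => (updatedTeam) => {
  const errors = {};
  const name = typeof updatedTeam.name === 'string' ? updatedTeam.name.trim() : '';
  if (!name) {
    errors.name = 'Your team needs a name';
  } else if (name.length > 50) {
    errors.name = 'That name is a little too long';
  }
  return {errors, data: {id: updatedTeam.id, name}};
};

// throw a client-friendly error if validation failed
const handleSchemaErrors = (errors) => {
  if (Object.keys(errors).length > 0) {
    throw new Error(JSON.stringify({_error: 'Validation failed', ...errors}));
  }
};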
Why no GraphQL custom scalars? I thought that was the way to go, but I was wrong. Unfortunately, they only flirt with being useful: long on verbosity, short on power. For example, if I want to validate a new user’s email address, I’ll need a regex, but I’ll also need to make sure the address doesn’t already exist in my DB. If I have to do that in the resolve function anyway, I might as well do the regex in there, too (and return a client error message far prettier than the ugly mess GraphQL gives me).
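Roughly what that looks like in the resolver (the table name, the `filter` query, and the error strings are assumptions, just mirroring the DB wrapper style used above):

const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

// a custom scalar could run the regex, but it can't also ask the DB whether
// the address is taken; returns an error message, or undefined if usable
const validateNewEmail = async (email) => {
  if (!emailRegex.test(email)) {
    return 'That doesn\'t look like an email address';
  }
  const db = getDBDriver();
  const existingUsers = await db.table('User').filter({email}).count();
  if (existingUsers > 0) {
    return 'That email is already in use';
  }
};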
Some folks like using the GraphQL shorthand to write a big text blob schema. I don’t do that for the same reason I don’t write all my JavaScript in a text file & call eval() on it. It’s harder to write and I like the colorful error squiggles my IDE gives me. Sure, in the future an editor might detect GraphQL bits of code & lint them on the fly, but that day isn’t today.
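To make the contrast concrete (this is generic graphql-js, not code from the repo):

import {GraphQLObjectType, GraphQLNonNull, GraphQLID, GraphQLString} from 'graphql';

// the "text blob" shorthand: a string my editor can't lint
const typeDefs = `
  type Team {
    id: ID!
    name: String!
  }
`;

// the same type as plain JavaScript objects: colorful error squiggles included
const Team = new GraphQLObjectType({
  name: 'Team',
  description: 'A group of users working toward a shared goal',
  fields: () => ({
    id: {type: new GraphQLNonNull(GraphQLID), description: 'The team id'},
    name: {type: new GraphQLNonNull(GraphQLString), description: 'The editable team name'}
  })
});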
As for descriptions, there are 2 important things to include:
The age-old question: would you rather fight 1 horse-sized duck, or 100 duck-sized horses?
Honestly, I’d take the horses. I’m from the country. I’ve had to run from angry ducks.
It applies to GraphQL, too. Should you write 1 mutation to handle data from 100 different forms, or 100 different mutations for every different way a document might be edited? While the correct answer is always “it depends”, I like to default to larger, more powerful mutations because they let you iterate faster. If your form gets a new field, you add it to the front-end, to your GraphQL schema, and to your validation schema, and you’re good to go. You’ll know it’s time to break a mutation apart when you start seeing multiple massive conditional blocks in the resolve function and a growing number of arguments.
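A hypothetical illustration of that default (the input type and field names are mine, not the app’s):

import {GraphQLInputObjectType, GraphQLNonNull, GraphQLID, GraphQLString} from 'graphql';

// one powerful updateTeam mutation takes a partial team document; when the form
// grows a `slogan` field, this input type and the validation schema each gain
// one line, rather than minting a brand-new updateTeamSlogan mutation
const UpdateTeamInput = new GraphQLInputObjectType({
  name: 'UpdateTeamInput',
  fields: () => ({
    id: {type: new GraphQLNonNull(GraphQLID)},
    name: {type: GraphQLString},
    slogan: {type: GraphQLString} // the new form field; nothing else changes
  })
});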
That’s it! Your intermediary course in GraphQL is complete. If you want a deep dive into auth and error handling in GraphQL, you can always check out my GraphQL Field Guide to Auth. Am I full of crap? Let me know in the comments.