A frontend developer should be able to define what data is needed for a given page, without having to worry about how the data actually gets into the frontend.
That's what a friend of mine recently said in a discussion. Why is there no simple way to achieve universal data fetching in NextJS? To answer this question, let's have a look at the challenges involved with universal data fetching in NextJS.
But first, what actually is universal data fetching?
It's going to cover a lot of ground and will get quite deep into the details.
If you're expecting a lightweight marketing blog, this article is not for you.
My definition of universal data fetching is that you can put a data-fetching hook anywhere in your application, and it would just work.
This data fetching hook should work everywhere in your application without any additional configuration.
Here's an example, probably the most complicated one, but I'm just too excited not to share it with you.
This is a "universal subscription" hook.
const PriceUpdates = () => {
  const data = useSubscription.PriceUpdates();
  return (
    <div>
      <h1>Universal Subscription</h1>
      <p>{JSON.stringify(data)}</p>
    </div>
  )
}
The "PriceUpdates" hook is generated by our framework, as we've defined a "PriceUpdates.graphql" file in our project. What's special about this hook? You're free to put the React Component anywhere in your application. By default, it will server-render the first item from the subscription. The server-rendered HTML will then be sent to the client, along with the data.
The client will re-hydrate the application and start a subscription itself. All of this is done without any additional configuration. It works everywhere in your application, hence the name, universal data fetching.
Define the data you need, by writing a GraphQL Operation, and the framework will take care of the rest. Keep in mind that we're not trying to hide the fact that network calls are being made.
What we're doing here is to give frontend developers back their productivity. You shouldn't be worrying about how the data is fetched, how to secure the API layer, what transport to use, etc... It should just work.
If you've been using NextJS for a while, you might be asking: what exactly is hard about data fetching?
In NextJS, you can simply define an endpoint in the "/api" directory, which can then be called by using "swr" or just "fetch".
It's correct that the "Hello, world!" example of fetching data from "/api" is really simple, but scaling an application beyond the first page can quickly overwhelm the developer. Let's look at the main challenges of data fetching in NextJS.
By default, the only place where you can use async functions to load data that is required for server-side-rendering, is at the root of each page. Here's an example from the NextJS documentation:
function Page({ data }) {
  // Render data...
}

// This gets called on every request
export async function getServerSideProps() {
  // Fetch data from external API
  const res = await fetch(`https://.../data`)
  const data = await res.json()

  // Pass data to the page via props
  return { props: { data } }
}

export default Page
Imagine a website with hundreds of pages and components. If you have to define all data dependencies at the root of each page, how do you know what data is really needed before rendering the component tree?
Depending on the data you've loaded for root components, some logic might decide to completely change the child components. I've talked to developers who have to maintain large NextJS applications.
They have clearly stated that fetching data in "getServerSideProps" doesn't scale well with a large number of pages and components.
Most applications have some sort of authentication mechanism. There might be some content that is publicly available, but what if you want to personalize a website?
There's going to be a need to render different content for different users. When you render user-specific content on the client only, have you noticed this ugly "flickering" effect once data comes in?
If you're only rendering the user-specific content on the client, you'll always get the effect that the page will re-render multiple times until it's ready. Ideally, our data-fetching hooks would be authentication-aware out of the box.
As we've seen in the example above using "getServerSideProps", we need to take additional actions to make our API layer type-safe. Wouldn't it be better if the data-fetching hooks were type-safe by default?
So far, I've never seen anyone who applied server-side-rendering in NextJS to subscriptions. But what if you want to server-render a stock price for SEO and performance reasons, but also want to have a client-side subscription to receive updates?
Surely, you could use a Query/GET request on the server, and then add a subscription on the client, but this adds a lot of complexity. There should be a simpler way!
Another question that comes up is what should happen if the user leaves and re-enters the window. Should subscriptions be stopped or continue to stream data?
Depending on the use case and kind of application, you might want to tweak this behaviour, depending on expected user experience and the kind of data you're fetching. Our data-fetching hooks should be able to handle this.
It's quite common that mutations will have side-effects on other data-fetching hooks. E.g. you could have a list of tasks. When you add a new task, you also want to update the list of tasks. Therefore, the data-fetching hooks need to be able to handle these kinds of situations.
Another common pattern is lazy loading. You might want to load data only under certain conditions, e.g. when the user scrolls to the bottom of the page or when the user clicks a button. In such cases, our data-fetching hooks should be able to defer executing the fetch until the data is actually needed.
Another important requirement for data-fetching hooks is to debounce the execution of a Query. This is to avoid unnecessary requests to the server. Imagine a situation where a user is typing a search term in a search box.
Should you really make a request to the server every time the user types a letter? We'll see how we can use debouncing to avoid this and make our data-fetching hooks more performant.
That brings us to 8 core problems that we need to solve. Let's now discuss 21 patterns and best practices that solve these problems.
If you want to follow along and experience these patterns yourself, you can clone this repository and play around. For each pattern, there's a dedicated page in the demo.
Once you've started the demo, you can open your browser and find the patterns overview at http://localhost:3000/patterns.
You'll notice that we're using GraphQL to define our data-fetching hooks, but the implementation really is not GraphQL specific. You can apply the same patterns with other API styles like REST, or even with a custom API.
The first pattern we'll look at is the client-side user. It's the foundation for building authentication-aware data-fetching hooks.
Here's the hook to fetch the current user:
useEffect(() => {
  if (disableFetchUserClientSide) {
    return;
  }
  const abort = new AbortController();
  if (user === null) {
    (async () => {
      try {
        const nextUser = await ctx.client.fetchUser(abort.signal);
        if (JSON.stringify(nextUser) === JSON.stringify(user)) {
          return;
        }
        setUser(nextUser);
      } catch (e) {
        // ignore fetch errors, e.g. when the request was aborted
      }
    })();
  }
  return () => {
    abort.abort();
  };
}, [disableFetchUserClientSide]);
Inside our page root, we'll use this hook to fetch the current user (if it was not fetched yet on the server).
It's important to always pass the abort controller to the client; otherwise we might run into memory leaks. The returned arrow function is called when the component containing the hook is unmounted.
You'll notice that we're using this pattern throughout our application to handle potential memory leaks properly. Let's now look into the implementation of "client.fetchUser".
public fetchUser = async (abortSignal?: AbortSignal, revalidate?: boolean): Promise<User<Role> | null> => {
  try {
    const revalidateTrailer = revalidate === undefined ? "" : "?revalidate=true";
    const response = await fetch(this.baseURL + "/" + this.applicationPath + "/auth/cookie/user" + revalidateTrailer, {
      headers: {
        ...this.extraHeaders,
        "Content-Type": "application/json",
        "WG-SDK-Version": this.sdkVersion,
      },
      method: "GET",
      credentials: "include",
      mode: "cors",
      signal: abortSignal,
    });
    if (response.status === 200) {
      return response.json();
    }
  } catch {
    // ignore network errors and fall through to returning null
  }
  return null;
};
You'll notice that we're not sending any client credentials, token, or anything else. We're implicitly sending the secure, encrypted, HTTP-only cookie that was set by the server, which our client has no access to.
For those who don't know, HTTP-only cookies are automatically attached to each request if you're on the same domain. If you're using HTTP/2, client and server can also apply header compression, which means that the cookie doesn't have to be sent in every request, as both client and server can negotiate a map of known header key-value pairs at the connection level.
The pattern that we're using behind the scenes to make authentication that simple is called the "Token Handler Pattern". The token handler pattern is the most secure way to handle authentication in modern JavaScript applications.
While very secure, it also allows us to stay agnostic to the identity provider. By applying the token handler pattern, we can easily switch between different identity providers.
That's because our "backend" is acting as an OpenID Connect Relying Party. What's a Relying Party you might ask? It's an application with an OpenID Connect client that outsources the authentication to a third party.
As we're speaking in the context of OpenID Connect, our "backend" is compatible with any service that implements the OpenID Connect protocol. This way, our backend can provide a seamless authentication experience, while developers can choose between different identity providers, like Keycloak, Auth0, Okta, Ping Identity, etc… What does the authentication flow look like from the user's perspective?
1. The user clicks login.
2. The frontend redirects the user to the backend (relying party).
3. The backend redirects the user to the identity provider.
4. The user authenticates at the identity provider.
5. If the authentication is successful, the identity provider redirects the user back to the backend.
6. The backend then exchanges the authorization code for an access and identity token.
7. The access and identity token are used to set a secure, encrypted, http only cookie on the client.
8. With the cookie set, the user is redirected back to the frontend.
From now on, when the client calls the fetchUser method, it will automatically send the cookie to the backend. This way, the frontend always has access to the user's information while logged in. If the user clicks logout, we'll call a function on the backend that will invalidate the cookie.
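To make the cookie part of this flow more concrete, here's a minimal sketch of what such a Set-Cookie value could look like. The cookie name, value, and the helper function itself are made up for illustration and are not WunderGraph's actual implementation; only the attributes (HttpOnly, Secure, SameSite, Path) are standard HTTP.

```typescript
// Hypothetical sketch: building a Set-Cookie header value after the
// token exchange. The cookie name ("session") is illustrative.
function buildSessionCookie(encryptedToken: string): string {
  return [
    `session=${encryptedToken}`,
    "HttpOnly",        // not readable from client-side JavaScript
    "Secure",          // only sent over HTTPS
    "SameSite=Strict", // not attached to cross-site requests
    "Path=/",
  ].join("; ");
}
```

Because the cookie is HttpOnly, the browser attaches it to requests automatically, but scripts on the page can never read it, which is what makes the token handler pattern robust against token theft via XSS.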
All this might be a lot to digest, so let's summarize the essential bits. First, you have to tell the backend what identity providers to work with so that it can act as a Relying Party.
Once this is done, you're able to initiate the authentication flow from the frontend, fetch the current user from the backend, and log out. If we wrap this "fetchUser" call into a useEffect hook which we place at the root of each page, we'll always know who the current user is.
However, there's a catch. If you open the demo and head over to the client-side-user page, you'll notice that there's a flickering effect after the page is loaded. That's because the fetchUser call is happening on the client. If you look at Chrome DevTools and open the preview of the page, you'll notice that the page is rendered with the user object set to null.
You can click the login button to start the login flow. Once complete, refresh the page, and you'll see the flickering effect. Now that you understand the mechanics behind the token handler pattern, let's have a look at how we can remove the flickering on the first page load.
If we want to get rid of the flickering, we have to load the user on the server side so that we can apply server-side rendering. At the same time, we have to somehow get the server-side rendered user to the client. If we miss that second step, the re-hydration of the client will fail, as the server-rendered HTML will differ from the first client-side render.
So, how do we get access to the user object on the server-side? Remember that all we've got is a cookie attached to a domain. Let's say our backend is running on api.example.com, and the frontend is running on www.example.com or example.com. If there's one important thing you should know about cookies, it's that you're allowed to set cookies on parent domains if you're on a subdomain.
This means that, once the authentication flow is complete, the backend should NOT set the cookie on the api.example.com domain. Instead, it should set the cookie on the example.com domain. By doing so, the cookie becomes visible to all subdomains of example.com, including www.example.com, api.example.com, and example.com itself.
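As a rough sketch, deriving the parent domain for the cookie could look like the helper below. The function is hypothetical and deliberately naive: it keeps only the last two labels and ignores the public suffix list, so a host like api.example.co.uk would be handled incorrectly by this sketch.

```typescript
// Hypothetical helper: given the backend host, derive the parent domain
// the auth cookie should be set on. Naive on purpose: a real
// implementation must respect the public suffix list (e.g. "co.uk").
function cookieDomain(host: string): string {
  const labels = host.split(".");
  return labels.length > 2 ? labels.slice(-2).join(".") : host;
}
```

With this, a backend on api.example.com sets the cookie on example.com, so the cookie is visible on every subdomain.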
By the way, this is an excellent pattern to implement single sign-on. Have your users log in once, and they are authenticated on all subdomains.
WunderGraph automatically sets cookies on the parent domain if the backend is on a subdomain, so you don't have to worry about this. Now, back to getting the user on the server side. In order to get the user on the server side, we have to implement some logic in the getInitialProps method of our pages.
WunderGraphPage.getInitialProps = async (ctx: NextPageContext) => {
  // ... omitted for brevity
  const cookieHeader = ctx.req?.headers.cookie;
  if (typeof cookieHeader === "string") {
    defaultContextProperties.client.setExtraHeaders({
      Cookie: cookieHeader,
    });
  }
  let ssrUser: User<Role> | null = null;
  if (options?.disableFetchUserServerSide !== true) {
    try {
      ssrUser = await defaultContextProperties.client.fetchUser();
    } catch (e) {
      // the user is simply not authenticated
    }
  }
  // ... omitted for brevity
  return {...pageProps, ssrCache, user: ssrUser};
};
The ctx object of the getInitialProps function contains the client request, including headers.
We can do a "magic trick" so that the "API client", which we create on the server-side, can act on behalf of the user. As both frontend and backend share the same parent domain, we've got access to the cookie that was set by the backend.
So, if we take the cookie header and set it as the Cookie header of the API client, the API client will be able to act in the context of the user, even on the server-side! We can now fetch the user on the server-side and pass the user object alongside the pageProps to the render function of the page.
Make sure to not miss this last step, otherwise the re-hydration of the client will fail. Alright, we've solved the problem of the flickering, at least when you hit refresh. But what if we've started on a different page and used client-side navigation to get to this page?
Open up the demo and try it out yourself. You'll see that the user object will be set to null if the user was not loaded on the other page. To solve this problem as well, we have to go one step further and apply the "universal user" pattern.
The universal user pattern is the combination of the two previous patterns.
If we're hitting the page for the first time, we load the user on the server-side, if possible, and render the page. On the client-side, we re-hydrate the page with the user object and don't re-fetch it; therefore, there's no flickering.
In the second scenario, we're using client-side navigation to get to our page. In this case, we check if the user is already loaded. If the user object is null, we'll try to fetch it.
Great, we've got the universal user pattern in place! But there's another problem that we might face. What happens if the user opens up a second tab or window and clicks the logout button?
Open the universal-user page in the demo in two tabs or windows and try it out yourself.
If you click logout in one tab, then head back to the other tab, you'll see that the user object is still there.
The "refetch user on window focus" pattern is a solution to this problem.
Luckily, we can use the window.addEventListener method to listen for the focus event. This way, we get notified whenever the user activates the tab or window.
Let's add a hook to our page to handle window events.
const windowHooks = (setIsWindowFocused: Dispatch<SetStateAction<"pristine" | "focused" | "blurred">>) => {
  useEffect(() => {
    const onFocus = () => {
      setIsWindowFocused("focused");
    };
    const onBlur = () => {
      setIsWindowFocused("blurred");
    };
    window.addEventListener('focus', onFocus);
    window.addEventListener('blur', onBlur);
    return () => {
      window.removeEventListener('focus', onFocus);
      window.removeEventListener('blur', onBlur);
    };
  }, []);
}
You'll notice that we're introducing three possible states for the "isWindowFocused" state: pristine, focused, and blurred. Why three states? Imagine if we had only two states, focused and blurred. In that case, we'd have to initialize the state with one of them, and every effect depending on it would fire on the first render, as if a "focus" event had occurred, even though the user never switched tabs. By introducing the third state (pristine) as the initial value, we can distinguish the first render from an actual focus change.
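Reduced to plain functions (no React involved), the idea can be sketched like this; the function names are illustrative:

```typescript
type WindowFocusState = "pristine" | "focused" | "blurred";

// Fold a sequence of browser events into the focus state. Before any
// event has fired, the state stays "pristine".
function reduceFocusEvents(events: Array<"focus" | "blur">): WindowFocusState {
  let state: WindowFocusState = "pristine";
  for (const event of events) {
    state = event === "focus" ? "focused" : "blurred";
  }
  return state;
}

// Only an actual focus event should trigger a re-fetch; "pristine"
// (nothing happened yet) and "blurred" are dismissed.
function shouldRefetchUser(state: WindowFocusState): boolean {
  return state === "focused";
}
```

With only two states, an initial value of "focused" would make shouldRefetchUser return true on the very first render; "pristine" avoids exactly that.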
Another important observation you can make is that we're removing the event listeners when the component unmounts. This is very important to avoid memory leaks.
Ok, we've introduced a global state for the window focus. Let's leverage this state to re-fetch the user on window focus by adding another hook:
useEffect(() => {
  if (disableFetchUserClientSide) {
    return;
  }
  if (disableFetchUserOnWindowFocus) {
    return;
  }
  if (isWindowFocused !== "focused") {
    return;
  }
  const abort = new AbortController();
  (async () => {
    try {
      const nextUser = await ctx.client.fetchUser(abort.signal);
      if (JSON.stringify(nextUser) === JSON.stringify(user)) {
        return;
      }
      setUser(nextUser);
    } catch (e) {
      // ignore fetch errors, e.g. when the request was aborted
    }
  })();
  return () => {
    abort.abort();
  };
}, [isWindowFocused, disableFetchUserClientSide, disableFetchUserOnWindowFocus]);
By adding the isWindowFocused state to the dependency list, this effect will trigger whenever the window focus changes. We dismiss the "pristine" and "blurred" states and only trigger a user fetch if the window is focused.
Additionally, we make sure that we only trigger a setState for the user if the user actually changed. Otherwise, we might trigger unnecessary re-renders or re-fetches.
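The equality check in the hook above is a simple serialize-and-compare. Here it is as a standalone sketch, assuming the user objects are small and come from the same JSON response shape (so the key order is stable):

```typescript
// Compare two user objects by their JSON representation. Cheap for small
// objects; assumes stable key order, which holds when both values were
// parsed from the same API response shape.
function usersEqual(a: unknown, b: unknown): boolean {
  return JSON.stringify(a) === JSON.stringify(b);
}
```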
Excellent! Our application is now able to handle authentication in various scenarios. That's a great foundation to move on to the actual data-fetching hooks.
The first data-fetching hook we'll look at is the client-side query.
You can open the demo page (http://localhost:3000/patterns/client-side-query) in your browser to get a feel for it.
const data = useQuery.CountryWeather({
  input: {
    code: "DE",
  },
});
So, what's behind useQuery.CountryWeather? Let's have a look!
function useQueryContextWrapper<Input, Data, Role>(wunderGraphContext: Context<WunderGraphContextProperties<Role>>, query: QueryProps, args?: InternalQueryArgsWithInput<Input>): {
  result: QueryResult<Data>;
} {
  const {client} = useContext(wunderGraphContext);
  const cacheKey = client.cacheKey(query, args);
  const [invalidate, setInvalidate] = useState<number>(0);
  const [lastCacheKey, setLastCacheKey] = useState<string>("");
  const [statefulArgs, setStatefulArgs] = useState<InternalQueryArgsWithInput<Input> | undefined>(args);
  const [queryResult, setQueryResult] = useState<QueryResult<Data> | undefined>({status: "none"});
  useEffect(() => {
    if (lastCacheKey === "") {
      setLastCacheKey(cacheKey);
      return;
    }
    if (lastCacheKey === cacheKey) {
      return;
    }
    setLastCacheKey(cacheKey);
    setStatefulArgs(args);
    setInvalidate(invalidate + 1);
  }, [cacheKey]);
  useEffect(() => {
    const abort = new AbortController();
    setQueryResult({status: "loading"});
    (async () => {
      const result = await client.query(query, {
        ...statefulArgs,
        abortSignal: abort.signal,
      });
      setQueryResult(result as QueryResult<Data>);
    })();
    return () => {
      abort.abort();
      setQueryResult({status: "cancelled"});
    };
  }, [invalidate]);
  return {
    result: queryResult as QueryResult<Data>,
  };
}
Let's explain what's happening here. First, we take the client that's being injected through the React.Context.
We then calculate a cache key for the query and the arguments. This cacheKey helps us to determine whether we need to re-fetch the data.
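A minimal sketch of how such a cache key could be derived; the actual client.cacheKey implementation may differ:

```typescript
// Hypothetical cache key: the operation name plus the serialized
// arguments. Same operation + same args => same key => no re-fetch.
function cacheKey(operationName: string, args?: { input?: unknown }): string {
  return operationName + ":" + JSON.stringify(args ?? {});
}
```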
The initial state of the operation is set to {status: "none"}. When the first fetch is triggered, the status is set to "loading". When the fetch is finished, the status is set to "success" or "error". If the component wrapping this hook is unmounted, the status is set to "cancelled". Other than that, nothing fancy is happening here. The fetch only happens when the useEffect is triggered.
This means that we're not able to execute the fetch on the server, as useEffect doesn't run during server-side rendering. If you look at the demo, you'll notice that there's the flickering again. This is because we're not server-rendering the component. Let's improve this!
In order to execute queries not just on the client but also on the server, we have to apply some changes to our hooks.
Let's first update the useQuery hook.
function useQueryContextWrapper<Input, Data, Role>(wunderGraphContext: Context<WunderGraphContextProperties<Role>>, query: QueryProps, args?: InternalQueryArgsWithInput<Input>): {
  result: QueryResult<Data>;
} {
  const {ssrCache, client, isWindowFocused, refetchMountedOperations, user} = useContext(wunderGraphContext);
  const isServer = typeof window === 'undefined';
  const ssrEnabled = args?.disableSSR !== true && args?.lazy !== true;
  const cacheKey = client.cacheKey(query, args);
  if (isServer) {
    if (ssrEnabled) {
      if (ssrCache[cacheKey]) {
        return {
          result: ssrCache[cacheKey] as QueryResult<Data>,
        };
      }
      const promise = client.query(query, args);
      ssrCache[cacheKey] = promise;
      throw promise;
    } else {
      ssrCache[cacheKey] = {
        status: "none",
      };
      return {
        result: ssrCache[cacheKey] as QueryResult<Data>,
      };
    }
  }
  const [invalidate, setInvalidate] = useState<number>(0);
  const [debounce, setDebounce] = useState<number>(0);
  const [statefulArgs, setStatefulArgs] = useState<InternalQueryArgsWithInput<Input> | undefined>(args);
  const [lastCacheKey, setLastCacheKey] = useState<string>("");
  const [queryResult, setQueryResult] = useState<QueryResult<Data> | undefined>(ssrCache[cacheKey] as QueryResult<Data> || {status: "none"});
  useEffect(() => {
    if (lastCacheKey === "") {
      setLastCacheKey(cacheKey);
      return;
    }
    if (lastCacheKey === cacheKey) {
      return;
    }
    setLastCacheKey(cacheKey);
    setStatefulArgs(args);
    if (args?.debounceMillis !== undefined) {
      setDebounce(prev => prev + 1);
      return;
    }
    setInvalidate(invalidate + 1);
  }, [cacheKey]);
  useEffect(() => {
    const abort = new AbortController();
    setQueryResult({status: "loading"});
    (async () => {
      const result = await client.query(query, {
        ...statefulArgs,
        abortSignal: abort.signal,
      });
      setQueryResult(result as QueryResult<Data>);
    })();
    return () => {
      abort.abort();
      setQueryResult({status: "cancelled"});
    };
  }, [invalidate]);
  return {
    result: queryResult as QueryResult<Data>,
  };
}
We've now updated the useQuery hook to check whether we're on the server or not. If we're on the server, we'll check if data was already resolved for the generated cache key. If the data was resolved, we'll return it. Otherwise, we'll use the client to execute the query using a Promise. But there's a problem. We're not allowed to execute asynchronous code while rendering on the server. So, in theory, we're not able to "wait" for the promise to resolve.
Instead, we have to use a trick. We need to "suspend" the rendering. We can do so by "throwing" the promise that we've just created. Imagine that we're rendering the enclosing component on the server. What we could do is wrap the rendering process of each component in a try/catch block. If one such component throws a promise, we can catch it, wait until the promise resolves, and then re-render the component.
Once the promise is resolved, we're able to populate the cache key with the result. This way, we can immediately return the data when we "try" to render the component for the second time. Using this method, we can move through the component tree and execute all queries that are enabled for server-side-rendering. You might be wondering how to implement this try/catch method. Luckily, we don't have to start from scratch. There's a library called react-ssr-prepass (https://github.com/FormidableLabs/react-ssr-prepass) that we can use to do this.
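Stripped of React, the "throw a promise" trick can be sketched with plain functions. Everything below is illustrative; react-ssr-prepass implements the real component-tree traversal:

```typescript
type SsrCache = { [key: string]: unknown };

// A "component": return a rendering of the cached data if it's resolved,
// otherwise kick off the fetch, remember the promise, and throw it to
// suspend rendering.
function renderComponent(cache: SsrCache, key: string, fetchData: () => Promise<unknown>): string {
  const cached = cache[key];
  if (cached !== undefined && typeof (cached as Promise<unknown>)?.then !== "function") {
    return `rendered with ${JSON.stringify(cached)}`;
  }
  const promise = fetchData();
  cache[key] = promise;
  throw promise;
}

// A minimal prepass: try to render, catch the thrown promise, await it,
// store the result in the cache, then render a second time.
async function prepass(cache: SsrCache, key: string, fetchData: () => Promise<unknown>): Promise<string> {
  try {
    return renderComponent(cache, key, fetchData);
  } catch (maybePromise) {
    cache[key] = await (maybePromise as Promise<unknown>);
    return renderComponent(cache, key, fetchData);
  }
}
```

The first render attempt throws; the prepass awaits the promise, populates the cache, and the second render attempt succeeds synchronously.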
Let's apply this to our getInitialProps function:
WithWunderGraph.getInitialProps = async (ctx: NextPageContext) => {
  const pageProps = (Page as NextPage).getInitialProps ? await (Page as NextPage).getInitialProps!(ctx as any) : {};
  const ssrCache: { [key: string]: any } = {};
  if (typeof window !== 'undefined') {
    // we're on the client
    // no need to do all the SSR stuff
    return {...pageProps, ssrCache};
  }
  const cookieHeader = ctx.req?.headers.cookie;
  if (typeof cookieHeader === "string") {
    defaultContextProperties.client.setExtraHeaders({
      Cookie: cookieHeader,
    });
  }
  let ssrUser: User<Role> | null = null;
  if (options?.disableFetchUserServerSide !== true) {
    try {
      ssrUser = await defaultContextProperties.client.fetchUser();
    } catch (e) {
      // the user is simply not authenticated
    }
  }
  const AppTree = ctx.AppTree;
  const App = createElement(wunderGraphContext.Provider, {
    value: {
      ...defaultContextProperties,
      user: ssrUser,
    },
  }, createElement(AppTree, {
    pageProps: {
      ...pageProps,
    },
    ssrCache,
    user: ssrUser,
  }));
  await ssrPrepass(App);
  const keys = Object.keys(ssrCache).filter(key => typeof ssrCache[key].then === 'function').map(key => ({
    key,
    value: ssrCache[key],
  })) as { key: string, value: Promise<any> }[];
  if (keys.length !== 0) {
    const promises = keys.map(key => key.value);
    const results = await Promise.all(promises);
    for (let i = 0; i < keys.length; i++) {
      const key = keys[i].key;
      ssrCache[key] = results[i];
    }
  }
  return {...pageProps, ssrCache, user: ssrUser};
};
The ctx object doesn't just contain the req object but also the AppTree object. Using the AppTree object, we can build the whole component tree and inject our Context Provider, the ssrCache object, and the user object.
We can then use the ssrPrepass function to traverse the component tree and execute all queries that are enabled for server-side-rendering. After doing so, we extract the results from all Promises and populate the ssrCache object. Finally, we return the pageProps object, the ssrCache object, and the user object.
Fantastic! We're now able to apply server-side-rendering to our useQuery hook! It's worth mentioning that we've completely decoupled server-side rendering from having to implement getServerSideProps in our Page component. This has a few effects that are important to discuss.
First, we've solved the problem that we have to declare our data dependencies in getServerSideProps. We're free to put our useQuery hooks anywhere in the component tree, and they will always be executed. On the other hand, this approach has the disadvantage that this page will not be statically optimized. Instead, the page will always be server-rendered, meaning that there needs to be a server running to serve the page.
Another approach would be to build a statically rendered page, which can be served entirely from a CDN. That said, we're assuming in this guide that your goal is to serve dynamic content that changes depending on the user. In this scenario, statically rendering the page won't be an option, as we don't have any user context when fetching the data.
It's great what we've accomplished so far. But what should happen if the user leaves the window for a while and comes back? Could the data that we've fetched in the past be outdated? If so, how can we deal with this situation? Onto the next pattern!
Luckily, we've already implemented a global context object to propagate the three different window focus states, pristine, blurred, and focused.
Let's leverage the "focused" state to trigger a re-fetch of the query. Remember that we were using the "invalidate" counter to trigger a re-fetch of the query. We can add a new effect to increase this counter whenever the window is focused.
useEffect(() => {
  if (!refetchOnWindowFocus) {
    return;
  }
  if (isWindowFocused !== "focused") {
    return;
  }
  setInvalidate(prev => prev + 1);
}, [refetchOnWindowFocus, isWindowFocused]);
That's it! We dismiss all events if refetchOnWindowFocus is set to false or the window is not focused. Otherwise, we increase the invalidate counter and trigger a re-fetch of the query.
If you're following along with the demo, have a look at the refetch-query-on-window-focus page.
The hook, including configuration, looks like this:
const data = useQuery.CountryWeather({
  input: {
    code: "DE",
  },
  disableSSR: true,
  refetchOnWindowFocus: true,
});
That was a quick one! Let's move on to the next pattern, lazy loading.
As discussed in the problem statement, some of our operations should be executed only after a specific event. Until then, the execution should be deferred.
Let's have a look at the lazy-query page.
const [args, setArgs] = useState<QueryArgsWithInput<CountryWeatherInput>>({
  input: {
    code: "DE",
  },
  lazy: true,
});
Setting lazy to true configures the hook to be "lazy". Now, let's look at the implementation:
useEffect(() => {
  if (lazy && invalidate === 0) {
    setQueryResult({
      status: "lazy",
    });
    return;
  }
  const abort = new AbortController();
  setQueryResult({status: "loading"});
  (async () => {
    const result = await client.query(query, {
      ...statefulArgs,
      abortSignal: abort.signal,
    });
    setQueryResult(result as QueryResult<Data>);
  })();
  return () => {
    abort.abort();
    setQueryResult({status: "cancelled"});
  };
}, [invalidate]);

const refetch = useCallback((args?: InternalQueryArgsWithInput<Input>) => {
  if (args !== undefined) {
    setStatefulArgs(args);
  }
  setInvalidate(prev => prev + 1);
}, []);
When this hook is executed for the first time, lazy will be set to true and invalidate will be set to 0. This means that the effect hook will return early and set the query result to "lazy".
A fetch is not executed in this scenario.
If we want to execute the query, we have to increase invalidate by 1. We can do so by calling refetch on the useQuery hook. That's it! Lazy loading is now implemented.
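The decision the effect makes can be reduced to a tiny pure function, useful for seeing the state machine at a glance (the function name is illustrative):

```typescript
// Initial status for a query: a lazy query with an untouched invalidate
// counter stays "lazy"; any refetch (invalidate > 0) or a non-lazy
// query goes straight to "loading".
function initialQueryStatus(lazy: boolean, invalidate: number): "lazy" | "loading" {
  return lazy && invalidate === 0 ? "lazy" : "loading";
}
```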
Let's move on to the next problem: Debouncing user inputs to not fetch the query too often.
Let's say the user wants to get the weather for a specific city. My hometown is "Frankfurt am Main", right in the middle of Germany.
That search term is 17 characters long. How often should we fetch the query while the user is typing? 17 times? Once? Maybe twice?
The answer will be somewhere in the middle, but it's definitely not 17 times. So, how can we implement this behavior? Let's have a look at the useQuery hook implementation.
useEffect(() => {
  if (debounce === 0) {
    return;
  }
  const cancel = setTimeout(() => {
    setInvalidate(prev => prev + 1);
  }, args?.debounceMillis || 0);
  return () => clearTimeout(cancel);
}, [debounce]);

useEffect(() => {
  if (lastCacheKey === "") {
    setLastCacheKey(cacheKey);
    return;
  }
  if (lastCacheKey === cacheKey) {
    return;
  }
  setLastCacheKey(cacheKey);
  setStatefulArgs(args);
  if (args?.debounceMillis !== undefined) {
    setDebounce(prev => prev + 1);
    return;
  }
  setInvalidate(invalidate + 1);
}, [cacheKey]);
Let's first have a look at the second useEffect, the one that has the cacheKey as a dependency. You can see that before increasing the invalidate counter, we check if the arguments of the operation contain a debounceMillis property.
If so, we don't immediately increase the invalidate counter. Instead, we increase the debounce counter.
Increasing the debounce counter will trigger the first useEffect, as the debounce counter is a dependency. If the debounce counter is 0, which is the initial value, we immediately return, as there is nothing to do. Otherwise, we start a timer using setTimeout.
Once the timeout is triggered, we increase the invalidate counter. What's special about the effect using setTimeout is that we're leveraging the return function of the effect hook to clear the timeout.
What this means is that if the user types faster than the debounce time, the timer is always cleared and the invalidate counter is not increased. Only when the full debounce time has passed, the invalidate counter is increased.
I often see developers use setTimeout but forget to handle the returned handle. Failing to clear the timeout can lead to memory leaks, as the enclosing React component might unmount before the timeout fires. If you'd like to play around, head over to the demo and try typing different search terms with various debounce times.
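To make the mechanics easier to follow, here's the same debounce-and-cancel pattern extracted into a small standalone function, stripped of React. This is an illustrative sketch, not part of the actual framework, and the names are made up:

```typescript
// Minimal sketch of the debounce-and-cancel logic from the hook.
// "trigger" plays the role of a cacheKey change (e.g. a keystroke),
// "cancel" plays the role of the effect's cleanup function.
function createDebouncer(debounceMillis: number, onFire: () => void) {
    let timer: ReturnType<typeof setTimeout> | undefined;
    return {
        trigger() {
            // cancel the pending fire before scheduling a new one,
            // so rapid triggers collapse into a single fire
            if (timer !== undefined) clearTimeout(timer);
            timer = setTimeout(onFire, debounceMillis);
        },
        cancel() {
            if (timer !== undefined) clearTimeout(timer);
        },
    };
}
```

Typing "Frankfurt am Main" quickly would call trigger() 17 times, but onFire runs only once, after the user pauses for the debounce interval.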
Great! We've got a nice solution to debounce user inputs.
Let's now look at operations that require the user to be authenticated. We'll start with a server-side protected Query.
Let's say we're rendering a dashboard that requires the user to be authenticated. The dashboard will also show user-specific data. How can we implement this?
Again, we have to modify the useQuery hook.
const {ssrCache, client, isWindowFocused, refetchMountedOperations, user} = useContext(wunderGraphContext);
const isServer = typeof window === 'undefined';
const ssrEnabled = args?.disableSSR !== true && args?.lazy !== true;
const cacheKey = client.cacheKey(query, args);
if (isServer) {
    if (query.requiresAuthentication && user === null) {
        ssrCache[cacheKey] = {
            status: "requires_authentication"
        };
        return {
            result: ssrCache[cacheKey] as QueryResult<Data>,
            refetch: () => {
            },
        };
    }
    if (ssrEnabled) {
        if (ssrCache[cacheKey]) {
            return {
                result: ssrCache[cacheKey] as QueryResult<Data>,
                refetch: () => Promise.resolve(ssrCache[cacheKey] as QueryResult<Data>),
            }
        }
        const promise = client.query(query, args);
        ssrCache[cacheKey] = promise;
        throw promise;
    } else {
        ssrCache[cacheKey] = {
            status: "none",
        };
        return {
            result: ssrCache[cacheKey] as QueryResult<Data>,
            refetch: () => ({}),
        }
    }
}
As we've discussed in pattern 2, Server-Side User, we've already implemented the logic to fetch the user object in getInitialProps and inject it into the context. We've also injected the user cookie into the client, which is itself part of the context. Together, these pieces let us implement the server-side protected query.
If we're on the server, we check if the query requires authentication. This is static information that is defined in the query metadata. If the user object is null, meaning that the user is not authenticated, we return a result with the status "requires_authentication".
Otherwise, we move forward and throw a promise or return the result from the cache. If you go to server-side protected query on the demo, you can play with this implementation and see how it behaves when you log in and out.
That's it, no magic. That wasn't too complicated, was it? Well, hooks aren't allowed on the server, which makes the logic a lot easier. Let's now look at what's required to implement the same logic on the client.
To implement the same logic for the client, we need to modify the useQuery hook once again.
useEffect(() => {
    if (query.requiresAuthentication && user === null) {
        setQueryResult({
            status: "requires_authentication",
        });
        return;
    }
    if (lazy && invalidate === 0) {
        setQueryResult({
            status: "lazy",
        });
        return;
    }
    const abort = new AbortController();
    if (queryResult?.status === "ok") {
        setQueryResult({...queryResult, refetching: true});
    } else {
        setQueryResult({status: "loading"});
    }
    (async () => {
        const result = await client.query(query, {
            ...statefulArgs,
            abortSignal: abort.signal,
        });
        setQueryResult(result as QueryResult<Data>);
    })();
    return () => {
        abort.abort();
        setQueryResult({status: "cancelled"});
    }
}, [invalidate, user]);
As you can see, we've now added the user object to the dependencies of the effect. If the query requires authentication but the user object is null, we set the query result to "requires_authentication" and return early; no fetch happens. If we pass this check, the query fires as usual. Making the user object a dependency of the fetch effect also has two nice side effects.
Let's say, a query requires the user to be authenticated, but they are currently not. The initial query result is "requires_authentication". If the user now logs in, the user object is updated through the context object.
As the user object is a dependency of the fetch effect, all queries are now fired again, and the query result is updated.
On the other hand, if a query requires the user to be authenticated, and the user just logged out, we'll automatically invalidate all queries and set the results to "requires_authentication". Excellent! We've now implemented the client-side protected query pattern. But that's not yet the ideal outcome.
If you're using server-side protected queries, client-side navigation is not handled properly. On the other hand, if we're only using client-side protected queries, we will always have the nasty flickering again. To solve these issues, we have to put both of these patterns together, which leads us to the universal-protected query pattern.
This pattern doesn't require any additional changes as we've already implemented all the logic. All we have to do is configure our page to activate the universal-protected query pattern.
Here's the code from the universal-protected query page:
const UniversalProtectedQuery = () => {
    const {user, login, logout} = useWunderGraph();
    const data = useQuery.ProtectedWeather({
        input: {
            city: "Berlin",
        },
    });
    return (
        <div>
            <h1>Universal Protected Query</h1>
            <p>{JSON.stringify(user)}</p>
            <p>{JSON.stringify(data)}</p>
            <button onClick={() => login(AuthProviders.github)}>Login</button>
            <button onClick={() => logout()}>Logout</button>
        </div>
    )
}

export default withWunderGraph(UniversalProtectedQuery);
Have a play with the demo and see how it behaves when you log in and out. Also try to refresh the page or use client-side navigation. What's cool about this pattern is how simple the actual implementation of the page is. The "ProtectedWeather" query hook abstracts away all the complexity of handling authentication, both client- and server-side.
Right, we've spent a lot of time on queries so far, what about mutations? Let's start with an unprotected mutation, one that doesn't require authentication. You'll see that mutation hooks are a lot easier to implement than the query hooks.
function useMutationContextWrapper<Role, Input = never, Data = never>(wunderGraphContext: Context<WunderGraphContextProperties<Role>>, mutation: MutationProps): {
    result: MutationResult<Data>;
    mutate: (args?: InternalMutationArgsWithInput<Input>) => Promise<MutationResult<Data>>;
} {
    const {client, user} = useContext(wunderGraphContext);
    const [result, setResult] = useState<MutationResult<Data>>(mutation.requiresAuthentication && user === null ? {status: "requires_authentication"} : {status: "none"});
    const mutate = useCallback(async (args?: InternalMutationArgsWithInput<Input>): Promise<MutationResult<Data>> => {
        setResult({status: "loading"});
        const result = await client.mutate(mutation, args);
        setResult(result as any);
        return result as any;
    }, []);
    return {
        result,
        mutate
    }
}
Mutations are not automatically triggered. This means we're not using useEffect to trigger the mutation. Instead, we're leveraging the useCallback hook to create a "mutate" function that can be called.
Once called, we set the state of the result to "loading" and then call the mutation. When the mutation is finished, we set the state of the result to the mutation result. This might be a success or a failure. Finally, we return both the result and the mutate function.
Have a look at the unprotected mutation page if you want to play with this pattern.
This was pretty straightforward.
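The state transitions described above can also be sketched without React. The following is an illustrative model of the mutate callback, where createMutation and the run parameter are made-up names standing in for the hook and the real client.mutate call:

```typescript
// Plain-TypeScript sketch of the mutate callback's state machine,
// with React and the real client stripped away.
type MutationResult<Data> =
    | { status: "none" }
    | { status: "loading" }
    | { status: "ok"; data: Data };

function createMutation<Data>(run: () => Promise<Data>) {
    let result: MutationResult<Data> = { status: "none" };
    const states: Array<MutationResult<Data>["status"]> = ["none"]; // recorded transitions
    const setResult = (next: MutationResult<Data>) => {
        result = next;
        states.push(next.status);
    };
    const mutate = async (): Promise<MutationResult<Data>> => {
        setResult({ status: "loading" });   // 1. show a spinner
        const data = await run();           // 2. call the client
        setResult({ status: "ok", data });  // 3. store the final result
        return result;
    };
    return { mutate, getResult: () => result, states };
}
```

Calling mutate once walks the result through "none", "loading", "ok", which is exactly the sequence the component observes through useState.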
Let's add some complexity by adding authentication.
function useMutationContextWrapper<Role, Input = never, Data = never>(wunderGraphContext: Context<WunderGraphContextProperties<Role>>, mutation: MutationProps): {
    result: MutationResult<Data>;
    mutate: (args?: InternalMutationArgsWithInput<Input>) => Promise<MutationResult<Data>>;
} {
    const {client, user} = useContext(wunderGraphContext);
    const [result, setResult] = useState<MutationResult<Data>>(mutation.requiresAuthentication && user === null ? {status: "requires_authentication"} : {status: "none"});
    const mutate = useCallback(async (args?: InternalMutationArgsWithInput<Input>): Promise<MutationResult<Data>> => {
        if (mutation.requiresAuthentication && user === null) {
            return {status: "requires_authentication"}
        }
        setResult({status: "loading"});
        const result = await client.mutate(mutation, args);
        setResult(result as any);
        return result as any;
    }, [user]);
    useEffect(() => {
        if (!mutation.requiresAuthentication) {
            return
        }
        if (user === null) {
            if (result.status !== "requires_authentication") {
                setResult({status: "requires_authentication"});
            }
            return;
        }
        if (result.status !== "none") {
            setResult({status: "none"});
        }
    }, [user]);
    return {
        result,
        mutate
    }
}
Similarly to the protected query pattern, we're injecting the user object from the context into the callback. If the mutation requires authentication, we check if the user is null. If the user is null, we set the result to "requires_authentication" and return early.
Additionally, we add an effect to check if the user is null. If the user is null, we set the result to "requires_authentication". We've done this so that mutations turn automatically into the "requires_authentication" or "none" state, depending on whether the user is authenticated or not.
Otherwise, you'd first have to call the mutation just to find out that calling it isn't possible. I think it makes for a better developer experience when it's clear upfront whether the mutation is possible or not.
Alright, protected mutations are now implemented. You might be wondering why there's no section on server-side mutations, protected or not.
That's because mutations are always triggered by user interaction. So, there's no need for us to implement anything on the server.
That said, there's one problem left with mutations, side effects! What happens if there's a dependency between a list of tasks and a mutation that changes the tasks? Let's make it happen!
For this to work, we need to change both the mutation callback and the query hook. Let's start with the mutation callback.
const {client, setRefetchMountedOperations, user} = useContext(wunderGraphContext);
const mutate = useCallback(async (args?: InternalMutationArgsWithInput<Input>): Promise<MutationResult<Data>> => {
    if (mutation.requiresAuthentication && user === null) {
        return {status: "requires_authentication"}
    }
    setResult({status: "loading"});
    const result = await client.mutate(mutation, args);
    setResult(result as any);
    if (result.status === "ok" && args?.refetchMountedOperationsOnSuccess === true) {
        setRefetchMountedOperations(prev => prev + 1);
    }
    return result as any;
}, [user]);
Our goal is to invalidate all currently mounted queries when a mutation is successful. We can do so by introducing yet another global state object which is stored and propagated through the React context.
We call this state object "refetchMountedOperations", a simple counter. Whenever a mutation called with "refetchMountedOperationsOnSuccess" set to true succeeds, we increment the counter. That's enough to invalidate all currently mounted queries.
The second step is to change the query hook.
const {ssrCache, client, isWindowFocused, refetchMountedOperations, user} = useContext(wunderGraphContext);
useEffect(() => {
    if (queryResult?.status === "lazy" || queryResult?.status === "none") {
        return;
    }
    setInvalidate(prev => prev + 1);
}, [refetchMountedOperations]);
You should be familiar with the "invalidate" counter already. We're now adding another effect that reacts to increments of the "refetchMountedOperations" counter injected from the context. You might be asking why we return early if the status is "lazy" or "none".
In case of "lazy", we know that this query has not been executed yet, and the developer's intention is to execute it only when manually triggered. So we skip lazy queries and wait until they're triggered manually. The same rule applies in case of "none".
This could happen, e.g. if a query is only server-side-rendered, but we've navigated to the current page via client-side navigation. In such a case, there's nothing we could "invalidate", as the query was not yet executed. We also don't want to accidentally trigger queries that were not yet executed via a mutation side effect.
Want to experience this in action? Head over to the Refetch Mounted Operations on Mutation Success page. Cool! We're done with queries and mutations. Next, we're going to look at implementing hooks for subscriptions.
To implement subscriptions, we have to create a new dedicated hook:
function useSubscriptionContextWrapper<Input, Data, Role>(wunderGraphContext: Context<WunderGraphContextProperties<Role>>, subscription: SubscriptionProps, args?: InternalSubscriptionArgsWithInput<Input>): {
    result: SubscriptionResult<Data>;
} {
    const {ssrCache, client} = useContext(wunderGraphContext);
    const cacheKey = client.cacheKey(subscription, args);
    const [invalidate, setInvalidate] = useState<number>(0);
    const [subscriptionResult, setSubscriptionResult] = useState<SubscriptionResult<Data> | undefined>(ssrCache[cacheKey] as SubscriptionResult<Data> || {status: "none"});
    useEffect(() => {
        if (subscriptionResult?.status === "ok") {
            setSubscriptionResult({...subscriptionResult, streamState: "restarting"});
        } else {
            setSubscriptionResult({status: "loading"});
        }
        const abort = new AbortController();
        client.subscribe(subscription, (response: SubscriptionResult<Data>) => {
            setSubscriptionResult(response as any);
        }, {
            ...args,
            abortSignal: abort.signal
        });
        return () => {
            abort.abort();
        }
    }, [invalidate]);
    return {
        result: subscriptionResult as SubscriptionResult<Data>
    }
}
The implementation of this hook is similar to the query hook. It's automatically triggered when the enclosing component mounts, so we're using the "useEffect" hook again. It's important to pass an abort signal to the client to ensure that the subscription is aborted when the component unmounts.
Additionally, we want to cancel and re-start the subscription when the invalidate counter, similar to the query hook, is incremented. We've omitted authentication for brevity at this point, but you can assume that it's very similar to the query hook.
Want to play with the example? Head over to the Client-Side Subscription page.
One thing to note, though, is that subscriptions behave differently from queries. Subscriptions are a stream of data that is continuously updated. This means that we have to think about how long we want to keep the subscription open. Should it stay open forever?
Or could there be the case where we want to stop and resume the subscription? One such case is when the user blurs the window, meaning that they're not actively using the application anymore.
In order to stop the subscription when the user blurs the window, we need to extend the subscription hook:
function useSubscriptionContextWrapper<Input, Data, Role>(wunderGraphContext: Context<WunderGraphContextProperties<Role>>, subscription: SubscriptionProps, args?: InternalSubscriptionArgsWithInput<Input>): {
    result: SubscriptionResult<Data>;
} {
    const {ssrCache, client, isWindowFocused, refetchMountedOperations, user} = useContext(wunderGraphContext);
    const isServer = typeof window === 'undefined';
    const ssrEnabled = args?.disableSSR !== true;
    const cacheKey = client.cacheKey(subscription, args);
    const [stop, setStop] = useState(false);
    const [invalidate, setInvalidate] = useState<number>(0);
    const [stopOnWindowBlur] = useState(args?.stopOnWindowBlur === true);
    const [subscriptionResult, setSubscriptionResult] = useState<SubscriptionResult<Data> | undefined>(ssrCache[cacheKey] as SubscriptionResult<Data> || {status: "none"});
    useEffect(() => {
        if (stop) {
            if (subscriptionResult?.status === "ok") {
                setSubscriptionResult({...subscriptionResult, streamState: "stopped"});
            } else {
                setSubscriptionResult({status: "none"});
            }
            return;
        }
        if (subscriptionResult?.status === "ok") {
            setSubscriptionResult({...subscriptionResult, streamState: "restarting"});
        } else {
            setSubscriptionResult({status: "loading"});
        }
        const abort = new AbortController();
        client.subscribe(subscription, (response: SubscriptionResult<Data>) => {
            setSubscriptionResult(response as any);
        }, {
            ...args,
            abortSignal: abort.signal
        });
        return () => {
            abort.abort();
        }
    }, [stop, refetchMountedOperations, invalidate, user]);
    useEffect(() => {
        if (!stopOnWindowBlur) {
            return
        }
        if (isWindowFocused === "focused") {
            setStop(false);
        }
        if (isWindowFocused === "blurred") {
            setStop(true);
        }
    }, [stopOnWindowBlur, isWindowFocused]);
    return {
        result: subscriptionResult as SubscriptionResult<Data>
    }
}
For this to work, we introduce a new stateful variable called "stop". Its default is false, but when the user blurs the window, we set it to true. When they re-focus the window, we set it back to false. This behavior is opt-in: it only applies when the developer sets the "stopOnWindowBlur" flag to true in the subscription's "args" object.
Additionally, we have to add the stop variable to the subscription effect's dependencies. That's it! It's quite handy that we've handled the window events globally; it makes all the other hooks a lot easier to implement.
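As a rough sketch of that global handling: the context provider can register focus and blur listeners once and expose the resulting state to all hooks. The function below is illustrative and not taken from the real implementation; the event target is passed in explicitly to keep the sketch testable:

```typescript
// Illustrative sketch of the provider's global window-focus tracking.
type WindowFocusState = "pristine" | "focused" | "blurred";

type FocusTarget = {
    addEventListener: (type: string, listener: () => void) => void;
    removeEventListener: (type: string, listener: () => void) => void;
};

function observeWindowFocus(target: FocusTarget, onChange: (state: WindowFocusState) => void): () => void {
    const onFocus = () => onChange("focused");
    const onBlur = () => onChange("blurred");
    target.addEventListener("focus", onFocus);
    target.addEventListener("blur", onBlur);
    // return a cleanup function, just like a useEffect would
    return () => {
        target.removeEventListener("focus", onFocus);
        target.removeEventListener("blur", onBlur);
    };
}
```

In the provider, this would be wired up once in a useEffect with window as the target, and the latest state stored in the context as isWindowFocused, so every hook can react to it without registering its own listeners.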
The best way to experience the implementation is to open the Client-Side Subscription page (http://localhost:3000/patterns/client-side-subscription) and carefully watch the network tab in the Chrome DevTools (or the equivalent in your browser).
Coming back to one of the problems we've described initially, we still have to give an answer to the question of how we can implement server-side rendering for subscriptions, making the subscriptions hook "universal".
You might be thinking that server-side rendering is not possible for subscriptions. I mean, how should you server-render a stream of data?
If you're a regular reader of this blog, you might be aware of our Subscription Implementation. As we've described in another blog post (/blog/deprecate_graphql_subscriptions_over_websockets), we've implemented GraphQL subscriptions in a way that is compatible with both the EventSource (SSE) and the Fetch API.
We've also added one special flag to the implementation. The client can set the query parameter "wg_subscribe_once" to true. What this means is that a subscription, with this flag set, is essentially a query.
Here's the implementation of the client to fetch a query:
const params = this.queryString({
    wg_variables: args?.input,
    wg_api_hash: this.applicationHash,
    wg_subscribe_once: args?.subscribeOnce,
});
const headers: Headers = {
    ...this.extraHeaders,
    Accept: "application/json",
    "WG-SDK-Version": this.sdkVersion,
};
const defaultOrCustomFetch = this.customFetch || globalThis.fetch;
const url = this.baseURL + "/" + this.applicationPath + "/operations/" + query.operationName + params;
const response = await defaultOrCustomFetch(url, {
    headers,
    method: 'GET',
    credentials: "include",
    mode: "cors",
});
We take the variables, a hash of the configuration, and the subscribeOnce flag and encode them into the query string.
If subscribe once is set, it's clear to the server that we only want the first result of the subscription.
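The post doesn't show the queryString helper itself, so here's a minimal sketch of what it could look like. The assumptions are mine: undefined values are skipped, object values like wg_variables are JSON-encoded, and the result is prefixed with "?" so it can be appended to the URL directly:

```typescript
// Hypothetical sketch of the client's queryString helper.
function queryString(params: Record<string, unknown>): string {
    const search = new URLSearchParams();
    for (const [key, value] of Object.entries(params)) {
        if (value === undefined) continue; // drop unset flags like wg_subscribe_once
        search.set(key, typeof value === "object" ? JSON.stringify(value) : String(value));
    }
    const encoded = search.toString();
    return encoded === "" ? "" : "?" + encoded;
}
```

With this shape, queryString({ wg_subscribe_once: true }) yields "?wg_subscribe_once=true", and an empty params object yields an empty string, so the URL concatenation above works in both cases.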
To give you the full picture, let's also look at the implementation for client-side subscriptions:
private subscribeWithSSE = <S extends SubscriptionProps, Input, Data>(subscription: S, cb: (response: SubscriptionResult<Data>) => void, args?: InternalSubscriptionArgs) => {
    (async () => {
        try {
            const params = this.queryString({
                wg_variables: args?.input,
                wg_live: subscription.isLiveQuery ? true : undefined,
                wg_sse: true,
                wg_sdk_version: this.sdkVersion,
            });
            const url = this.baseURL + "/" + this.applicationPath + "/operations/" + subscription.operationName + params;
            const eventSource = new EventSource(url, {
                withCredentials: true,
            });
            eventSource.addEventListener('message', ev => {
                const responseJSON = JSON.parse(ev.data);
                // omitted for brevity
                if (responseJSON.data) {
                    cb({
                        status: "ok",
                        streamState: "streaming",
                        data: responseJSON.data,
                    });
                }
            });
            if (args?.abortSignal) {
                args.abortSignal.addEventListener("abort", () => eventSource.close());
            }
        } catch (e: any) {
            // omitted for brevity
        }
    })();
};
The implementation of the subscription client looks similar to the query client, except that we use the EventSource API with a callback. If EventSource is not available, we fall back to the Fetch API, but I'll keep the implementation out of the blog post as it doesn't add much extra value.
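The Fetch-API fallback is omitted from the post, but its general shape is worth sketching: read the response body stream, split it into newline-delimited JSON messages, and cancel the reader when the abort signal fires. This is an illustrative sketch with made-up names, not the real client code, and it assumes the server frames messages with newlines:

```typescript
// Sketch of a fetch-based fallback: consume a streaming response body
// as newline-delimited JSON and forward each message to the callback.
async function consumeJSONStream<Data>(
    body: ReadableStream<Uint8Array>,
    cb: (data: Data) => void,
    abortSignal?: AbortSignal,
): Promise<void> {
    const reader = body.getReader();
    // aborting cancels the read loop, mirroring eventSource.close()
    abortSignal?.addEventListener("abort", () => { reader.cancel(); });
    const decoder = new TextDecoder();
    let buffer = "";
    while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        buffer += decoder.decode(value, { stream: true });
        // each complete line is one JSON-encoded message;
        // partial lines stay in the buffer until the next chunk arrives
        let newline: number;
        while ((newline = buffer.indexOf("\n")) !== -1) {
            const line = buffer.slice(0, newline).trim();
            buffer = buffer.slice(newline + 1);
            if (line !== "") cb(JSON.parse(line));
        }
    }
}
```

In the real client this would be fed with the body of a fetch response; the important property is the same as with EventSource: cancellation is wired to the abort signal, so unmounting the component tears the stream down.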
The only important thing you should take away from this is that we add a listener to the abort signal. If the enclosing component unmounts or invalidates, it will trigger the abort event, which will close the EventSource.
Keep in mind, if we're doing asynchronous work of any kind, we always need to make sure that we handle cancellation properly, otherwise we might end up with a memory leak. OK, you're now aware of the implementation of the subscription client. Let's wrap the client with easy-to-use subscription hooks that can be used both on the client and on the server.
const {ssrCache, client, isWindowFocused, refetchMountedOperations, user} = useContext(wunderGraphContext);
const isServer = typeof window === 'undefined';
const ssrEnabled = args?.disableSSR !== true;
const cacheKey = client.cacheKey(subscription, args);
if (isServer) {
    if (ssrEnabled) {
        if (ssrCache[cacheKey]) {
            return {
                result: ssrCache[cacheKey] as SubscriptionResult<Data>
            }
        }
        const promise = client.query(subscription, {...args, subscribeOnce: true});
        ssrCache[cacheKey] = promise;
        throw promise;
    } else {
        ssrCache[cacheKey] = {
            status: "none",
        }
        return {
            result: ssrCache[cacheKey] as SubscriptionResult<Data>
        }
    }
}
Similarly to the useQuery hook, we add a code branch for the server-side rendering. If we're on the server and don't yet have any data, we make a "query" request with the subscribeOnce flag set to true.
As described above, a subscription with the subscribeOnce flag set to true will only return the first result, so it behaves like a query. That's why we use client.query() instead of client.subscribe(). Some comments on the blog post about our subscription implementation indicated that it's not that important to make subscriptions stateless.
I hope that at this point it's clear why we've gone this route. Fetch support just landed in NodeJS, and even before that, we had node-fetch as a polyfill. It would definitely be possible to initiate subscriptions on the server using WebSockets, but ultimately I think it's much easier to just use the Fetch API and not have to worry about WebSocket connections on the server.
The best way to play around with this implementation is to go to the universal subscription page. When you refresh the page, have a look at the "preview" of the first request. You'll see that the page arrives server-rendered, unlike the client-side subscription example. Once the client is re-hydrated, it'll start a subscription by itself to keep the user interface updated.
That was a lot of work, but we're not yet done. Subscriptions should also be protected using authentication, let's add some logic to the subscription hook.
You'll notice that it's very similar to a regular query hook.
const {ssrCache, client, isWindowFocused, refetchMountedOperations, user} = useContext(wunderGraphContext);
const [subscriptionResult, setSubscriptionResult] = useState<SubscriptionResult<Data> | undefined>(ssrCache[cacheKey] as SubscriptionResult<Data> || {status: "none"});
useEffect(() => {
    if (subscription.requiresAuthentication && user === null) {
        setSubscriptionResult({
            status: "requires_authentication",
        });
        return;
    }
    if (stop) {
        if (subscriptionResult?.status === "ok") {
            setSubscriptionResult({...subscriptionResult, streamState: "stopped"});
        } else {
            setSubscriptionResult({status: "none"});
        }
        return;
    }
    if (subscriptionResult?.status === "ok") {
        setSubscriptionResult({...subscriptionResult, streamState: "restarting"});
    } else {
        setSubscriptionResult({status: "loading"});
    }
    const abort = new AbortController();
    client.subscribe(subscription, (response: SubscriptionResult<Data>) => {
        setSubscriptionResult(response as any);
    }, {
        ...args,
        abortSignal: abort.signal
    });
    return () => {
        abort.abort();
    }
}, [stop, refetchMountedOperations, invalidate, user]);
First, we have to add the user as a dependency to the effect. This will make the effect trigger whenever the user changes. Then, we have to check the meta-data of the subscription and see if it requires authentication.
If it does, we check if the user is logged in. If the user is logged in, we continue with the subscription. If the user is not logged in, we set the subscription result to "requires_authentication".
That's it! Authentication-aware universal Subscriptions done! Let's have a look at our end-result:
const ProtectedSubscription = () => {
    const {login, logout, user} = useWunderGraph();
    const data = useSubscription.ProtectedPriceUpdates();
    return (
        <div>
            <p>{JSON.stringify(user)}</p>
            <p style={{height: "8vh"}}>{JSON.stringify(data)}</p>
            <button onClick={() => login(AuthProviders.github)}>Login</button>
            <button onClick={() => logout()}>Logout</button>
        </div>
    )
}

export default withWunderGraph(ProtectedSubscription);
Isn't it great how we're able to hide so much complexity behind a simple API? All these things, like authentication, window focus and blur, server-side rendering, client-side rendering, passing data from server to client, proper re-hydration of the client, it's all handled for us.
On top of that, the client is mostly using generics and wrapped by a small layer of generated code, making the whole client fully type-safe. Type-safety was one of our requirements if you remember.
Some API clients "can" be type-safe. Others allow you to add some extra code to make them type-safe. With our approach, a generic client plus auto-generated types, the client is always type-safe.
It's telling that, so far, nobody has asked us to add a "pure" JavaScript client. Our users seem to accept and appreciate that everything is type-safe out of the box. We believe type-safety helps developers make fewer errors and better understand their code.
Want to play with protected, universal subscriptions yourself? Check out the protected-subscription page of the demo. Don't forget to check Chrome DevTools and the network tab to get the best insights. Finally, we're done with subscriptions. Two more patterns to go, and we're done completely.
The last pattern we're going to cover is Live Queries. Live Queries are similar to Subscriptions in how they behave on the client side. Where they differ is on the server side. Let's first discuss how live queries work on the server and why they are useful. If a client "subscribes" to a live query, the server will start to poll the origin server for changes.
It will do so in a configurable interval, e.g. every one second. When the server receives a change, it will hash the data and compare it to the hash of the last change. If the hashes are different, the server will send the new data to the client. If the hashes are the same, we know that nothing changed, so we don't send anything to the client.
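The gateway-side loop described above can be sketched like this; function and parameter names are illustrative, not the actual implementation:

```typescript
import { createHash } from "node:crypto";

// Sketch of the gateway's live-query loop: poll the origin on an interval,
// hash each response, and only push to the client when the hash changes.
async function pollLiveQuery<Data>(
    fetchOrigin: () => Promise<Data>,
    send: (data: Data) => void,
    intervalMillis: number,
    signal: { aborted: boolean },
): Promise<void> {
    let lastHash = "";
    while (!signal.aborted) {
        const data = await fetchOrigin();
        const hash = createHash("sha256").update(JSON.stringify(data)).digest("hex");
        if (hash !== lastHash) {
            lastHash = hash;
            send(data); // only push when the data actually changed
        }
        await new Promise(resolve => setTimeout(resolve, intervalMillis));
    }
}
```

If the origin returns the same payload on two consecutive polls, the client receives nothing for the second one, which is exactly the bandwidth-saving behavior described above.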
Why and when are live queries useful?
First, a lot of existing infrastructure doesn't support subscriptions. Adding live-queries at the gateway level means that you're able to add "real-time" capabilities to your existing infrastructure. You could have a legacy PHP backend which you don't want to touch anymore. Add live queries on top of it and your frontend will be able to receive real-time updates.
You might be asking: why not just poll from the client side? Client-side polling could result in a lot of requests to the server. Imagine 10,000 clients each making one request per second: that's 10,000 requests per second. Do you think your legacy PHP backend can handle that kind of load?
With live queries, those 10,000 clients connect to the API gateway and subscribe instead. The gateway can then bundle all the requests together, as they're essentially asking for the same data, and make a single request to the origin.
Using live-queries, we're able to reduce the number of requests to the origin server, depending on how many "streams" are being used.
So, how can we implement live-queries on the client?
Have a look at the "generated" wrapper around the generic client for one of our operations:
CountryWeather: (args: SubscriptionArgsWithInput<CountryWeatherInput>) =>
    hooks.useSubscriptionWithInput<CountryWeatherInput, CountryWeatherResponseData, Role>(WunderGraphContext, {
        operationName: "CountryWeather",
        isLiveQuery: true,
        requiresAuthentication: false,
    })(args)
Looking at this example, you can notice a few things. First, we're using the useSubscriptionWithInput hook. This indicates that we don't actually have to distinguish between a subscription and a live query, at least not from a client-side perspective. The only difference is that we set the isLiveQuery flag to true. For subscriptions, we use the same hook but set the isLiveQuery flag to false.
As we've already implemented the subscription hook above, there's no additional code required to make live-queries work. Check out the live-query page of the demo. One thing you might notice is that this example has the nasty flickering again, that's because we're not server-side rendering it.
The final pattern we're going to cover is Universal Live Queries. Universal Live Queries are similar to universal Subscriptions, just simpler from the server's perspective.
To initiate a subscription, the server has to open a WebSocket connection to the origin, perform the handshake, subscribe, etc. If we need to "subscribe once" with a live query, we're simply polling once, which means we're just making a single request.
So, live queries are actually a bit faster to initiate compared to subscriptions, at least on the initial request. How can we use them? Let's look at an example from the demo:
const UniversalLiveQuery = () => {
    const data = useLiveQuery.CountryWeather({
        input: {
            code: "DE",
        },
    });
    return (
        <p>{JSON.stringify(data)}</p>
    )
}

export default withWunderGraph(UniversalLiveQuery);
That's it, that's your stream of weather data for the capital of Germany, Berlin, which is being updated every second.
You might be wondering how we got the data in the first place. Let's have a look at the definition of the CountryWeather operation:
query ($capital: String! @internal, $code: ID!) {
    countries_country(code: $code) {
        code
        name
        capital @export(as: "capital")
        weather: _join @transform(get: "weather_getCityByName.weather") {
            weather_getCityByName(name: $capital) {
                weather {
                    temperature {
                        actual
                    }
                    summary {
                        title
                        description
                    }
                }
            }
        }
    }
}
We're actually joining data from two disparate services. First, we're using a countries API to get the capital of a country. We export the capital field into the internal $capital variable. Then, we use the _join field to combine the country data with a weather API. Finally, we apply the @transform directive to flatten the response a bit.
It's a regular, valid, GraphQL query. Combined with the live-query pattern, we're now able to live-stream the weather for any capital of any country. Cool, isn't it?
Similar to all the other patterns, this one can also be tried and tested on the demo. Head over to the universal-live-query page and have a play!
That's it! We're done!
I hope you've learned how you're able to build universal, authentication-aware data-fetching hooks. Before we come to the end of this post, I'd like to look at alternative approaches and tools to implement data fetching hooks.
One major drawback of using server-side rendering is that the client has to wait until the server has finished rendering the page. Depending on the complexity of the page, this might take a while, especially if you have to make many chained requests to fetch all the data required for the page.
One solution to this problem is to statically generate the page on the server. NextJS allows you to implement an asynchronous getStaticProps function on top of each page. This function is called at build time, and it's responsible for fetching all the data required for the page. If, at the same time, you don't attach a getInitialProps or getServerSideProps function to the page, NextJS considers this page to be static, meaning that no NodeJS process will be required to render the page. In this scenario, the page will be pre-rendered at compile time, allowing it to be cached by a CDN.
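A minimal sketch of such a static page's data function; fetchPosts is a hypothetical stand-in for whatever API or CMS call your page needs, not a WunderGraph API:

```typescript
// Sketch of NextJS static generation. fetchPosts is a hypothetical
// stand-in for a real CMS or API call.
type Post = { id: number; title: string };

async function fetchPosts(): Promise<Post[]> {
  return [
    { id: 1, title: "Universal data fetching" },
    { id: 2, title: "Static site generation" },
  ];
}

// NextJS calls this once at build time; with no getInitialProps or
// getServerSideProps attached, the page is served as static HTML.
export async function getStaticProps() {
  const posts = await fetchPosts();
  return { props: { posts } };
}
```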
This way of rendering makes the application extremely fast and easy to host, but there are also drawbacks. For one, a static page is not user-specific, because at build time there's no context of the user. That's not a problem for public pages, but it means you can't statically render user-specific pages like dashboards.
A tradeoff that can be made is to statically render the page and add user-specific content on the client side. However, this will always introduce flickering on the client, as the page will update very shortly after the initial render.
So, if you're building an application that requires the user to be authenticated, you might want to use server-side rendering instead.
The second drawback of static site generation is that content can become outdated if the underlying data changes. In that case, you might want to re-build the page. However, rebuilding the whole application might take a long time, and is unnecessary if only a few pages need to be updated. Luckily, there's a solution to this problem: Incremental Static Regeneration.
Incremental Static Regeneration allows you to invalidate individual pages and re-render them on demand. This gives you the performance advantage of a static site, but removes the problem of outdated content.
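In NextJS, time-based ISR is a one-line addition to getStaticProps: returning a revalidate interval tells the framework to re-render the page in the background once the window has passed. A minimal sketch, with a hypothetical data source:

```typescript
// Hypothetical CMS call -- an assumption for illustration only.
async function fetchArticle(): Promise<{ title: string }> {
  return { title: "Incremental Static Regeneration" };
}

// NextJS regenerates this page at most once every 60 seconds,
// serving the cached (possibly stale) version in the meantime.
export async function getStaticProps() {
  const article = await fetchArticle();
  return {
    props: { article },
    revalidate: 60, // seconds
  };
}
```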
That said, this still doesn't solve the problem with authentication, but I don't think this is what static site generation is all about. On our end, we're currently looking at patterns where the result of a Mutation could automatically trigger a page-rebuild using ISR. Ideally, this could be something that works in a declarative way, without having to implement custom logic.
One issue that you might run into with server-side rendering (but also client-side) is that while traversing the component tree, the server might have to create a huge waterfall of queries that depend on each other.
If child components depend on data from their parents, you might easily run into the N+1 problem. N+1 in this case means that you fetch an array of data in a root component, and then for each of the array items, you'll have to fire an additional query in a child component.
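A minimal sketch of such a waterfall, with a stubbed fetchJson helper; the endpoints and field names are made up for illustration:

```typescript
// The N+1 waterfall: one request for the list, then one request per item.
// fetchJson is a stand-in for your HTTP client; URLs are hypothetical.
type Post = { id: number; authorId: number; author?: unknown };

async function loadPostsNaively(
  fetchJson: (url: string) => Promise<any>
): Promise<Post[]> {
  const posts: Post[] = await fetchJson("/api/posts"); // 1 request
  for (const post of posts) {
    // +N requests, one per child component
    post.author = await fetchJson(`/api/users/${post.authorId}`);
  }
  return posts;
}
```

With 100 posts, this fires 101 sequential requests; hoisting the children's data requirements into the root query collapses them into one.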
Keep in mind that this problem is not specific to GraphQL; REST APIs suffer from it just the same. GraphQL, however, actually has a solution for it: GraphQL fragments, used with a client that properly supports them.
The creators of GraphQL, Facebook / Meta, have created a solution for this problem, it's called the Relay Client.
The Relay Client is a library that allows you to specify your "Data Requirements" side-by-side with the components via GraphQL fragments. Here's an example of what this could look like:
import * as React from 'react';
import {graphql, useFragment} from 'react-relay';
import type {UserComponent_user$key} from 'UserComponent_user.graphql';

type Props = {
user: UserComponent_user$key,
};
function UserComponent(props: Props) {
const data = useFragment(
graphql`
fragment UserComponent_user on User {
name
profile_picture(scale: 2) {
uri
}
}
`,
props.user,
);
return (
<>
<h1>{data.name}</h1>
<div>
<img src={data.profile_picture?.uri} />
</div>
</>
);
}
If this were a nested component, the fragment allows us to hoist our data requirements up to the root component. This means that the root component will be capable of fetching the data for its children, while keeping the data requirements definition in the child components.
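For illustration, the root component's query would then spread the child's fragment; the query and field names below are hypothetical:

```graphql
query AppQuery {
  viewer {
    user {
      # the child's data requirements, hoisted into the root query
      ...UserComponent_user
    }
  }
}
```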
Fragments allow for a loose coupling between parent and child components, while allowing for a more efficient data fetching process. For a lot of developers, this is the actual reason why they are using GraphQL. It's not that they use GraphQL because they want to use the Query Language, it's because they want to leverage the power of the Relay Client.
For us, the Relay Client is a great source of inspiration.
I actually think that using Relay is too hard. In our next iteration, we're looking at adopting the "Fragment hoisting" approach, but our goal is to make it easier to use than the Relay Client.
Another development that's happening in the React world is the creation of React Suspense. As you've seen above, we're already using Suspense on the server. By "throwing" a promise, we're able to suspend the rendering of a component until the promise is resolved. That's an excellent way to handle asynchronous data fetching on the server.
However, you're also able to apply this technique on the client. Using Suspense on the client allows us to "render-while-fetching" in a very efficient way. Additionally, clients that support Suspense allow for a more elegant API for data-fetching hooks. Instead of having to handle "loading" or "error" states within the component, Suspense "pushes" these states up to the next "error boundary", which handles them there.
This approach makes the code within the component a lot more readable as it only handles the "happy path". As we're already supporting Suspense on the server, you can be sure that we're adding client support in the future as well. We just want to figure out the most idiomatic way of supporting both a suspense and a non-suspense client.
This way, users get the freedom to choose the programming style they prefer.
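The "throwing a promise" mechanism behind Suspense can be sketched in a few lines. This is the general pattern used by Suspense-compatible data sources, not WunderGraph's actual implementation:

```typescript
// Wrap a promise into a resource whose read() either returns the value,
// throws the pending promise (caught by Suspense), or throws the error
// (caught by the nearest error boundary).
function wrapPromise<T>(promise: Promise<T>) {
  let status: "pending" | "success" | "error" = "pending";
  let result!: T;
  let error: unknown;
  const suspender = promise.then(
    (r) => { status = "success"; result = r; },
    (e) => { status = "error"; error = e; }
  );
  return {
    read(): T {
      if (status === "pending") throw suspender; // Suspense shows the fallback
      if (status === "error") throw error;       // error boundary takes over
      return result;
    },
  };
}
```

A component simply calls resource.read() during render; React catches the thrown promise, renders the nearest Suspense fallback, and retries once the promise settles.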
We're not the only ones who try to improve the data fetching experience in NextJS. Therefore, let's have a quick look at other technologies and how they compare to the approach we're proposing.
We've actually taken a lot of inspiration from swr. If you look at the patterns we've implemented, you'll see that swr really helped us to define a great data fetching API.
There's a few things where our approach differs from swr which might be worth mentioning.
SWR is a lot more flexible and easier to adopt because you can use it with any backend. The approach we've taken, especially the way we're handling authentication, requires you to also run a WunderGraph backend that provides the API we're expecting.
E.g. if you're using the WunderGraph client, we're expecting that the backend is an OpenID Connect Relying Party. The swr client, on the other hand, doesn't make such assumptions.
I personally believe that with a library like swr, you'll eventually end up with a similar outcome as if you were using the WunderGraph client in the first place. It's just that you're now maintaining more code, because you had to add the authentication logic yourself.
The other big difference is server-side rendering. WunderGraph is carefully designed to remove any unnecessary flickering when loading an application that requires authentication.
The docs from swr explain that this is not a problem and users are ok with loading spinners in dashboards.
I think we can do better than that. I know of SaaS dashboards that take 15 or more seconds to load all components including content. Over this period of time, the user interface is not usable at all, because it keeps "wiggling" all the content into the right place.
Why can't we pre-render the whole dashboard and then re-hydrate the client? If the HTML is rendered in the correct way, links should be clickable even before the JavaScript client is loaded.

If your whole "backend" fits into the "/api" directory of your NextJS application, your best choice is probably to use the "swr" library. Combined with NextAuthJS, this can make for a very good combination.
If you're instead building dedicated services to implement APIs, a "backend-for-frontend" approach, like the one we're proposing with WunderGraph, could be a better choice, as we're able to move a lot of repetitive logic out of your services and into the middleware.
Speaking of NextAuthJS, why not just add authentication directly into your NextJS application?
The library is designed to solve exactly this problem, adding authentication to your NextJS application with minimal effort. From a technical perspective, NextAuthJS follows similar patterns as WunderGraph. There's just a few differences in terms of the overall architecture.
If you're building an application that will never scale beyond a single website, you can probably use NextAuthJS. However, if you're planning to build multiple websites, cli tools, native apps, or even connect a backend, you're better off using a different approach.
Let me explain why.
The way NextAuthJS is implemented is that it's actually becoming the "Issuer" of the authentication flow. That said, it's not an OpenID Connect compliant Issuer, it's a custom implementation. So, while it's easy to get started, you're actually adding a lot of technical debt at the beginning.
Let's say you'd like to add another dashboard, a cli tool, or connect a backend to your APIs. If you were using an OpenID Connect compliant Issuer, there would already be flows implemented for these various scenarios.
Additionally, this OpenID Connect provider is only loosely coupled to your NextJS application. Making your application itself the issuer means that you have to modify and re-deploy your "frontend" application whenever you want to change the authentication flow.
You'll also not be able to use standardized authentication flows like the Authorization Code Flow with PKCE, or the Device Flow. Authentication should be handled outside the application itself.
We've recently announced our partnership with Cloud IAM, which makes setting up an OpenID Connect Provider with WunderGraph as the Relying Party a matter of minutes. I hope that we're making it easy enough for you so you don't have to build your own authentication flows.
The data-fetching layer and hooks of trpc are actually very much the same as in WunderGraph. I think that we're even using the same approach for server-side rendering in NextJS.
trpc obviously has very little to do with GraphQL, compared to WunderGraph. Its story around authentication is also not as complete as WunderGraph's.
That said, I think that Alex has done a great job of building trpc. It's less opinionated than WunderGraph, which makes it a great choice for different scenarios.
From my understanding, trpc works best when both backend and frontend use TypeScript. WunderGraph takes a different path. The common middle ground to define the contract between client and server is JSON-RPC, defined using JSON Schema. Instead of simply importing the server types into the client, you have to go through a code-generation process with WunderGraph.
This means the setup is a bit more complex, but we're able to support not just TypeScript as a target environment, but any other language or runtime that supports JSON over HTTP.
There are many other GraphQL clients, like Apollo Client, urql and graphql-request. What all of them have in common is that they don't usually use JSON-RPC as the transport.
I've probably written this in multiple blog posts before, but sending read requests over HTTP POST just breaks the internet. If you're not changing GraphQL Operations at runtime, like 99% of all applications that use a compile/transpile step, why use a GraphQL client that does this?
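If operations are pre-registered at compile time, a read can become a plain GET. The URL shape below is illustrative only, not WunderGraph's actual wire format:

```typescript
// Build a cache-friendly GET URL for a pre-compiled (persisted) operation.
// The path layout and parameter name are assumptions for illustration.
function operationUrl(
  base: string,
  operation: string,
  variables: Record<string, unknown>
): string {
  const params = new URLSearchParams({
    wg_variables: JSON.stringify(variables),
  });
  return `${base}/operations/${operation}?${params.toString()}`;
}
```

Because the request is a GET with a stable URL, every cache between client and origin can participate.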
Clients, Browsers, Cache-Servers, Proxies and CDNs, they all understand Cache-Control headers and ETags.
The popular NextJS data-fetching client "swr" has its name for a reason: swr stands for "stale-while-revalidate", which is nothing other than the pattern of leveraging Cache-Control headers and ETags for efficient cache invalidation.
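Server-side, the ETag contract is small enough to sketch in full; this is generic HTTP behavior, not a WunderGraph-specific API:

```typescript
// Return 304 Not Modified when the client's If-None-Match header matches
// the current ETag, so browsers and caches can reuse their stored copy.
type CachedResponse = {
  status: 200 | 304;
  etag: string;
  body: string | null;
};

function handleConditionalGet(
  currentEtag: string,
  body: string,
  ifNoneMatch?: string
): CachedResponse {
  if (ifNoneMatch === currentEtag) {
    return { status: 304, etag: currentEtag, body: null }; // nothing to resend
  }
  return { status: 200, etag: currentEtag, body };
}
```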
GraphQL is a great abstraction to define data dependencies. But when it comes to deploying web scale applications, we should be leveraging the existing infrastructure of the web.
What this means is this: GraphQL is great during development, but in production, we should be leveraging the principles of REST as much as we can.
Building good data-fetching hooks for NextJS, and React in general, is a challenge. We've also discussed that we arrive at somewhat different solutions when we take authentication into account from the very beginning.
I personally believe that adding authentication right into the API layer on both ends, backend and frontend, makes for a much cleaner approach. Another aspect to think about is where to put the authentication logic. Ideally, you're not implementing it yourself but can rely on a proper implementation.
Combining OpenID Connect as the Issuer with a Relying Party in your backend-for-frontend (BFF) is a great way of keeping things decoupled but still very controllable.
Our BFF is still creating and validating cookies, but it's not the source of truth. We're always delegating to Keycloak.
What's nice about this setup is that you can easily swap Keycloak for another implementation; that's the beauty of relying on interfaces instead of concrete implementations.
Finally, I hope that I'm able to convince you that more (SaaS) dashboards should adopt server-side rendering. NextJS and WunderGraph make it so easy to implement, it's worth a try.
Once again, if you're interested to play around with a demo, here's the repository:
https://github.com/wundergraph/wundergraph-demo
We're currently working hard to get our open-source release out the door. Please join our Discord to stay up to date with the progress.
For the future, we're planning to expand NextJS support even further. We'd like to build great support for Static Site Generation (SSG) as well as Incremental Static Regeneration (ISR).
On the GraphQL side of things, we want to add support for Fragments in a way that is very similar to the Relay client. I believe that data dependencies should be declared close to where the data is actually used. GraphQL Fragments also allow for all sorts of optimizations, e.g. applying different fetching or caching rules, like defer and stream, on a per-fragment basis.
GraphQL is great in that it allows you to define exactly what data you need, but if you stop there, you're not really leveraging the full potential of the Query Language. It's fragments that allow you to define data dependencies together with rules.
If you're as excited about this topic as we are, maybe consider joining us and helping us build a better API developer experience.
Applying for a job at WunderGraph is a bit different from what you might expect. You cannot directly apply for a job at WunderGraph; we'll contact you directly if we think you're a good fit.
We're aware that we are just humans and don't know everything.
We also have to be very careful where and how to spend our resources. You're probably a lot smarter than we are in some ways. We value great communication skills and a humble attitude.
Show us where we can improve in a genuine way, and we'll definitely get in touch with you.
Previously published here.