Developer advocate at Bearer.sh - helping developers use APIs 🧸
What's a gateway? It is an entry point. Things go in. People, traffic, requests. If you've spent any time with microservices, you may have come across the term "API gateway".
While not unique to microservices, API gateways have grown in popularity alongside the rise of microservices. So what exactly is an API gateway?
An API gateway is a layer that sits between the client and the services it relies on. Sometimes called a "reverse proxy", it acts as a single point of entry from the client to those services.
It is the reception desk at the front of an office building: routing calls, stopping unexpected visitors, and making sure parcels get to the right place.
If you've used a third-party API in the past, it is possible that you were communicating with a gateway, which in turn communicated with the service's internal API.
As we'll discuss in the benefits portion below, this allows providers to expose portions of their API to the outside world and handle versioning, security, regional localization, and more in a central place. Think Google exposing APIs for calendars, or Twitter providing versions of their timeline API externally.
The most common use case of an API gateway is routing: the client sends a request to the gateway, and the gateway forwards it to the appropriate internal service.
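That flow can be sketched in a few lines of Python. The service names, paths, and route table below are all hypothetical stand-ins for real backends:

```python
# Hypothetical internal "services" -- stand-ins for real backends.
def users_service(request_path: str) -> dict:
    return {"service": "users", "path": request_path}

def videos_service(request_path: str) -> dict:
    return {"service": "videos", "path": request_path}

# The gateway's route table: public prefix -> internal handler.
ROUTES = {
    "/api/users": users_service,
    "/api/videos": videos_service,
}

def gateway(request_path: str) -> dict:
    # Forward the request to the first service whose prefix matches.
    for prefix, service in ROUTES.items():
        if request_path.startswith(prefix):
            return service(request_path)
    return {"error": "not found", "status": 404}

print(gateway("/api/users/42"))   # routed to the users service
print(gateway("/api/videos/7"))   # routed to the videos service
```

The client only ever knows the public paths; which service actually answers is the gateway's concern.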
The client can be a lot of things. In the third-party example above, the client was a customer of the API provider, but API gateways can also expose internal APIs to your own clients. In many modern web applications "the client" is a single page application (SPA), but it can also be a web application's backend server, a native mobile app, or even smart TVs, media players, and IoT devices. Whether the clients are ones you control or ones owned by your customers, the gateway manages and exposes the API surface.
The services are often internal services that your application controls, like a database or microservice. Gateways can also stand between any third-party API and your application's client. This lets your clients access third-party data in the same way they access your internal services.
Rather than forcing clients to know the details of how each API or service works, the gateway exposes a single, unified API that the client can interact with. Clients can be developed independently of any changes that may happen on the services side. Services can also be swapped in and out, if business needs change, as long as the new service is mapped onto the existing interface. For example, if the application uses a third-party API for user authentication and wants to change to an internal service, the clients won't be affected as long as the new service conforms to the same interface.
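That authentication swap can be sketched like this, assuming a hypothetical `verify(token)` interface that both services implement:

```python
class ThirdPartyAuth:
    def verify(self, token: str) -> bool:
        # Imagine an HTTP call to an external auth provider here.
        return token == "valid-token"

class InternalAuth:
    def verify(self, token: str) -> bool:
        # Imagine a lookup against our own user store here.
        return token == "valid-token"

class Gateway:
    def __init__(self, auth):
        self.auth = auth  # any object with a verify(token) method

    def handle(self, token: str) -> int:
        # Clients always call the same endpoint; only the gateway
        # knows which auth implementation sits behind it.
        return 200 if self.auth.verify(token) else 401

# Swapping the backing service changes nothing the client can observe.
assert Gateway(ThirdPartyAuth()).handle("valid-token") == 200
assert Gateway(InternalAuth()).handle("valid-token") == 200
```

As long as the replacement exposes the same `verify` contract, the swap is invisible to clients.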
There are two main variations of API gateways: the traditional, general-purpose gateway and "backends for frontends" (BFF). Both serve the same purpose but are implemented differently.
Traditional API gateways handle requests from all the application's clients. For example, a gateway for a streaming video service will handle all requests from the web, televisions, phones, and tablets.
Don't mistake this variation for simple. In many cases, it will serve a unique API to each client type depending on that client's needs. For example, a voice interface may not require all the data that a traditional web interface does, so the resulting client API is leaner. GraphQL attempts to tackle this same problem, but instead gives clients control over how much or how little data they request.
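A minimal sketch of that per-client trimming, with hypothetical field names and client types:

```python
# One underlying record, shaped differently per client type.
FULL_RECORD = {
    "title": "Gateway Patterns",
    "description": "A long description...",
    "thumbnail_url": "https://example.com/thumb.jpg",
    "captions": ["en", "fr"],
}

# Which fields each client type actually needs.
CLIENT_FIELDS = {
    "web": ["title", "description", "thumbnail_url", "captions"],
    "voice": ["title"],  # a voice UI only needs something to read aloud
}

def respond(client_type: str) -> dict:
    # Unknown client types fall back to the full web response.
    fields = CLIENT_FIELDS.get(client_type, CLIENT_FIELDS["web"])
    return {k: FULL_RECORD[k] for k in fields}
```

The voice client gets a single field, while the web client receives the full record, all from the same gateway.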
The "backends for frontends" variation of the same streaming service gateway sets up individual API gateways for each client. Rather than one large gateway that routes client requests, each smaller gateway interacts with all the necessary services independently of the other client gateways.
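The BFF split can be sketched as separate, smaller gateways, each shaping the same hypothetical catalog service for its own client:

```python
# A shared internal service both BFFs rely on (hypothetical data).
def catalog_service() -> list:
    return [{"id": 1, "title": "Pilot"}, {"id": 2, "title": "Finale"}]

def web_bff() -> dict:
    # The web BFF returns rich, paginated data for a full browser UI.
    return {"items": catalog_service(), "page": 1}

def tv_bff() -> dict:
    # The TV BFF returns only what a remote-driven UI can display.
    return {"titles": [item["title"] for item in catalog_service()]}
```

Each BFF evolves with its client and talks to the internal services directly, rather than sharing one route table.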
Earlier I mentioned that the core use case is routing. This and many other benefits center on abstracting implementation details away from the clients themselves and keeping them in one place: the gateway. Gateways can handle a variety of shared tasks, such as authentication, security policy enforcement, rate limiting, load balancing, caching, and logging.
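Two of those shared tasks, authentication and rate limiting, can be sketched at the gateway layer. The API key and limit below are hypothetical:

```python
from collections import defaultdict

REQUEST_COUNTS = defaultdict(int)
RATE_LIMIT = 3  # hypothetical: max requests per client in a window

def gateway(client_id: str, api_key: str) -> dict:
    # Authentication: reject unknown keys before touching any service.
    if api_key != "secret-key":
        return {"status": 401}
    # Rate limiting: count requests per client, once, at the edge.
    REQUEST_COUNTS[client_id] += 1
    if REQUEST_COUNTS[client_id] > RATE_LIMIT:
        return {"status": 429}
    # Only now forward to the internal service.
    return {"status": 200, "body": "service response"}
```

Because every request passes through this one chokepoint, none of the individual services need their own copy of this logic.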
It seems like API gateways are an easy choice based on the benefits, but there are drawbacks.
As with any addition to your stack, an API gateway introduces another piece to manage. It needs to be hosted, scaled, and managed just like the rest of your software. Since all requests and responses must pass through the gateway, it adds an additional point of failure and increases the latency of each call by adding a few extra "hops" across the network.
Due to their centralized location, it becomes easy to gradually increase the complexity inside the gateway until it becomes a "black box" of code. This makes maintaining the code harder.
This "put it all together" approach goes against the core idea of using microservices to split an application up into smaller parts and removes some of their autonomy.
These problems are mostly avoidable, but it takes a bit of work.
Gateways let clients access services, but what happens when services need to talk to one another? That's where service mesh comes in. A service mesh is a layer focused on service-to-service communication. You'll see gateway communication described as north-south (from clients to the gateway) and service mesh communication described as east-west (between services).
Traditionally it made sense to use a service mesh and API gateway together. The gateway would be the entry point for your clients' requests, and then the service mesh would allow your services to rely on one another before passing responses back through the gateway. One popular API gateway, Kong, released an open source mesh to pair with their gateway product.
Over the last few years, service meshes have expanded their functionality to handle external communication. One popular mesh, Istio, now includes some gateway functionality. It is expected that over time, many service mesh products will take on many of the core features of gateways.
We mentioned earlier that access to third-party APIs can also live behind your gateway. This works well when you would otherwise consume a third-party API directly from the client but don't want to introduce another dependency and a new API surface there.
The downside? Now you need to handle the unavoidable outages and downtime that occur when relying on a third party. If an active monitoring tool, like Bearer, notices a problem with an API, it can respond directly at the gateway, either by swapping over to an alternate resource, serving cached data, retrying, or applying any number of other resiliency measures.
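One such measure, falling back to cached data, can be sketched like this. The failing third-party call is simulated with a flag:

```python
CACHE = {}

def third_party_api(healthy: bool) -> dict:
    # Simulated external call; a real one would be an HTTP request.
    if not healthy:
        raise ConnectionError("third-party API is down")
    return {"rate": 1.09}

def gateway(healthy: bool) -> dict:
    try:
        data = third_party_api(healthy)
        CACHE["last_good"] = data  # refresh the cache on success
        return {"status": 200, "data": data}
    except ConnectionError:
        if "last_good" in CACHE:
            # Fall back to the most recent successful response,
            # flagged as stale so clients can decide how to treat it.
            return {"status": 200, "data": CACHE["last_good"], "stale": True}
        return {"status": 503}
```

Because the fallback lives in the gateway, every client benefits from it without implementing its own retry or cache logic.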