For the last 10 years, I have been building RESTful services as a feature-team member, on projects assigned to me through consulting engagements or as a full-time corporate employee. This represents one-third of my career and is the work I have enjoyed the most.
However, in all of those years, whenever the system was part of an application modernization initiative, I felt like I was learning the same lessons over and over:
Each of these lessons could be the focal point of its own article on doing application modernization correctly. Instead, in this article, I am going to focus on supporting applications that are running successfully despite failing to apply the five lessons above.
Years before I took computer programming seriously, the new-wave band Wang Chung released a song called “The World In Which We Live.” Its somber lyrics speak of a world of consequences, where we live with the results of our own actions. Many times, as a consumer of APIs, I feel the core of that song racing through my veins as I try to navigate non-standard APIs and unexpected behaviors.
Here are some of the pain points that tend to make my fingertips numb:
In fact, many service-based applications today follow a design similar to what is illustrated below:
In this example, each service has implemented six common components at the service layer. This leads to duplication which must be managed manually – especially when the underlying source code (as shown above) utilizes different languages and frameworks.
As a feature developer, I strive for ways to be more productive. I seek ways to maximize the time I allocate to meeting acceptance criteria and turning business rules into successful product enhancements. Most of all, I want to avoid my fingertips becoming numb at my age.
What if the illustration above was refactored and consolidated as shown below?
In the example above, all of the duplicate components are consolidated into a distributed microservice abstraction layer, which is commonly referred to as an API gateway. In fact, I discovered this very design with Kong’s Cloud-Native API Gateway product – also known as “Kong Gateway”.
The Kong Gateway product allows the complexity of my service-tier APIs to be reduced to a collection of endpoints (or URIs) focused on meeting a collection of business needs and functionality. Commonly duplicated components (like authentication, logging, and security) are handled by the gateway and can be removed from the service-tier design.
In addition to the common components shown in the original illustration, Kong Gateway offers additional functionality:
The best part of Kong Gateway is that it is a cloud-native (platform agnostic) open-source software (OSS) solution, which can be utilized pretty much anywhere. There are also no licensing costs while utilizing the OSS product.
Taking things to a broader level, Kuma is another platform-agnostic OSS solution for service mesh and microservice management, with control-plane support for Kubernetes, virtual machines (VMs), and even bare-metal environments. Kong donated Kuma to the Cloud Native Computing Foundation (CNCF) and still actively contributes to the evolving codebase.
While Kong Gateway is a separate layer that sits between the requestor and the services, Kuma employs a “sidecar” pattern, similar to a sidecar on a motorcycle. However, rather than providing extra space for a passenger, this type of sidecar attaches to individual containers, forming a “mesh” instead of a separate layer.
Kuma leverages Envoy, an open-source edge and service proxy, to provide consistent observability across all areas of an application. In addition to an advanced user interface, Kuma includes three key features:
With Kuma, distributed environments can take advantage of the core Kong Gateway features and functionality while also including aspects such as:
For those organizations using Kubernetes for their container orchestration, Kong created the Kong Ingress Controller which implements authentication, transformations, and other functionalities (via plugins) across Kubernetes clusters.
Kong Ingress Controller updates a standard Kubernetes implementation as shown below:
With the Kong Ingress Controller in place, the features noted in the Kong Gateway product are accessible via the plugin architecture. Six plugins are depicted in the example above.
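As an illustration of that plugin architecture, a plugin can be declared as a Kubernetes resource and then attached to a Service or Ingress via an annotation. A minimal sketch using Kong's KongPlugin custom resource follows; the resource name and configuration values here are placeholders, not taken from the example above:

```yaml
# Declares a rate-limiting plugin as a Kubernetes resource.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-example   # placeholder name
plugin: rate-limiting        # the Kong plugin to apply
config:                      # plugin-specific configuration
  minute: 5
  policy: local
```

The plugin is then enabled on a workload by annotating it, e.g. `konghq.com/plugins: rate-limit-example` on the target Kubernetes Service.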
I wanted to take Kong Gateway (OSS) for a test drive, so I used Spring Boot to create a very simple URI:
The data behind this URI is static and created when the service starts. Within a few minutes, the service was available on port 8888 of my local machine:
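Although the full Spring Boot source is not shown here, the service behind that URI can be sketched as a plain Java model seeded with static data at startup. In the actual application, a `@RestController` would return this list as JSON from the `/accounts` endpoint; the class and method names below are illustrative:

```java
import java.util.List;

// Minimal model of the account-service data. In the real Spring Boot
// service, a @RestController would serialize this list as JSON at /accounts.
class AccountService {

    // Simple immutable account (id + name).
    record Account(int id, String name) {}

    // Static data created when the service starts.
    static List<Account> accounts() {
        return List.of(
            new Account(1, "Eric"),
            new Account(2, "Finn"),
            new Account(3, "Nicole"),
            new Account(4, "John"),
            new Account(5, "Sydney"));
    }
}
```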
To keep things simple, I decided to run Kong Gateway directly on my MacBook Pro, with a PostgreSQL database running in Docker.
Using the Homebrew package manager, I installed Kong Gateway with a couple of commands:
╭─john.vester@jvc ~/projects/jvc/kong
╰─$ brew tap kong/kong
╭─john.vester@jvc ~/projects/jvc/kong
╰─$ brew install kong
Once completed, the following command was executed to validate that Kong Gateway version 2.4.0 was installed correctly:
╭─john.vester@jvc ~/projects/jvc/kong
╰─$ kong version
2.4.0
To get PostgreSQL running as my database, I pulled down the latest version of Postgres from Docker Hub:
╭─john.vester@jvc ~/projects/jvc/kong
╰─$ docker pull postgres
Once the Docker image was ready, PostgreSQL was started in Docker:
╭─john.vester@jvc ~/projects/jvc/kong
╰─$ docker run --name postgres -e POSTGRES_PASSWORD=some-password -d -p 5432:5432 postgres
763a9303b586ea8953717ea6c68fa04437301fe367a5a85b43d5d1fa8523fba6
With the database running, the user and database needed by Kong Gateway were created in the running instance:
CREATE USER kong;
CREATE DATABASE kong OWNER kong;
The last step of database preparation was to execute the following command:
╭─john.vester@jvc ~/projects/jvc/kong
╰─$ kong migrations bootstrap -c /etc/kong/kong.conf
...
41 migrations processed
41 executed
Database is up-to-date
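For reference, the /etc/kong/kong.conf file referenced in that command has to point Kong at the PostgreSQL instance. The key names below come from Kong's default configuration file; the values are assumptions based on the setup described above:

```ini
# kong.conf (fragment): datastore settings for the Dockerized PostgreSQL
database = postgres      ; use PostgreSQL rather than DB-less mode
pg_host = 127.0.0.1      ; Postgres is published on localhost:5432 by Docker
pg_port = 5432
pg_user = kong           ; user created above
pg_database = kong       ; database created above
```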
After the database migrations finished, Kong Gateway was ready to start:
╭─john.vester@jvc ~/projects/jvc/kong
╰─$ kong start -c /etc/kong/kong.conf
Kong started
With Kong Gateway set up and ready to go, the focus shifts to configuring the Spring Boot URI noted above. The first step is to register the Spring Boot RESTful service as “account-service” using the following cURL command:
curl --location --request POST 'http://localhost:8001/services' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'name=account-service' \
--data-urlencode 'url=http://localhost:8888/accounts'
This leads to the following response, reflecting the configuration data now stored in Postgres:
{
"connect_timeout": 60000,
"path": "/accounts",
"read_timeout": 60000,
"name": "account-service",
"write_timeout": 60000,
"created_at": 1618933968,
"updated_at": 1618933968,
"tls_verify": null,
"id": "7ba5d84c-0b4d-454a-83b3-5381d4e52c61",
"tls_verify_depth": null,
"retries": 5,
"tags": null,
"ca_certificates": null,
"port": 8888,
"client_certificate": null,
"host": "localhost",
"protocol": "http"
}
A route is created next for a host called “account-service,” which will be referenced when the Spring Boot service is called:
curl --location --request POST 'http://localhost:8001/services/account-service/routes' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'hosts[]=account-service'
The cURL command generates the following JSON response from Kong Gateway:
{
"headers": null,
"name": null,
"hosts": [
"account-service"
],
"created_at": 1618936295,
"path_handling": "v0",
"id": "d70a3bf2-4a82-4ec5-a644-90806c53f5b5",
"protocols": [
"http",
"https"
],
"paths": null,
"request_buffering": true,
"response_buffering": true,
"destinations": null,
"methods": null,
"https_redirect_status_code": 426,
"preserve_host": false,
"strip_path": true,
"regex_priority": 0,
"updated_at": 1618936295,
"snis": null,
"sources": null,
"service": {
"id": "7ba5d84c-0b4d-454a-83b3-5381d4e52c61"
},
"tags": null
}
At this point the “account-service” route can be retrieved via Kong Gateway using the following cURL:
curl --location --request GET 'http://localhost:8000/' \
--header 'Host: account-service'
This returns the expected JSON data from the Spring Boot service:
[
{
"id": 1,
"name": "Eric"
},
{
"id": 2,
"name": "Finn"
},
{
"id": 3,
"name": "Nicole"
},
{
"id": 4,
"name": "John"
},
{
"id": 5,
"name": "Sydney"
}
]
Success! This is the exact same data returned when hitting the Spring Boot service directly, but now passed through Kong Gateway.
Next, I added the Rate Limiting plugin to Kong Gateway, using the following cURL command:
curl -X POST http://localhost:8001/services/account-service/plugins \
--data "name=rate-limiting" \
--data "config.second=1" \
--data "config.minute=3" \
--data "config.policy=local"
This configuration is intentionally restrictive: it allows only one request to the account service per second, with a maximum of three requests per minute.
The submission of this POST yields the following response payload:
{
"route": null,
"tags": null,
"name": "rate-limiting",
"config": {
"year": null,
"path": null,
"limit_by": "consumer",
"hide_client_headers": false,
"second": 1,
"minute": 3,
"redis_timeout": 2000,
"redis_database": 0,
"redis_host": null,
"redis_port": 6379,
"policy": "local",
"hour": null,
"header_name": null,
"redis_password": null,
"fault_tolerant": true,
"day": null,
"month": null
},
"protocols": [
"grpc",
"grpcs",
"http",
"https"
],
"created_at": 1618947648,
"service": {
"id": "7ba5d84c-0b4d-454a-83b3-5381d4e52c61"
},
"consumer": null,
"id": "a3d8532a-0464-4117-bbfd-716300966fe7",
"enabled": true
}
Now, when multiple calls are made to the following URL:
curl --location --request GET 'http://localhost:8000/' \
--header 'Host: account-service'
Kong Gateway returns a 429 (Too Many Requests) HTTP response with the following payload:
{
"message": "API rate limit exceeded"
}
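Conceptually, the `local` policy behaves like a fixed-window counter kept in the gateway node's memory: requests are counted per second and per minute, and anything over the limit gets a 429. The sketch below illustrates that idea only; it is not Kong's implementation:

```java
// Conceptual fixed-window rate limiter: at most perSecond requests in the
// current one-second window and perMinute in the current one-minute window.
// Illustrates the idea behind the "local" policy; not Kong's actual code.
class FixedWindowLimiter {
    private final int perSecond;
    private final int perMinute;
    private long secondWindow = -1, minuteWindow = -1;
    private int secondCount = 0, minuteCount = 0;

    FixedWindowLimiter(int perSecond, int perMinute) {
        this.perSecond = perSecond;
        this.perMinute = perMinute;
    }

    // Returns true if the request is allowed, false if it should get a 429.
    synchronized boolean allow(long epochMillis) {
        long sec = epochMillis / 1000, min = epochMillis / 60000;
        // Reset counters whenever a new window starts.
        if (sec != secondWindow) { secondWindow = sec; secondCount = 0; }
        if (min != minuteWindow) { minuteWindow = min; minuteCount = 0; }
        if (secondCount >= perSecond || minuteCount >= perMinute) {
            return false;
        }
        secondCount++;
        minuteCount++;
        return true;
    }
}
```

With limits of one per second and three per minute (matching the plugin configuration above), a second call within the same second is rejected, and a fourth call within the same minute is rejected even in a fresh second.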
Using the open-source version of Kong Gateway, the following plugins could be easily added following the same pattern noted above:
If I were to draft a concise mission statement for any IT professional it would be quite simple:
Focus your time on delivering features/functionality that extends intellectual property value. Leverage frameworks, products, and services for everything else.
Kong provides products and services that not only avoid service-tier duplication but, in many cases, abstract common components away from the feature developer entirely. This approach results in a codebase that is lean and focused on meeting acceptance criteria.
Kong follows this same mission statement itself, allowing aspects of Kong Gateway to be implemented as plugins in the Kong Ingress Controller product. As a result, components are configured once and leveraged everywhere.
Kong employs a platform-agnostic approach, allowing legacy applications to utilize the same services for the short or long term. As a result, the pain points noted in the introduction of this article become less of an issue.
Have a really great day!
Also published at https://dzone.com/articles/how-i-stopped-coding-repetitive-service-components