Many software companies decide to build their applications as distributed systems. This approach gives high availability and, in the long run, increases development speed. The most common architectural pattern for designing distributed systems is Microservices. The Microservice architecture is about splitting an application into small standalone services, each of which plays a separate role in the overall system.
Imagine your client wants to build a job bulletin. The product must allow users to place job offers, and charge them a small listing fee. It also has to send several kinds of e-mail notifications. Actually, it isn’t a single application, but rather a website plus iPhone and Android apps. The first thing that needs to be done is to decide which architecture we will choose.
Benefits and drawbacks of Monolithic architecture
A Monolithic approach gives a boost at the beginning of the project. There is no need to define complex integration testing and deployment processes. Everything is trivialised to a single service.
- Single codebase — simple to develop (at least in the beginning).
- No need to verify integration with other services since they don’t exist.
- Relatively easy deployment process — only one service needs to be deployed.
- Single point of failure — if one app is down, the whole service is down.
- Maintenance complexity grows with time.
- Hard to adopt new technologies and techniques. Usually it requires rewriting the application from scratch.
Benefits and drawbacks of Microservices
Microservices have many advantages over monolithic architecture (where a single application takes responsibility for the whole system). The most important ones are:
- No single point of failure. When one service is down, users can still use the application, since the other services keep working.
- Individual Microservices can be scaled up to increase their capacity and availability.
- Security — most of the services aren’t exposed to the Internet.
- Every service can be deployed independently of other services, provided they don’t break the API contracts.
- Lots of work needs to be invested to set up the foundation.
- Deployment processes are more complex. Some deployments may require deploying more than one service.
- Tons of work on DevOps, especially with deployments…
- Dealing with integration testing.
Most large applications are developed by hundreds of developers. That’s why splitting an application into smaller services makes work easier and faster. The Microservices architecture is a good choice for medium and large applications, although many of the most successful startups began with a monolithic architecture and migrated to Microservices later on.
Designing Microservices for the Job Posting application
Let’s consider the following architecture for the job bulletin. The website and mobile apps communicate with services through the API Gateway. The API Gateway is the only way to access the services from the Internet. The services communicate with each other inside the internal network.
Imagine a user creates a new account. They enter the website, fill in the form, and send the request to the API Gateway. The API Gateway delivers this request to the User Service that creates a new account. After creating an account, User Service sends a request to the E-Mail Service to send an activation e-mail. An e-mail is then sent with a link that the user needs to click to activate their account.
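The registration flow above can be sketched in a few lines of Python. This is a minimal illustration of the routing and the internal service-to-service call, not the actual code from the example repository; all function names and the request shape are assumptions.

```python
# Minimal sketch of the registration flow. The in-memory list stands in
# for the E-Mail Service outbox; service boundaries become functions.

sent_emails = []  # hypothetical E-Mail Service outbox

def email_service_send_activation(address):
    """E-Mail Service: reachable only from the internal network."""
    sent_emails.append({"to": address, "template": "activation"})

def user_service_create_account(email, password):
    """User Service: creates the account, then triggers the activation e-mail."""
    account = {"email": email, "active": False}
    email_service_send_activation(email)  # internal service-to-service call
    return account

def api_gateway_handle(request):
    """API Gateway: the only entry point from the Internet."""
    if request["path"] == "/users" and request["method"] == "POST":
        return user_service_create_account(request["body"]["email"],
                                           request["body"]["password"])
    # Note: there is deliberately no route to the E-Mail Service here.
    raise LookupError("no route")
```

Sending a `POST /users` request through the gateway creates the inactive account and places one activation e-mail in the outbox, while a request aimed directly at the E-Mail Service is rejected, mirroring the fact that it isn’t exposed through the API Gateway.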
As you may have noticed, the E-Mail Service isn’t available through the API Gateway. This is because only internal services can trigger e-mail notifications.
How to deal with testing?
Microservices always run into the testing problem: how do you confirm all services work together? Testing monolithic apps is straightforward. The app has a single code base and doesn’t rely on any external services. In contrast, Microservices are distributed. They rely on information coming from each other, which means the system architects must find a way to verify that the services are speaking the same language. Often, they decide to build an integration environment, where all services are spun up and testers run their tests. Unfortunately, this approach is very inefficient and expensive. In our case we would need to run six services in parallel. Bear in mind that some of those services require databases or other systems running as well.
Beyond the tons of hardware needed to spin up the whole application stack, consider the testing hell. Let’s assume that budget isn’t a problem for your client. You have the necessary hardware, and a few DevOps engineers to build and maintain the integration environment. Running all the tests still takes so much time…
…Sometimes hours, seriously…
Some of our services run cron and batch jobs. As you can imagine, it is nearly impossible to predict the time needed to execute them, meaning tests that rely on those jobs might be flaky.
What happens when your test relies on data processed by a cron job, and tries to assert the result before the cron job has finished? Your test fails, and you need to re-run the whole suite again…
…after a few attempts you get your build green. You wasted 3 hours on it, but now you can merge your changes and deliver the feature…
Yay! Life is good!
…Unfortunately, someone else merged something two minutes ago, so you need to merge their changes and run all the tests again…
Instead of running integration tests between all the services, get rid of them. All services communicate through RESTful APIs, which means that if we define a tight “contract” between APIs, we don’t need to spin up the whole platform. It is enough to verify that each service fulfils the requirements of the others.
Example code can be found on Github.
How does it work?
There are two peers: a consumer (client) and a provider (service). As developers, we want to confirm they are compatible with each other, so we define an API contract on the consumer side. The contract must then be enforced on the provider side.
What tools to use?
We decided to use Pact. You might also want to consider Dredd.
What is a consumer?
A consumer is a client that wants to receive some data from another service (for example, a web front end, or a message-receiving endpoint). It defines requirements towards the endpoint, such as HTTP headers, status code, and response payload. The contracts are generated during the unit-test run: after all tests succeed, Pact creates JSON files containing information about the HTTP requests. This is the example contract:
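A representative Pact file for the job-posting platform could look like the following. The consumer and provider names, the endpoint, and the payloads are hypothetical; the structure follows the Pact specification (version 2).

```json
{
  "consumer": { "name": "WebFrontend" },
  "provider": { "name": "UserService" },
  "interactions": [
    {
      "description": "a request to create a user",
      "request": {
        "method": "POST",
        "path": "/users",
        "headers": { "Content-Type": "application/json" },
        "body": { "email": "user@example.com" }
      },
      "response": {
        "status": 201,
        "headers": { "Content-Type": "application/json" },
        "body": { "email": "user@example.com", "active": false }
      }
    }
  ],
  "metadata": { "pactSpecification": { "version": "2.0.0" } }
}
```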
What is a provider?
A provider is a service or server that provides the data (for example, an API on a server that serves the data the client needs, or the service that sends messages). The tool for verifying contracts against the provider is called Pact Provider Verifier. The verifier runs HTTP requests based on the contracts created by the consumer. If the server responds in the form expected by the consumer, the tests pass.
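Conceptually, the verifier replays every interaction from the contract against the provider and compares the real response with the expected one. The stdlib-only sketch below illustrates that idea; Pact Provider Verifier does this over real HTTP, whereas the in-process `provider` function here is a stand-in so the example is self-contained, and all names are hypothetical.

```python
import json

# A simplified sketch of provider verification: replay each interaction's
# request and compare the actual response with the expected one.

CONTRACT = json.loads("""{
  "interactions": [{
    "description": "a request to create a user",
    "request":  {"method": "POST", "path": "/users",
                 "body": {"email": "user@example.com"}},
    "response": {"status": 201,
                 "body": {"email": "user@example.com", "active": false}}
  }]
}""")

def provider(method, path, body):
    """Hypothetical User Service endpoint under verification."""
    if method == "POST" and path == "/users":
        return 201, {"email": body["email"], "active": False}
    return 404, {}

def verify(contract, app):
    """Return the descriptions of all interactions the provider fails."""
    failures = []
    for interaction in contract["interactions"]:
        req, expected = interaction["request"], interaction["response"]
        status, body = app(req["method"], req["path"], req.get("body"))
        if status != expected["status"] or body != expected.get("body"):
            failures.append(interaction["description"])
    return failures
```

Running `verify(CONTRACT, provider)` returns an empty list when the provider meets every expectation; any mismatch in status code or body shows up as a failed interaction.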
If the provider doesn’t meet the expectations, the tests fail…
Hold on a second… so, how do we deliver contracts to all peers?
After all tests on the consumer side succeed, the JSON files containing the contracts are created. Our job is to deliver them to the providers so they can verify the contracts. There are several ways of doing this:
- a Git repository for storing pacts, including them in each project as a git submodule (in my opinion, the best way of doing it),
- a file system.
- the Pact Broker.
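For the git-submodule option, the workflow could look like this. The repository URL and the `pacts` directory name are assumptions; only the git commands themselves are standard.

```shell
# Add the shared pacts repository as a submodule (URL is hypothetical)
git submodule add https://github.com/example/pacts.git pacts
git submodule update --init

# Later, in the provider's CI job, pull the newest contracts
git submodule update --remote pacts
```

Each provider pins a specific revision of the contracts, and updating the submodule pointer becomes an explicit, reviewable change.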
The Pact Broker is an application for sharing consumer-driven contracts and verification results. We can push our contracts there and allow the service providers to download them and run tests against them.
DiUS provides a cloud version of the Pact Broker, so maybe you can take a look and use it. I don’t see too many advantages in running your own Pact Broker; in the end it is yet another service which requires maintenance from DevOps. If you want to run it yourself, here are the Docker images.
I think storing contracts in a separate git repository is good enough. Personally, I would create them during the integration pipeline, and run tests on the service providers within the same integration job (example integration pipeline here).
Why not Swagger?
Swagger is a definition format for documenting APIs. It creates interfaces for developing and consuming an API by mapping all the resources and associated operations. The output is understandable by people (through websites) and machines (through YAML and JSON files). Unfortunately, Swagger wasn’t meant to be used for testing: the mock servers generated by Swagger don’t validate request payloads; the validation is left to the front end.
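For comparison, a minimal Swagger 2.0 definition for the hypothetical user-creation endpoint might look like this. It documents the shape of the API, but nothing here is executed against the provider the way a contract is.

```yaml
# Minimal Swagger 2.0 definition for the hypothetical POST /users endpoint
swagger: "2.0"
info:
  title: User Service
  version: "1.0"
paths:
  /users:
    post:
      summary: Create a new user account
      consumes: [application/json]
      parameters:
        - in: body
          name: user
          schema:
            type: object
            required: [email]
            properties:
              email:
                type: string
      responses:
        "201":
          description: Account created
```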
That doesn’t mean you shouldn’t use Swagger at all; you can, and it is highly recommended. If you work with Microservices, it is very likely that they are developed by many teams and used by other teams (sometimes external ones). Having documentation linked to your APIs makes development faster, because developers don’t need to figure out how to use the endpoints. They can read about them and try them out on the documentation page.
You might want to read more about Swagger Mock Validator, a tool which allows you to verify contracts against Swagger files.
Testing Microservices is a complex problem. There is no magic bullet, nor is there a set of rules that can be applied in every situation. I wrote this article to show you how the team I work with decided to deal with this problem.
I decided to keep all the code on my Github, so the article is easier to read.
If you have any problems or suggestions, please write a comment.