Nowadays everyone talks about microservices (fine-grained service-oriented architecture). Some experts, like Martin Fowler, suggest going for a monolith-first approach. One of its benefits is avoiding the chattiness of wrongly chosen boundaries. Hence you can ship early and carve out your domains later, one by one.
There is no silver bullet. Different macro architectures have different trade-offs. There are also other aspects that matter for side projects, for example getting practical experience with a particular kind of architecture. I usually check whether an architecture I am interested in fits my project well. Hence I kill two birds with one stone. There are a lot of great lectures and talks about microservices. But as soon as you start using them, you realize what really matters. Only a few talks and experience reports go into detail about the bad and the ugly parts of their architecture. Companies spend millions to get it done; marketing, reputation and hiring are the reasons those secrets are kept. But as soon as you drink the fourth beer with an experienced developer, you get real insights. For instance an honest answer when you ask:
Would you go for the same architecture again if you could travel back in time, keeping the experience you have right now?
Before I go into the details of my project, I will define microservice-based architecture. Afterwards I will talk about its disadvantages. If you want to hear the pros of microservices, go to YouTube, get a book or google for them.
In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies. (M. Fowler)
I will now introduce my common server setup, followed by a description of my project. Afterwards I will sketch the complexity growth of my services and the experience I gained. In the end I will summarize the bad parts and try to draw conclusions from my learnings.
Usually I rent virtual servers from “oldschool” hosting providers. Right now I am happy with Netcup, a German hosting provider. Their virtual servers are based on KVM, so running Docker works just like on bare metal.
For about a year I have been a fan of cloud solutions like Amazon Web Services (AWS). I can add things like caches or load balancers to applications. This saves time and money, and most notably it reduces the complexity of applications. Cloud solutions come with a drawback: their pricing. You will pay 200% to 400% more compared to a regular hoster. If your load shows high peaks depending on the time of day, scaling on demand might save you money. Maybe…
In an early stage I put everything I need for a side project on one virtual server. Hence other people who are part of that project are not able to harm my other projects. Additionally, each project can scale individually.
As soon as one project gets bigger and requires more resources, I add more servers. Separating the production environment from my CI system and testing environment allows better scaling and increases security. Furthermore, I move the tools for distributed teamwork to a dedicated machine.
For this project I start with the following setup:
This setup costs about 8 € and can handle all infrastructure parts. It serves up to 1,000 simultaneous users for simple applications.
One side project of mine is an online platform for people who love playing multiplayer games. There you can create and join in-game races, and the best players can win prizes.
There is a frontend monolith implemented with React. Additionally there are multiple backend microservices separated by their domain:
In the beginning I started with a races service to manage competitions in general. This service has a couple of endpoints to create, read, update and delete races. Participants are able to join as long as the race has not started yet. Depending on the state of the race (upcoming, running, finished), various other constraints apply.
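To give an idea of what such a constraint looks like, here is a minimal sketch of the join endpoint in a Spring controller. All class and package names are illustrative; the repository and domain types are hypothetical stand-ins, not the original code.

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/races")
public class RaceController {

    private final RaceRepository races; // hypothetical persistence layer

    public RaceController(RaceRepository races) {
        this.races = races;
    }

    @PostMapping("/{raceId}/participants")
    public ResponseEntity<Void> join(@PathVariable long raceId,
                                     @RequestBody Participant participant) {
        Race race = races.findById(raceId)
                .orElseThrow(RaceNotFoundException::new);
        // Joining is only allowed while the race has not started yet.
        if (race.getState() != RaceState.UPCOMING) {
            return ResponseEntity.status(HttpStatus.CONFLICT).build();
        }
        race.addParticipant(participant);
        races.save(race);
        return ResponseEntity.noContent().build();
    }
}
```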
My initial setup was:
First CD pipeline
In the beginning this was quite simple: one frontend, one backend, a quick build and automated tests. Adding features was a charm compared to a big enterprise application. The whole CI pipeline ran for about 15 minutes and a new version was live.
I always try to test everything to have a good feeling as soon as I push code. Therefore I added functional tests that fire REST requests at my service. There is already a task that dockerizes my services. For testing, I start all services using a different container name prefix. Additionally, I change the ports to non-production ports. As soon as all instances are up and running, I can run my tests, which fire requests at my application and expect certain responses.
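A functional test in this style could look like the following sketch. It treats the running container as a black box; the test port and the /races endpoint are assumptions for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class RacesApiFunctionalTest {

    // Non-production test port; the real value comes from the CI setup.
    private static final String BASE_URL = "http://localhost:9080";

    @Test
    public void listingRacesReturnsOk() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/races"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // The suite only starts once all containers report as up.
        assertEquals(200, response.statusCode());
    }
}
```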
To make sure that the whole application works, I added CasperJS-based end-to-end tests. They help to make sure that the frontend works and reveal whether the backend communication operates correctly. I only test the happy path of creating a race.
At this point a full pipeline execution required about 20 minutes.
As I added more and more features, complexity grew. The second service was the stats service. First I copied the existing Spring-based service, changed the port and added new CI steps. Coming from 20 minutes of CI runtime, I was already up to 30 minutes. I copied the build, test, REST integration and end-to-end test tasks. Furthermore, besides the admin frontend, I added a user frontend to access races.
At this point each service had the names and ports of the other services hardcoded. At least it was partly dynamic through environment variables:
requestedServicePort = basePort + requestedServicePortOffset
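In Java this convention might boil down to a small helper like the one below. The BASE_PORT variable name and the default value are made up for illustration.

```java
public final class ServicePorts {

    private ServicePorts() {
    }

    // Each environment (production, test) defines its own base port;
    // each service contributes a fixed, well-known offset.
    public static int portFor(int serviceOffset) {
        int basePort = Integer.parseInt(
                System.getenv().getOrDefault("BASE_PORT", "8000"));
        return basePort + serviceOffset;
    }
}
```

Starting the test system with, say, BASE_PORT=9000 then moves every service out of the way of its production sibling without any further configuration.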
Hence the test system starts separately from production. This also allows me to deploy my application with no downtime: the port redirection is switched as soon as the new application is up and running. At this point the application survived the first user tests successfully, so my team decided to go further.
Time went by and I added the profile service to persist global user statistics. Now the rule of three kicked in. Instead of wiring all those services together, I went for a more scalable and flexible solution: a service discovery called Eureka. I also needed insights into the availability and health of my services. A Spring-based admin dashboard communicates with Eureka and gives me all the data I need.
On my machine it worked very well. It felt like a great system: no static links, no additional configuration. Each service only registers itself at Eureka and asks Eureka for the other services’ IPs and ports. For client-side load balancing I use Ribbon, so the load can be spread across multiple service instances. To achieve fault tolerance I also use Hystrix.
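Put together, the Spring Cloud Netflix wiring looks roughly like the sketch below. The service name stats-service and the StatsClient class are assumptions for illustration; only the annotations are the actual building blocks mentioned above.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;

@SpringBootApplication
@EnableDiscoveryClient // register this service with Eureka on startup
@EnableCircuitBreaker  // enable Hystrix for @HystrixCommand methods
public class RacesServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(RacesServiceApplication.class, args);
    }

    @Bean
    @LoadBalanced // Ribbon resolves logical service names via Eureka
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

@Service
class StatsClient {

    private final RestTemplate restTemplate;

    StatsClient(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    // Hystrix wraps the call in a circuit breaker; if the stats service
    // is down, the fallback keeps the races service responsive.
    @HystrixCommand(fallbackMethod = "statsUnavailable")
    public String fetchStats(long raceId) {
        return restTemplate.getForObject(
                "http://stats-service/stats/{raceId}", String.class, raceId);
    }

    public String statsUnavailable(long raceId) {
        return "{}"; // degraded, but the caller stays alive
    }
}
```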
After adding all these services, my server worked up a sweat. CI tasks started to fail sometimes, deployments took a lot of time, and the machine ran out of memory and swap.
At this point I would have had to invest more time and money to scale vertically and horizontally.
The easiest solution would have been to rent a more expensive virtual server. Anyway, I started to think about what was slowing me down:
All these downsides sound harsh. I am a microservices fan, but as I said, this article only highlights the bad parts.
Summing up the pros and cons, I decided to migrate to a more monolith-like backend. The goal was to merge the races, stats and profile services into a single one. Additionally, I could get rid of the admin and Eureka services and reduce service communication.
It took me about 6 hours. Having different test layers saved me so much time. My first goal, after copying the tests to the “monolith”, was to make the tests work again. After I had fixed all Java-based tests, I did the same with the REST and CasperJS tests.
I also tidied up the CI pipeline a bit. I could get rid of the build task, since an application build was already part of the testing stage. In the end I was back to 15 minutes of build time with a lot of tests and a very safe deployment process.
Afterwards the code looked a lot better. I used domain-driven design for my core concepts. The services I had before the refactoring now live in one service, in separate package structures. This allows me to check the dependencies between packages later on, which enables me to carve out microservices again very quickly as soon as I need better horizontal scalability.
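One way to enforce such package boundaries is an architecture test, for example with ArchUnit (my suggestion here, not necessarily what this project used). The root package and the domain package names are illustrative.

```java
import org.junit.Test;

import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

public class DomainBoundariesTest {

    private final JavaClasses classes =
            new ClassFileImporter().importPackages("com.example.platform");

    @Test
    public void racesDomainDoesNotDependOnStats() {
        // If this rule holds, the races package can be carved out into
        // its own microservice again without touching the stats code.
        noClasses().that().resideInAPackage("..races..")
                .should().dependOnClassesThat().resideInAPackage("..stats..")
                .check(classes);
    }
}
```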
So my learnings for future projects are:
Software development is all about trade-offs. Before developers commit to a new technology, they should get a feeling for its pros and cons, for example by playing around with it. The downsides of microservices are dramatic and can be the reason why projects fail. Handled well, a microservice architecture can be a reasonable decision: it accelerates the development process and gives developers more freedom.
The biggest benefit goes to companies that have many development teams working on one project. Being able to develop and deploy a service independently increases ownership: the developers feel more responsible for what they implement and how. Additionally, they are able to escort a feature from dev to test to production. In combination with, for instance, canary releases, production bugs can be reduced heavily. Scalability is also a characteristic of microservices, but in comparison it is relatively irrelevant.
I hope you enjoyed the read. Feel free to send me feedback and questions using the platform of your choice. I appreciate every follower on Twitter, because this is my main channel of communication with you.