Anyone can make a tiny service. It’s the first thing you do when you write your “Hello World” Express server. If you’ve made a “Hello World” Express server before, then congratulations! You’ve made a microservice!
It’s in going from one to many that the questions start piling up.
How should they talk to each other? REST? RPC? Messaging? And why choose one over another?
Let’s say you do manage to wire up a small system of a few services, and you chose REST because it seemed easiest and you already knew it. Time to deploy them somewhere. If you’ve tried running microservices on a Platform as a Service (PaaS) like Modulus or Heroku, this can get pretty time consuming, and pretty expensive.
Not to mention the latency from communicating over the internet instead of a private network. That’s a no-go. Oh, and you’ll need to provide environment variables so each service knows what endpoint to ping.
Then you think about deployments. Deployments are always a pain, and now you need to do it seven times?!
The above scenario is basically what my first attempt looked like. I was clever though, I used a unidirectional data flow! Too bad that’s only one of the pieces.
What about logging? And monitoring? Where do you store secrets? Auto-scaling?
You finally realize that, on top of just wanting to make tiny services, you now need to go learn how to build your own infrastructure in AWS.
Suddenly, your tiny services don’t feel so tiny.
The following tools, when combined, have the potential to greatly increase the speed at which you develop services, the speed at which you deploy them, and the ease of running them in any environment. As a bonus, once they’ve been codified, they are quite reusable.
Most of the learning curve is tracking down resources, learning each tool, and understanding how they interface with each other.
10 Puzzle Pieces of an Effective Microservice Architecture
1. Containerization
You can’t effectively build microservices without containerization. There are just too many pieces. I mean, you can, but it’s seriously going to be way more work.
Imagine if you wanted to buy some Oreos and each cookie came separately. It would be, well, less convenient to say the least.
When it comes to services, containers simplify development, testing, deployments, and running in production. All things you need to do all day, every day, as your job, so you might as well make them easier.
Services need containers, just like Oreos.
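To make that concrete, here is a hedged sketch of a Dockerfile for a small Node service. The base image tag, port, and entry file are assumptions; adjust them to your app.

```dockerfile
# Hypothetical Dockerfile for a small Node service.
FROM node:18-alpine
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the rest of the source and declare how the service runs.
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
```

Once every service ships as an image like this, development, testing, and production all run the exact same artifact.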
2. A cluster
But you didn’t want just Oreos, did you? You also wanted to make grilled cheese and Tomato Soup, because it is frigid outside. And maybe some deodorant and toothpaste so you aren’t a smelly unsociable human, so you got that too. And of course, beer.
Imagine if you had to carry all of your groceries home individually. It doesn’t make sense. So, you put them in bags. You put heavy ones on the bottom, so they don’t squish the bread, and cold ones (for the bros) with other cold stuff so they can all be cold together. Some stuff you double bag, for redundancy.
Bags are like servers, you need more than one, and they hold your services.
Having more than one allows you to avoid failure scenarios, like if an availability zone went down. Or, your soup can broke and the paper bag ripped. Good thing you double bagged it! Give yourself a clap. You can use the button on the bottom. ;)
You can get your servers at any of the major cloud providers. The important thing here is codifying that infrastructure. There are a few different ways to do this. In the past, many people used Ansible for this, but given the requirement of fast scaling, it doesn’t make sense to wait for a server to be brought up and provisioned from scratch. Instead, you can “bake” an image of the machine with all of the tooling installed, and then use that image with Terraform or CloudFormation. Once you’ve designed your cluster for resiliency, a rolling update with a new AMI will let you update your entire infrastructure.
Servers: AWS, Azure, Digital Ocean
Tools: CloudFormation, Terraform, Packer
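The “bake an image, then roll it out” approach can be sketched in Terraform. This is an illustrative fragment, not a complete config: the AMI variable, instance type, and zone names are all placeholders.

```hcl
# Sketch: launch cluster nodes from a pre-baked AMI (built with Packer)
# so new servers come up already provisioned -- no waiting on Ansible runs.
resource "aws_launch_template" "node" {
  name_prefix   = "cluster-node-"
  image_id      = var.baked_ami_id   # AMI produced by Packer
  instance_type = "t3.medium"
}

resource "aws_autoscaling_group" "cluster" {
  desired_capacity   = 3
  min_size           = 3
  max_size           = 6
  availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]

  launch_template {
    id      = aws_launch_template.node.id
    version = "$Latest"
  }
}
```

Spreading the group across availability zones is the double-bagging: if one zone rips, the soup survives. Updating `baked_ami_id` and rolling the group replaces every node with the new image.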
3. An Orchestrator
Now, you are pretty good at bagging stuff, but you live in New York, and you paid $82 for these grilled cheese ingredients and beer, and now you’re gonna bag them yourself? Forget about it! You’re already saving money by cooking at home! There is a bagger who bags stuff non-stop, all day every day, and they’re, like, really good at it!
An orchestrator is like the grocery bagger.
The orchestrator knows the memory requirements of your services, and how much CPU needs to be allocated, and she carefully places each service into the appropriate server.
Tools: Docker Swarm, Kubernetes
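Those memory and CPU requirements are exactly what you declare to the orchestrator. A hedged Docker Swarm stack fragment might look like this (the image name and numbers are illustrative):

```yaml
# Sketch of a Swarm stack file: resource reservations and limits tell
# the orchestrator how to place each service onto the right server.
version: "3.8"
services:
  orders:
    image: myorg/orders:1.0.0   # hypothetical image
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "0.50"
          memory: 256M
        reservations:
          cpus: "0.25"
          memory: 128M
```

With reservations declared, the scheduler does the bagging: it only places a replica on a node that has room for it.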
4. Continuous Deployment
I don’t think I can take this grocery thing any further. Something about UberEats? I don’t know. Comment with your ideas if you have some! Let’s just get down to brass tacks. Deployments are time consuming, and you have better things to be doing.
Automate it, automate it, automate it.
This directly contributes to the business making money. The whole Lean Startup methodology is about build, measure, learn, and repeating that cycle. Continuous Deployment lets you move through it faster. The faster you can get things out, the better, because then you can measure, learn, and know what to build next.
I personally think you should strive to automate everything to the point where you can confidently commit to master. Git flow is too cumbersome when repeated 15 times across a bunch of tiny services. Make branches when you need them, but it’s OK to just commit to master if you have the proper safety checks in place.
Tools: Jenkins (With pipeline and Blue Ocean), TravisCI, CircleCI
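As a sketch of what “commit to master and let the pipeline do the rest” looks like, here is a minimal declarative Jenkinsfile. The image name and stage commands are placeholders for your own build, test, and deploy steps.

```groovy
// Minimal declarative Jenkins pipeline sketch: every commit is built
// and tested; commits to master are deployed automatically.
pipeline {
  agent any
  stages {
    stage('Build') {
      steps { sh 'docker build -t myorg/orders:$GIT_COMMIT .' }
    }
    stage('Test') {
      steps { sh 'npm ci && npm test' }
    }
    stage('Deploy') {
      when { branch 'master' }
      steps { sh 'docker service update --image myorg/orders:$GIT_COMMIT orders' }
    }
  }
}
```

Because the same pipeline file lives in every service’s repository, adding service number fifteen costs almost nothing.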
5. A Reverse Proxy
Your services should be secure. A great way to secure them?
Don’t expose your services directly to the internet!
Instead, use a public facing reverse proxy. However, it needs to be one that is easily configurable, ideally, one that configures itself based on some configuration options defined on the service, so the team developing the service can decide best how to interface with the proxy without having to ssh into servers.
Tools: HAProxy, nginx, docker-flow-proxy
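A bare-bones nginx fragment shows the shape of this: only the proxy listens publicly, and it forwards to services by their internal DNS names. The service name, path, and port here are made up.

```nginx
# Sketch: the proxy is the only public entry point; services stay on
# the private network and are reached by internal DNS name.
server {
    listen 80;

    location /orders/ {
        proxy_pass http://orders-service:3000/;  # hypothetical internal name
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Tools like docker-flow-proxy generate blocks like this from labels on the service itself, which is what lets teams configure routing without ever ssh-ing into the proxy host.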
6. Message Queue
Services should communicate in a universal language. A message queue can be used as the glue that delivers messages to all subscribed clients. Also, it should not be the responsibility of a service to know where on the internet your other services are running; if they move, you’d need to know about it.
Sure, you could use a distributed key-value store like Consul and keep it up to date to avoid having to constantly reconfigure your services, or better still, use an orchestrator’s DNS features and stick with REST. But that’s still a lot of extra responsibility and configuration, and it leaves your services tightly coupled.
Use one-way fire and forget messages to make your services loosely coupled, and easy to scale.
Instead, using a message queue, you can simply send or publish messages, in a “fire and forget” manner. Other services can easily be added or removed without affecting the original publishing service. This makes your architecture very easily scalable as well. Also, with features like retries, and error queues, you can stress less about services crashing. The message will simply be retried when the service comes back up.
Tools: RabbitMQ, ActiveMQ, Amazon MQ, ZeroMQ
7. Centralized Logging
With messages flowing across many nodes and many replicas, potentially being moved around by the orchestrator, it’s not viable to ssh into a server and check the logs. You need to ship them to a centralized location where they can easily be searched and used to drive insights into how code is functioning in production.
See everything in one place.
Furthermore, it should not be the responsibility of a service to know how to ship logs. Instead, always log to stdout or stderr and use a tool to collect all of those logs by running it globally on each node in your cluster. The log collector tools will then ship the logs to your Centralized Logging subsystem.
Tools: ELK Stack (ElasticSearch, Logstash, Kibana)
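For example, a service’s entire logging responsibility can be a single structured JSON line on stdout; a node-level collector picks it up and ships it. The field names in this helper are just a sketch.

```javascript
// Log a structured JSON line to stdout. The service never knows where
// logs end up -- a collector running on each node ships them onward.
function logEvent(level, message, fields = {}) {
  const entry = { ts: new Date().toISOString(), level, message, ...fields };
  console.log(JSON.stringify(entry));
  return entry; // returned to make the helper easy to test
}

logEvent('info', 'order processed', { orderId: 42 });
```

Because every service logs in the same shape, the centralized store can index and search across all of them uniformly.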
8. Monitoring and Alerting
Similarly to logging, you don’t want to be chasing services around to see what their usage statistics are. Instead, they should all be collected and shipped to a centralized location.
Use alerting to drive scaling or recovery events before alerting humans.
You also don’t want to be chasing down that information if you don’t have to be. Ideally, you should only ever hear a peep when something is wrong. This means you need an alerting tool that can respond to the metrics collected from monitoring and alert a user. Better yet, it can alert a microservice to trigger a scaling event, or attempt to resolve the issue in other ways to avoid requiring human interaction.
Tools: Prometheus, Alertmanager, Grafana
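“Only hear a peep when something is wrong” translates to alerting rules like this hedged Prometheus fragment. The metric name, threshold, and labels are illustrative.

```yaml
# Sketch of a Prometheus alerting rule: fire only when the 5xx error
# rate stays above 5% for 10 minutes, then hand off to Alertmanager.
groups:
  - name: service-health
    rules:
      - alert: HighErrorRate
        expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "5xx error rate above 5% for 10 minutes"
```

The same mechanism can route to an automation hook instead of a pager, which is how a scaling or recovery event gets triggered before a human is ever woken up.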
9. A meta repository
Most people have only heard of monorepos, or multi-repos, both of which fulfill certain requirements. However, with microservices, you tend to have a lot of repositories, which can get to be a lot to handle. Using a meta-repository solves this issue by creating a parent repository and a command line tool to execute commands across many repositories at once. To learn more about meta repos, check out my article: Mono-repo or multi-repo? Why choose one, when you can have both?
Tools: meta, gitslave
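With the `meta` tool, the parent repository is just a `.meta` file mapping directories to repositories. The repository names below are hypothetical.

```json
{
  "projects": {
    "orders-service": "git@github.com:myorg/orders-service.git",
    "users-service": "git@github.com:myorg/users-service.git",
    "deploy": "git@github.com:myorg/deploy.git"
  }
}
```

A command like `meta git pull` then runs across every listed repository at once, which is what keeps fifteen tiny repos from feeling like fifteen chores.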
10. A Microservices Mindset
An architect’s job is not done when the infrastructure is built. For the architecture to achieve its benefits, the team needs to embrace microservices. The goal of all this tooling is to remain flexible: thanks to the loose coupling of the message bus and containerization, each service can use whatever language is best for the job. If your team just goes and builds monoliths and calls them microservices, it’s not going to work. They need to be educated about microservice design patterns and their benefits, so they can approach building with confidence, and less experimentation.
For more details on microservice patterns, see my other article, “Learning these 5 microservice patterns will make you a better engineer,” where I teach you how to categorize your services and talk about the overarching patterns and how they relate to Domain-Driven Design, CQRS, and Event Sourcing.
My goal has been to give you a high level overview of the components of a microservice architecture. Being an architect is a difficult journey with a lot of tools to learn, however, leveraging the power of the architecture as a whole will give you development superpowers!
I’m currently developing a new course Microservice Driven Design, where you can learn how to build a modern microservices architecture using all of the above components. To be the first to access, sign up here!
Interested in hearing my DevOps journey, without useless AWS certifications? Read it now on HackerNoon. DevOps is a prerequisite to effective microservices!
Thanks for reading! If you found this useful, please give me some claps! It would really help me reach more people!