The microservice architectural style [1] is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies. (Credit: Martin Fowler)
If you are new to microservices, please read Martin Fowler’s article in its entirety.
A stack is a collection of AWS resources that you can manage as a single unit. In other words, you can create, update, or delete a collection of resources by creating, updating, or deleting stacks. All the resources in a stack are defined by the stack’s AWS CloudFormation template.
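A CloudFormation template is just a declarative description of those resources. As a rough sketch of the shape of a template (the parameter, resource, and log-group names here are illustrative, not taken from the actual services stack template):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Example services stack (names are illustrative)

Parameters:
  Environment:
    Type: String
    Default: dev

Resources:
  # One example resource; the real services stack defines the VPC, ECS
  # cluster, ALB, and related resources in the same Resources section.
  ServiceLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub /services/${Environment}

Outputs:
  LogGroupName:
    Value: !Ref ServiceLogGroup
```

Creating, updating, or deleting the stack creates, updates, or deletes everything declared under `Resources` as one unit.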
The services stack is one example of how you can architect, develop, and deploy microservices on AWS, specifically using AWS Virtual Private Cloud (VPC), EC2 Container Registry (ECR), and EC2 Container Service (ECS).
Here is what the services stack architecture looks like.
If you are ready to deploy this stack and start building microservices, then click here.
Let’s learn more about the components that make up this stack.
The first building block is the AWS VPC. Think of the VPC as a security and isolation layer that everything else we deploy lives inside. Within the VPC we have public subnets and private subnets.
For the most part we put everything important (apps, databases, etc.) in the private subnets. Then we put the communication resources (ALBs, gateways) in the public subnets. Communication resources are the things that bridge the gap between our hidden services and the internet.
For example our service goes in the private subnets but our Application Load Balancer (ALB) goes in the public subnets. When inbound requests are made to our service they go through the ALB. The internet can talk to our ALB but can’t talk directly to our service. Only the ALB can talk to our service.
Outbound requests from our service to the internet go through the NAT Gateways, which are also in the public subnets.
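The subnet layout described above can be sketched in CloudFormation roughly like this (CIDR ranges and logical names are illustrative; a real template also needs an internet gateway, route tables, and subnets in multiple availability zones):

```yaml
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16

  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.0.0/24
      MapPublicIpOnLaunch: true   # public: instances get internet-facing IPs

  PrivateSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.1.0/24      # private: no direct route to the internet

  # The NAT gateway lives in the public subnet; the private subnet's route
  # table points 0.0.0.0/0 at it so services can make outbound requests.
  NatEip:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc

  NatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatEip.AllocationId
      SubnetId: !Ref PublicSubnet
```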
AWS ECR is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images.
Think of the container registry as a versioned repository of your build artifacts, ready to run. You simply tell ECS to deploy a given version of a given artifact. We need an ECR repository for each of our services.
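Declaring one repository per service in CloudFormation might look like this (the service names are hypothetical):

```yaml
Resources:
  UsersServiceRepo:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: users-service   # hypothetical service name

  OrdersServiceRepo:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: orders-service
```

Your build pipeline then tags each Docker image with a version and pushes it to the matching repository, and ECS pulls from there at deploy time.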
Containers are a method of operating system virtualization that allow you to run an application and its dependencies in resource-isolated processes. Containers allow you to easily package an application’s code, configurations, and dependencies into easy to use building blocks that deliver environmental consistency, operational efficiency, developer productivity, and version control. Read more about containers here.
Docker is the software that helps you develop, package, and deploy your containers. Read more about Docker containers here.
AWS ECS is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances.
One EC2 instance can run many Docker containers, so a single instance can host several services at once.
Container services make it easier for you to develop, deploy, and scale Docker containers across a set of EC2 instances.
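In CloudFormation, an ECS service is typically an `AWS::ECS::TaskDefinition` plus an `AWS::ECS::Service`. A rough sketch, assuming an ECS cluster defined elsewhere in the template and EC2 instances with dynamic host-port mapping (names, counts, and the container port are illustrative):

```yaml
Resources:
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: users-service          # hypothetical service name
      ContainerDefinitions:
        - Name: users-service
          # Image pulled from the service's ECR repository
          Image: !Sub '${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/users-service:latest'
          Memory: 256
          PortMappings:
            - ContainerPort: 3000
              HostPort: 0            # 0 = let ECS pick a free host port

  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref Cluster          # assumes an AWS::ECS::Cluster elsewhere
      DesiredCount: 2                # ECS keeps two copies running
      TaskDefinition: !Ref TaskDefinition
```

The dynamic host port (`HostPort: 0`) is what lets many services share one EC2 instance without port conflicts.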
AWS ALB operates at the request level (layer 7), routing traffic to targets (containers) based on the content of the request.
In our case the ALB sits inside the public subnets and routes HTTP traffic to our service based on the URI. The ALB also knows which port on each EC2 instance our service is running on, so it can route traffic accordingly.
A common setup is an ALB with routing rules that map URI path patterns to services.
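For example, path-based rules can be declared as `AWS::ElasticLoadBalancingV2::ListenerRule` resources. A sketch, assuming a listener and a target group defined elsewhere in the template (the path pattern and names are hypothetical):

```yaml
Resources:
  UsersRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn: !Ref Listener        # assumes a listener defined elsewhere
      Priority: 1
      Conditions:
        - Field: path-pattern
          Values: ['/users*']           # e.g. /users and /users/123
      Actions:
        - Type: forward
          TargetGroupArn: !Ref UsersTargetGroup
```

Each service gets its own rule and target group, so one ALB can front many services.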
Finally the code. Our service is a simple web application that responds to HTTP requests. It can be in any language or framework. The beauty of this architecture is that the implementation details are not needed by the compute layer. They are encapsulated in the container definition, the Dockerfile.
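A minimal Dockerfile sketch for such a service (this assumes a hypothetical Node.js app; any language or framework works the same way, which is the point):

```dockerfile
# Hypothetical Node.js web service; the compute layer never sees these details.
FROM node:18-alpine
WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application code and declare the port the service listens on
COPY . .
EXPOSE 3000

CMD ["node", "server.js"]
```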
To deploy this stack we use CIM.
Before building anything on AWS I always start with CloudFormation. IaC is a must for all projects built on AWS. Plus it’s really fun and makes you feel like a true architect. CloudFormation has a few pain points, though. That’s why I built CIM.
CIM is a simple command line utility that bootstraps your CloudFormation CRUD operations, making them easier to execute, repeatable, and less error-prone.
If you want to learn more about CIM, and why I built it, you can read my article, Meet CIM — Cloud Infrastructure Manager.
Thanks for reading about the Services Stack. I hope you enjoyed it.