Ramon Blanquer

@eulersson

Independently Scalable Multi-Container Microservices Architecture on AWS Fargate (I)

January 23rd 2019
A guide on deploying a full stack (nginx, frontend, backend) application on AWS using Docker, Fargate and CloudFormation.

A few weeks ago I started my journey through AWS. Now I've reached a point where I feel confident enough to share that journey and some ideas that might help others.

The reason I am writing this article is that I saw many examples of backend-frontend-nginx stacks on Fargate, but none where you could scale each component independently.

In all those examples, if you wanted more backend availability you had to scale the frontend and nginx with it, because all the containers were defined together and ran as one big service. That was not what I was after.

This guide is structured as follows:

  • Part 1: why microservices, the AWS technology involved, the application code, basic Docker commands, and a local development setup (arranged so you can reuse your containers both for local development and on the cloud).
  • Part 2: design of the AWS stack and deployment using CloudFormation.

Requirements for following along:

You can find the full application code on GitHub.

Microservices

I decided to take the microservices road for my new projects for various reasons, among them:

  • Breaking a big application into smaller autonomous self-contained projects that can be individually developed and deployed.
  • Scaling specific parts of the application becomes very easy as opposed to scaling a monolithic application.
  • If one service fails it doesn’t bring the whole application down (or at least it shouldn’t 😅).

I'm not going to dive deeper into microservices here; there are many great articles about them.

Cloud Resources

I did my research and decided to try the Elastic Container Service (ECS), which is the Docker container orchestration service AWS provides. This service has two engines (or launch types, as AWS calls them) that can be used to manage your containers:

  • EC2: You run containers on a group of EC2 (Elastic Compute Cloud) instances that you manage yourself. I haven't played with that option, but I imagine you need to choose the machine images, instance specs, and further configuration regarding the provisioning of the instances.
  • Fargate: There is no need to manage or provision EC2 instances. You simply describe your containers and state how much memory and CPU you want for each. Amazon does the rest.

This article focuses on the latter.

Our Hello World Multi-Container App

As mentioned, I didn't want an architecture that wouldn't let me scale individual pieces independently. What I wanted instead was:

  • Reverse proxy server (nginx) sitting in front. Public-facing.
  • Frontend. Only reachable by clients through the nginx service.
  • Backend, which I could scale independently because it would handle expensive computation. Private and only accessible internally by the frontend and nginx, never directly from the outside.

I prepared a barebones setup for each of the three service components so we can focus on the architecture rather than the application itself.

I should remark that the application code is not production-ready; it's just the bare minimum for demonstration purposes.

I called the project ECSFS (short for Elastic Container Service Full-Stack) and its folder structure is as follows:

backend/
    Dockerfile
    app.py
frontend/
    Dockerfile
    package.json
    package-lock.json
    server.js
nginx/
    Dockerfile
    default.conf
docker-compose.yaml
stack.yaml

Our backend (app.py) is a Flask application that simulates an expensive computation and returns it in a formatted string.

import time
import random

from flask import Flask

app = Flask(__name__)

@app.route('/')
def greet():
    interval = time.time() + 1

    # Simulates some CPU load.
    while time.time() < interval:
        x = 435344
        x * x
        x = x + random.randint(-12314, 10010)

    return 'Hello from the backend. Backend computed %d' % x
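To see why the backend is the piece worth scaling independently, the busy-wait can be run on its own: each request pins a CPU core for about a second. Here is a small self-contained sketch (the simulate_load helper and its seconds parameter are mine, not part of the app):

```python
import time
import random

def simulate_load(seconds=1.0):
    """Busy-wait like the backend's greet() view: spin until the
    deadline, recomputing x, so the process burns CPU for roughly
    `seconds` before returning the final value."""
    deadline = time.time() + seconds
    x = 435344
    while time.time() < deadline:
        x = x + random.randint(-12314, 10010)
    return x

start = time.time()
result = simulate_load(0.2)
elapsed = time.time() - start
print('took %.2fs, computed %d' % (elapsed, result))
```

Ten concurrent requests therefore need roughly ten cores' worth of capacity, which is exactly the kind of pressure you want to absorb by adding more backend tasks only, leaving nginx and the frontend alone.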

The frontend (server.js) is an Express server that talks to the backend and presents the computation done by the Python backend wrapped up with a greeting of its own.

const express = require('express')
const request = require('request')

const app = express()
const port = 3000

app.get('/', (req, res) => {
  // The frontend greets with the following:
  const message = "Hello from the frontend..."
  // The backend should greet us with "Hello from the backend."
  request('http://ecsfs-backend.local:5000', (err, response, body) => {
    // We send both greetings together on a GET request to /
    res.send(message + " " + body);
  })
})

app.listen(port, () => console.log(`Frontend app listening on port ${port}.`))

The only dependencies the frontend needs are express and request.

{
  "name": "frontend",
  "version": "1.0.0",
  "description": "Frontend mocked up.",
  "private": true,
  "scripts": {
    "start": "node server.js"
  },
  "author": "Ramon Blanquer <blanquer.ramon@gmail.com> (http://www.ramonblanquer.com)",
  "license": "ISC",
  "dependencies": {
    "express": "^4.16.4",
    "request": "^2.88.0"
  }
}

Our nginx configuration file simply relays requests to the frontend. For now I won't bother with any headers or HTTPS configuration, to keep things simple.

server {
    listen 80;

    location / {
        proxy_pass http://ecsfs-frontend.local:3000;
    }
}
http://ecsfs-backend.local and http://ecsfs-frontend.local will be reachable once we run the containers together with Docker Compose, since they will share the same network.
When we deploy on AWS they will also be reachable, because the services will resolve the same hostnames (ecsfs-frontend.local and ecsfs-backend.local) through the service discovery configuration.
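For reference, a less minimal config would typically also forward the standard proxy headers so the frontend can see the original client and host. This is just a sketch of what that might look like; the proxy_set_header directives are standard nginx, but they are not needed for this demo:

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://ecsfs-frontend.local:3000;
        # Let the upstream see the original host and client address.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```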

Containerization with Docker

Let’s make a container for each of those services so we can deploy it to AWS.

Containers are isolated instances of a process running with their own environment and needed dependencies (runtime, libraries, system tools, etc.) all contained 📦 within themselves.

To make a container for your application, Dockerfiles are used. They act as a blueprint (or instructions, if you prefer) for preparing the environment “snapshot” we want our application to run in.

Building a Dockerfile produces an image; running an image produces a container.

Let’s dive back into our case scenario. Our backend’s Dockerfile looks like this:

FROM python:alpine
RUN pip install Flask
WORKDIR /app
COPY app.py ./
ENV FLASK_APP app.py
CMD flask run --host=0.0.0.0

The FROM instruction bases our image on another Docker image. We pull an image that provides the runtime and environment needed to run Python applications. Alpine is a minimal, lightweight Linux distribution used a lot in containers because of its small size and quick startup time.

Then we install Flask, a minimal Python web framework, and copy our files so they are available in the container. We set FLASK_APP, which tells Flask the name of the Python file to run, and finally start the server with the flask command. The flask executable is available because pip can also install binaries.

You would never run a Flask application in production like this; instead you would wrap it up with gunicorn or waitress.
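As a sketch of what a production-leaning variant might look like, you could swap the development server for gunicorn in the Dockerfile. The gunicorn dependency and the app:app module path are my assumptions here, based on the app.py above; this is illustrative, not part of the demo project:

```dockerfile
FROM python:alpine
# gunicorn is an assumed extra dependency, not used elsewhere in this guide.
RUN pip install Flask gunicorn
WORKDIR /app
COPY app.py ./
# Serve the `app` object from app.py, bound to all interfaces on port 5000.
CMD gunicorn --bind 0.0.0.0:5000 app:app
```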

In order to make an image out of the backend's Dockerfile, simply change directory to where the Dockerfile lives and run docker build -t your-username/ecsfs-backend . (do not miss the period denoting the current folder). That creates an image named your-username/ecsfs-backend.

Now run it with docker run -p 80:5000 your-username/ecsfs-backend and visit http://localhost.

The option -p 80:5000 maps port 80 on your host machine to port 5000 inside the container; the URL in your browser implicitly looks for port 80.

You should see the backend working! I get Hello from the backend. Backend computed 441134.

The Dockerfile for the frontend is very similar:

FROM node:alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY server.js ./
CMD npm start

In this case we use Express, a tiny Node.js web application framework.

We first copy package.json and the lock file, which describe our application and its dependencies, then install all the required libraries, copy the actual server files, and finally run the Express application.

Each instruction in a Dockerfile produces an intermediate image (a layer). If we change only the server.js file and build the image again, Docker doesn't have to go through the npm install instruction anymore; it is smart and reuses the previous intermediate image, unless the .json files change, of course. Neat! 😍
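Here is the same frontend Dockerfile again, annotated with how the layer cache behaves (the comments are mine; the instructions are unchanged):

```dockerfile
# Each instruction below produces a cached layer; changing one line
# invalidates that layer and every layer after it.
FROM node:alpine
WORKDIR /app
# Copy only the dependency manifests first...
COPY package.json package-lock.json ./
# ...so the expensive install re-runs only when the manifests change.
RUN npm install
# Editing server.js invalidates just these cheap final layers.
COPY server.js ./
CMD npm start
```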

For nginx it’s even simpler:

FROM nginx:alpine
COPY default.conf /etc/nginx/conf.d/default.conf

Only a configuration file is required, and it needs to be copied into the right config folder.

Putting all containers together with Docker Compose

Docker Compose allows multi-container application management: you define all the containers you would like to run concurrently and describe their network properties as well (for example, which ports they expose to the host).

version: "3"
services:
  frontend:
    restart: always
    build: ./frontend
    networks:
      ecsfs:
        aliases:
          - ecsfs-frontend.local
  backend:
    restart: always
    build: ./backend
    networks:
      ecsfs:
        aliases:
          - ecsfs-backend.local
  nginx:
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    networks:
      ecsfs:
networks:
  ecsfs:

By assigning the same network to all our services we allow them to talk to one another. They can reach each other by using their service name from the docker-compose.yaml file as the hostname; that is, you could reach http://frontend or http://backend from any of the networked running containers.

If you would like another valid hostname for a service, you can add it as an alias. That's what allows us to keep http://ecsfs-frontend.local in the application code and not worry about whether we are running the app in development (Docker Compose) or in production (on AWS, where the same hostname results from setting up hosted zones and service names, which we describe in part two).

To run all the containers, change directory to where the docker-compose.yaml file lives and run docker-compose up. Then check http://localhost.

Check all the available commands with docker-compose help, and keep these handy: docker-compose (build | up | ps | stop | kill | rm).

Pushing containers to Docker Hub

When we move to AWS land, our containers need to be pushed to a registry so they can be fetched. ECS supports images from Docker Hub and also from Amazon's own private registry (ECR).

Assuming you have already built the images namespaced with your Docker Hub username, all you have to do is docker push your-username/ecsfs-backend, and repeat for the frontend and nginx too.

What’s next?

We are all set up and ready to start working on how we are going to deploy our stack on AWS.
