A guide on deploying a full-stack (nginx, frontend, backend) application on AWS using Docker, Fargate, and CloudFormation.
A few weeks ago I started my journey through AWS. Now I have reached a point where I feel confident enough to share it, along with some ideas that might help others.
The reason I am writing this article is that I saw many examples of backend-frontend-nginx stacks on Fargate, but none that let you scale each component independently.
In all those examples, if you wanted more backend availability you had to scale the frontend and nginx along with it, because all the containers were defined together and ran as one big service. That was not what I was after.
This guide is split into two parts. This first part covers the application and how to containerize it; the second part covers deploying it on AWS:
Independently Scalable Multi-Container Microservices Architecture on AWS Fargate (II), on hackernoon.com
Requirements for following along:
You can find the full application code on GitHub.
docwhite/ecsfs: Individually-Scalable Multi-Containerized Microservice Architecture Tutorial on AWS Fargate, on github.com
I decided to take the microservices road for my new projects for various reasons, among them:
I am not going to dive into microservices themselves; there are many great articles about them.
I did my research and decided to try the Elastic Container Service (ECS), which is the Docker container orchestration service AWS provides. This service has two engines (or launch types, as they call them) that can be used to manage your containers: EC2, where you manage the underlying container instances yourself, and Fargate, where AWS provisions and manages them for you.
This article focuses on the latter.
As mentioned above, I didn’t want an architecture that forced the pieces to scale together; what I wanted instead was for each service (nginx, frontend, backend) to be scalable on its own.
I prepared a barebones setup for each of the three service components so we can focus on the architecture rather than the application itself.
Note that the application code is not production-ready; it is the bare minimum for demonstration purposes.
I called the project ECSFS (short for Elastic Container Service Full-Stack) and its folder structure is as follows:
backend/
    Dockerfile
    app.py
frontend/
    Dockerfile
    package.json
    package-lock.json
    server.js
nginx/
    Dockerfile
    default.conf
docker-compose.yaml
stack.yaml
Our backend (app.py) is a Flask application that simulates an expensive computation and returns it in a formatted string.
import time
import random

from flask import Flask

app = Flask(__name__)

@app.route('/')
def greet():
    interval = time.time() + 1
    # Simulates some CPU load.
    while time.time() < interval:
        x = 435344
        x * x
        x = x + random.randint(-12314, 10010)
    return 'Hello from the backend. Backend computed %d' % x
The frontend (server.js) is an Express server that talks to the backend and presents the computation done by the Python backend wrapped up with a greeting of its own.
const express = require('express')
const request = require('request')

const app = express()
const port = 3000

app.get('/', (req, res) => {
  // The frontend greets with the following:
  const message = "Hello from the frontend..."

  // The backend should greet us with "Hello from the backend."
  request('http://ecsfs-backend.local:5000', (err, response, body) => {
    // We send both greetings together on a GET request to /.
    res.send(message + " " + body);
  })
})

app.listen(port, () => console.log(`Frontend app listening on port ${port}.`))
The only dependencies we need for the frontend are express and request.
{"name": "frontend","version": "1.0.0","description": "Frontend mocked up.","private": true,"scripts": {"start": "node server.js"},"author": "Ramon Blanquer <[email protected]> (http://www.ramonblanquer.com)","license": "ISC","dependencies": {"express": "^4.16.4","request": "^2.88.0"}}
Our nginx configuration file simply relays requests to the frontend. To keep things simple, I don’t bother with any headers or HTTPS configuration for now.
server {
    listen 80;
    location / {
        proxy_pass http://ecsfs-frontend.local:3000;
    }
}
http://ecsfs-backend.local and http://ecsfs-frontend.local will be reachable once we run the containers together with Docker Compose, since they will share the same network.
When we deploy on AWS they will also be reachable, because the services will get the same hostnames (ecsfs-frontend.local and ecsfs-backend.local) through the service discovery configuration.
Let’s make a container for each of those services so we can deploy them to AWS.
Containers are isolated instances of a process running with their own environment and needed dependencies (runtime, libraries, system tools, etc.) all contained 📦 within themselves.
To make a container for your application, Dockerfiles are used. They act as a blueprint (or instructions, if you prefer) for how to prepare the environment “snapshot” we want our application to run in.
You build a Dockerfile to produce an image, and running an image produces a container.
Let’s dive back into our case scenario. Our backend’s Dockerfile looks like this:
FROM python:alpine
RUN pip install Flask
WORKDIR /app
COPY app.py ./
ENV FLASK_APP app.py
CMD flask run --host=0.0.0.0
The FROM instruction bases our image on another Docker image; we pull one that provides the runtime and environment needed to run Python applications. Alpine is a minimalistic, lightweight Linux distribution used a lot in containers because of its small size and quick start-up time.
Then we install Flask, a minimal Python web framework, and copy our files so they are available in the container. We set FLASK_APP, which tells Flask which Python file to run to create the server, and we finally run it with the flask command. The flask executable is available because pip can also install binaries.
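If you want to try the backend outside Docker first, here is a quick sketch (assuming you have Python and pip available locally):

# Install Flask, then serve app.py on all interfaces (default port 5000).
pip install Flask
cd backend
FLASK_APP=app.py flask run --host=0.0.0.0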
You would never run a Flask application in production like this; instead, you would wrap it up with gunicorn or waitress.
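As a minimal sketch of what that could look like with gunicorn (this Dockerfile variant is my own assumption, not what we deploy in this guide):

FROM python:alpine
RUN pip install Flask gunicorn
WORKDIR /app
COPY app.py ./
# app:app means: in module app (our app.py), serve the Flask instance named app.
CMD gunicorn --bind 0.0.0.0:5000 app:app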
In order to make an image out of this Dockerfile, simply change directory to where the Dockerfile lives and then run:

docker build -t your-username/ecsfs-backend .

(do not miss the period denoting the current folder). That creates an image named your-username/ecsfs-backend. Now run it with:

docker run -p 80:5000 your-username/ecsfs-backend

and visit http://localhost.
The option -p 80:5000 maps port 5000 in the container to port 80 on your host machine; the URL in your browser implicitly looks for port 80.
You should see the backend working! I get Hello from the backend. Backend computed 441134.
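You can also check from a terminal; assuming you have curl installed, this should print the same greeting:

curl http://localhost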
The Dockerfile for the frontend is very similar:
FROM node:alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY server.js ./
CMD npm start
In this case we use Express, a tiny NodeJS web application framework.
We first copy package.json and the lock file, which describe our application and its dependencies, then install all the required libraries, then copy the actual server files, and finally run the Express application.
Every instruction Docker runs produces an intermediate image (a layer). If we simply change the server.js file and build the image again, Docker doesn’t have to go through the npm install instruction anymore; it is smart and reuses the previous intermediate image, unless the .json files change, of course. Neat! 😍
For nginx it’s even simpler:
FROM nginx:alpine
COPY default.conf /etc/nginx/conf.d/default.conf
Only a configuration file is required; it just needs to be copied into the right config folder.
Docker Compose allows multi-container application management. It lets you define all the containers you would like to run concurrently and describe their network properties as well (for example, which ports they expose to the host).
version: "3"services:frontend:restart: alwaysbuild: ./frontendnetworks:ecsfs:aliases:- ecsfs-frontend.localbackend:restart: alwaysbuild: ./backendnetworks:ecsfs:aliases:- ecsfs-backend.localnginx:restart: alwaysbuild: ./nginxports:- "80:80"networks:ecsfs:
networks:ecsfs:
By assigning the same network to all our services we allow them to talk to one another. They can reach each other by using their service names from the docker-compose.yaml file as hostnames; that is, you could reach out to http://frontend or http://backend from any of the networked running containers.
If you would like other valid hostnames for a service, you can add them as aliases. That’s what allows us to leave http://ecsfs-frontend.local in the application code and not have to worry about whether we are running the app in development (Docker Compose) or in production (on AWS, where the same hostname results from setting up hosted zones and service names, which we describe in part two).
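As a quick sanity check of those aliases, you can hit the frontend from inside the nginx container (a sketch, assuming the stack is already up; the alpine-based images ship with busybox’s wget):

# Resolve the alias over the shared network and fetch the frontend response.
docker-compose exec nginx wget -qO- http://ecsfs-frontend.local:3000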
To run all the containers, change directory to where the docker-compose.yaml file lives and run:

docker-compose up

Then check http://localhost. You can list all the available commands with docker-compose help; keep these ones handy:

docker-compose (build | up | ps | stop | kill | rm)
When we move over to AWS land, our containers need to be pushed to a registry so they can be fetched from there. ECS supports images from Docker Hub and also from AWS’s own private registry (ECR).
Assuming you have already built the images namespaced with your Docker Hub username, all you have to do is `docker push your-username/ecsfs-backend`, and repeat for the frontend and nginx images too.
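Putting it all together, building and pushing the three images could look like this (a sketch; the names simply follow the your-username/ecsfs-* convention used above):

# Build each image from its own folder (run from the project root).
docker build -t your-username/ecsfs-backend ./backend
docker build -t your-username/ecsfs-frontend ./frontend
docker build -t your-username/ecsfs-nginx ./nginx

# Authenticate against Docker Hub and push.
docker login
docker push your-username/ecsfs-backend
docker push your-username/ecsfs-frontend
docker push your-username/ecsfs-nginx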
We are all set up and ready to start working on how we are going to deploy our stack on AWS.
Continue with part two: Independently Scalable Multi-Container Microservices Architecture on AWS Fargate (II), on hackernoon.com