A guide on deploying a full-stack (nginx, frontend, backend) application on AWS using Docker, Fargate and CloudFormation.

A few weeks ago I started my journey through AWS. Now I have reached a point where I feel confident enough to share that journey and some ideas that might help others.

The reason I am writing this article is that I saw many examples of backend-frontend-nginx stacks on Fargate, but none where you could scale each component independently. In all those examples, if you wanted more backend availability you would need to scale the frontend and nginx with it, because all the containers were defined together and ran as one big service. That was not what I was after.

This guide is structured as follows:

- Part 1: why microservices, the technology, the application code, basic Docker commands and the local development setup (done in a way that lets you reuse your containers both for local development and on the cloud).
- Part 2: design of the AWS stack and deployment using AWS CloudFormation, covered in Independently Scalable Multi-Container Microservices Architecture on AWS Fargate (II) (hackernoon.com).

Requirements for following along:

- A Docker Hub account you can push your containers to.
- An AWS account with a user that has IAM permissions.

You can find the full application code on GitHub: docwhite/ecsfs (github.com).

Microservices

I decided to take the microservices road for my new projects for various reasons, among them:

- Breaking a big application into smaller, autonomous, self-contained projects that can be individually developed and deployed.
- Scaling specific parts of the application becomes very easy, as opposed to scaling a monolithic application.
- If one service fails it doesn't bring the whole application down (or at least it shouldn't 😅).
I am not going to dive into microservices themselves; there are many great articles about them.

Cloud Resources

I did my research and decided to try the Elastic Container Service (ECS), which is the Docker container orchestration service AWS provides. This service has two engines (or launch types, as they call them) that can be used to manage your containers:

- EC2: You run containers on a group of EC2 (Elastic Compute Cloud) instances that you manage. I haven't played with that option myself, but I imagine you need to choose the images, machine specs and further configuration regarding the provisioning of the instances.
- Fargate: There is no need to manage and provision EC2 instances. You simply describe your containers and say how much memory and CPU you desire for each. Amazon does the rest.

This article focuses on the latter.

Our Hello World Multi-Container App

As aforementioned, I didn't want an architecture that wouldn't let me scale individual pieces independently. Instead, what I wanted is:

- Reverse proxy server (nginx) sitting in front. Public facing.
- Frontend. Only reachable by clients through the nginx service.
- Backend. Private and only accessible internally by the frontend and nginx, never directly from outside. I could scale it independently because it would handle expensive computation.

I prepared a barebones setup for each of the three service components so we can focus on the architecture rather than the application itself. I should remark that the application code is not production ready; it is the bare minimum for demonstration purposes. I called the project ECSFS (short for Elastic Container Service Full-Stack) and its folder structure is as follows:

```
backend/
  Dockerfile
  app.py
frontend/
  Dockerfile
  package.json
  server.js
nginx/
  Dockerfile
  default.conf
docker-compose.yaml
stack.yaml
```

Our backend (app.py) is a Flask application that simulates an expensive computation and returns it in a formatted string.

```python
import time
import random

from flask import Flask

app = Flask(__name__)

@app.route('/')
def greet():
    interval = time.time() + 1

    # Simulates some CPU load.
    while time.time() < interval:
        x = 435344
        x * x
        x = x + random.randint(-12314, 10010)

    return 'Hello from the backend. Backend computed %d' % x
```

The frontend (server.js) is an Express server that talks to the backend and presents the computation done by the Python backend wrapped up with a greeting of its own.

```javascript
const express = require('express')
const request = require('request')

const app = express()
const port = 3000

app.get('/', (req, res) => {
  // The frontend greets with the following:
  const message = "Hello from the frontend..."

  // The backend should greet us with "Hello from the backend."
  request('http://ecsfs-backend.local:5000', (err, response, body) => {
    // We send both greetings together on a GET request to /
    res.send(message + " " + body);
  })
})

app.listen(port, () => console.log(`Frontend app listening on port ${port}.`))
```

The only dependencies we need for the frontend are express and request.

```json
{
  "name": "frontend",
  "version": "1.0.0",
  "description": "Frontend mocked up.",
  "private": true,
  "scripts": {
    "start": "node server.js"
  },
  "author": "Ramon Blanquer <blanquer.ramon@gmail.com> (http://www.ramonblanquer.com)",
  "license": "ISC",
  "dependencies": {
    "express": "^4.16.4",
    "request": "^2.88.0"
  }
}
```

Our nginx configuration file simply relays the requests to the frontend. For now I don't bother with any headers or HTTPS configuration, to keep things simple.

```nginx
server {
  listen 80;
  location / {
    proxy_pass http://ecsfs-frontend.local:3000;
  }
}
```

http://ecsfs-backend.local and http://ecsfs-frontend.local will be reachable once we run the containers together with Docker Compose, since they will share the same network.

When we deploy on AWS they will also be reachable, because the services will get the same hostnames (ecsfs-frontend.local and ecsfs-backend.local) through the service discovery configuration.
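Before containerizing anything, it is worth seeing what the backend's simulated load actually costs. Here is a standalone sketch of the same busy-wait loop, runnable without Flask — the helper name `simulated_load` and the shorter 0.2-second interval are my own, for illustration only:

```python
import random
import time

def simulated_load(seconds):
    """Busy-wait like the backend's greet() does, churning the CPU until a deadline."""
    deadline = time.time() + seconds
    x = 435344
    while time.time() < deadline:
        x * x  # throwaway multiplication: pure CPU work
        x = x + random.randint(-12314, 10010)
    return 'Hello from the backend. Backend computed %d' % x

start = time.time()
message = simulated_load(0.2)  # the real backend blocks for a full second
elapsed = time.time() - start
assert elapsed >= 0.2  # the caller is blocked for the whole interval
print(message)
```

Every request pins a worker for the whole interval, which is exactly why the backend is the component we want to be able to scale on its own.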
Containerization with Docker

Let's make a container for each of those services so we can deploy them to AWS.

Containers are isolated instances of a process running with their own environment and needed dependencies (runtime, libraries, system tools, etc.) all contained 📦 within themselves. To make a container for your application, Dockerfiles are used. They act as a blueprint (or instructions, if you prefer) on how to prepare the desired environment "snapshot" we want our application to run in.

You build Dockerfiles and produce an image. An image can be run and produces a container.

Let's dive back into our case scenario. Our backend's Dockerfile looks like this:

```dockerfile
FROM python:alpine
RUN pip install Flask
WORKDIR /app
COPY app.py ./
ENV FLASK_APP app.py
CMD flask run --host=0.0.0.0
```

The FROM command pulls dependencies from another Docker image. We pull an image that provides the runtime and environment needed to run Python applications. Alpine is a minimalistic, lightweight Linux distribution used a lot in containers because it has a quick start-up time.

Then we install Flask, a minimal Python web framework, and copy our files so they are available in the container. We set FLASK_APP, which tells Flask the name of the Python file to run for server creation, and we finally run it with the flask command. The flask executable is available because pip can also install binaries.

You would never run a Flask application in production like this; instead you would wrap it up with gunicorn or waitress.

In order to make an image out of this Dockerfile, simply change directory to where the Dockerfile lives and then run `docker build -t your-username/ecsfs-backend .` (do not miss the period denoting the current folder). That creates an image named `your-username/ecsfs-backend`.

Now run it with `docker run -p 80:5000 your-username/ecsfs-backend` and visit http://localhost. The option -p 80:5000 exposes port 5000 from the container as port 80 on your host machine; the URL in your browser implicitly looks for port 80. You should see the backend working! I get "Hello from the backend. Backend computed 441134."

The Dockerfile for the frontend is very similar:

```dockerfile
FROM node:alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY server.js ./
CMD npm start
```

In this case we use Express, a tiny NodeJS web application framework.

We first copy package.json and the lock file, which describe our application and its dependencies, then install all the required libraries, copy the actual server files, and finally run the Express application.

Every time Docker runs an instruction it makes an intermediate image. If we simply change the server.js file and build the image again, it doesn't have to go through the RUN npm install instruction anymore; Docker is smart and reuses the previous intermediate image — unless the .json files change, of course. Neat! 😍

For nginx it's even simpler:

```dockerfile
FROM nginx:alpine
COPY default.conf /etc/nginx/conf.d/default.conf
```

Just a configuration file is required, and it needs to be copied to the right config folder.

Putting all containers together with Docker Compose

Docker Compose allows multi-container application management. It lets you define all the containers you would like to run concurrently and describe their network properties as well (for example, what ports they expose to the host).

```yaml
version: "3"
services:
  frontend:
    restart: always
    build: ./frontend
    networks:
      ecsfs:
        aliases:
          - ecsfs-frontend.local
  backend:
    restart: always
    build: ./backend
    networks:
      ecsfs:
        aliases:
          - ecsfs-backend.local
  nginx:
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    networks:
      ecsfs:

networks:
  ecsfs:
```

By assigning the same network to all our services we allow them to talk to one another.
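That frontend-to-backend hop over the shared network can be sketched with nothing but the Python standard library: one in-process HTTP server plays the backend, and a plain client call plays the frontend combining the greetings the way server.js does. The loopback address, the OS-assigned port and the fixed number 42 are all stand-ins for this demo:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class FakeBackend(BaseHTTPRequestHandler):
    """Stands in for the Flask service reachable at ecsfs-backend.local:5000."""
    def do_GET(self):
        body = b'Hello from the backend. Backend computed 42'
        self.send_response(200)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

# Port 0 asks the OS for any free port, so the demo never collides.
server = HTTPServer(('127.0.0.1', 0), FakeBackend)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "frontend" fetches the backend greeting and prepends its own,
# mirroring what server.js does with the request library.
backend_reply = urlopen('http://127.0.0.1:%d/' % server.server_port).read().decode()
message = 'Hello from the frontend...' + ' ' + backend_reply
print(message)
server.shutdown()
```

In Compose, resolving ecsfs-backend.local to the right container is done by Docker's embedded DNS; on AWS the same name comes from service discovery, so the application code never has to change.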
The way they can reach each other is by using their service name from the docker-compose.yaml file as the hostname; that is, you could reach http://frontend or http://backend from any of the networked running containers.

If you would like another valid hostname for a service you can add it as an alias. That is what allows us to leave http://ecsfs-frontend.local in the application code without having to worry about whether we are running the app in development (on Docker Compose) or in production (on AWS, where the same hostname results from setting up hosted zones and service names, which we describe in part two).

To run all the containers, change directory to where the docker-compose.yaml file lives and run `docker-compose up`. Then check http://localhost.

Check all the available commands with `docker-compose help`, and keep these ones handy: `docker-compose (build | up | ps | stop | kill | rm)`.

Pushing containers to Docker Hub

When we land on AWS, our containers need to have been pushed to a registry so ECS can fetch them. ECS supports images from Docker Hub and also from AWS's own private registry.

Assuming you have already built the images namespaced with your Docker login name, all you have to do is run `docker push your-username/ecsfs-backend`, and repeat for the frontend and nginx too.

What's next?

We are all set up and ready to start working on how we are going to deploy our stack on AWS:

Independently Scalable Multi-Container Microservices Architecture on AWS Fargate (II)
_A VPC is simply a logically isolated chunk of the AWS Cloud. Our VPC has two public subnetworks since it's a…_hackernoon.com