Deploying on AWS Free Tier with Docker and Fabric

by Andrey Nikishaev, March 6th, 2019
In this article I want to summarize everything you will need to build a good dev environment and deployment for a small application. To make this happen we will use the AWS Free Tier (https://aws.amazon.com/free/), Docker (https://www.docker.com/) containers and orchestration, and a Django (https://www.djangoproject.com/) app as a typical project.

The project's source is on GitHub: https://github.com/creotiv/aws-docker-example

Before going further, please install Docker first: https://docs.docker.com/install/linux/docker-ce/ubuntu/

All code runs under Python 3.6.

Docker

Docker is a container virtualization engine that gives you the ability to create cheap and fast environments for production and development use. Containers are not virtual machines: the key idea of containers is to make them as thin as possible, sharing the host kernel instead of emulating hardware. That is why you can't run a Windows container on a Linux system.

Docker image & container

The two main parts of the Docker system are images and containers.

Image — a clean build of an environment, without any working state.

Container — a running/stopped image with some state (DB records created, files uploaded, etc.).

The awesome thing about images is that they are created from just one file (a Dockerfile) and thus can be stored in Git/Docker Hub and easily transferred across the net. This makes CI/CD much simpler and more lightweight.
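To make the image/container distinction concrete, here is a minimal shell session; the image and container names are just examples:

```sh
# Pull an image: a static, stateless build
docker pull python:3.7.2-alpine3.9

# Start a container: a running instance of that image, with its own state
docker run -d --name demo python:3.7.2-alpine3.9 sleep 600

# Images and containers are listed separately
docker images
docker ps -a
```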

Docker compose

In most cases the docker-compose tool is used to run all containers on a local dev/CI/CD machine with the same configuration they have in production. It can also be used for a single-machine deploy scenario (but we will use Docker Swarm for that).

Orchestration



Docker Swarm — a service for container orchestration: basically, it controls running containers on different machines with different configs and manages all the communication between them. Docker Swarm is not the only orchestration engine in existence; there are also Kubernetes, AWS ECS, Azure Container Service, Google Container Engine, and others. For a big production application I recommend Kubernetes, but for this simple demo we will use Docker Swarm, as it is the simplest and a good choice for small/medium projects.
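For reference, the basic Swarm workflow we will rely on later looks like this; the stack name awsdemo and the compose file name follow the demo project, so treat it as a sketch:

```sh
# Turn the current machine into a swarm manager
docker swarm init

# Deploy a stack (a group of services) described by a compose file
docker stack deploy -c docker-compose.prod.yml awsdemo

# Inspect the running services of the stack
docker stack services awsdemo
```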

Testing things locally

Before making any changes on remote machines it’s good to test things locally.

Our application will use 3 containers (Django+Gunicorn, Nginx, and a PostgreSQL instance), and for each of them we need an image.

Making images with Dockerfiles



Before trying to write a big Dockerfile with plenty of installation code, it is good to search Docker Hub (a kind of GitHub for Docker images) first. We will search for Python, Nginx, and PostgreSQL; for example, here is the Python image: https://hub.docker.com/_/python For all our images we will use the latest Alpine Linux variants. We will not write a Dockerfile for PostgreSQL, as we will use the stock image without any modifications.

Dockerfile for App:

<a href="https://medium.com/media/6fad06d5ee97ef80b99340e4fad84a72/href">https://medium.com/media/6fad06d5ee97ef80b99340e4fad84a72/href</a>

Dockerfile for Nginx:

<a href="https://medium.com/media/446b945d5924dea3277ff6213fc7d2d7/href">https://medium.com/media/446b945d5924dea3277ff6213fc7d2d7/href</a>

Now let's try to understand what is happening here.


FROM — sets the image we use as the starting point of our modifications. In our first case it is the python image with the tag 3.7.2-alpine3.9. If we don't want a base image at all, we can start from nothing with FROM scratch.

ENV — sets an environment variable.

RUN — runs a command.

WORKDIR — changes the base directory of the image; if the directory doesn't exist, it is created.

COPY — copies data from the local fs to the image fs.

ENTRYPOINT — the command that will run on each container initialization from this image.

So for the application container we install some needed system libs, then create the app dir at /usr/src/app, copy our app requirements there, install them, and copy in our app dir.


We use our entrypoint to control PostgreSQL initialization (because Swarm controls only container initialization, not the services inside it) and to apply Django updates. Here is another way of controlling startup order: https://docs.docker.com/compose/startup-order/

<a href="https://medium.com/media/7ef6ea72b1cdc1febeae0e3f88231cc6/href">https://medium.com/media/7ef6ea72b1cdc1febeae0e3f88231cc6/href</a>


After we understand how this works, let's build our first images: docker build -t [TITLE] [DIRECTORY WITH DOCKERFILE]


docker build -t awsdemo-app app
docker build -t awsdemo-nginx nginx

Now if we run docker images, we will see our two built images :)

Understanding docker-compose


It is good practice to use different versions of the docker-compose.yml file for different environments. So we have two of them: docker-compose.prod.yml for production and docker-compose.yml for the dev environment (the only differences are DEBUG mode and the code-reload param for Gunicorn).

<a href="https://medium.com/media/bcddba8f543d82f0a64845d5b9dc2dbe/href">https://medium.com/media/bcddba8f543d82f0a64845d5b9dc2dbe/href</a>

First, note that there are many versions of the docker and docker-compose file syntax. We will use the latest, version 3.



Services — set up the containers.

Volumes — set up persistent storage for our data (on the local machine, because we don't want to lose the DB or uploaded files after each container restart).

Networks — set up network connections between containers (in our case we will use the simplest type, a mesh network).








Each service has a name, which is also its domain name inside our network.

Image — the image from which the container will be created. Local images are checked first, then remote ones (Docker Hub, etc.).

Command — the command to run after the container is initialized. Here we run Gunicorn with code hot-reload on port 8000.

Ports — links ports between the container and the local machine.

Volumes — links volumes into the instance. With ./app/:/usr/src/app/ we link our local app dir to the container's app dir, so we can change the code at run time without restarting the container.

Environment — sets the container's env variables.

Depends_on — sets dependencies (initialization order).

Networks — connects the service to networks.

More about docker compose syntax and params here: https://docs.docker.com/compose/overview/

Now let's run docker-compose up and open http://localhost/ to see the result.

If things are working, we can go to the deploy stage.

Deploy

AWS Free Tier


By default, AWS gives you 750 hours per month of an EC2 t3.micro instance. We will run our project on Ubuntu Server 18.04 LTS (HVM), SSD Volume Type - ami-34c14f4a

1. Click 'Launch Instance'.
2. Choose Ubuntu Server 18.04.
3. Choose the t3.micro instance type.
4. Click Next until the security group page. There you will need to add an All TCP rule to the firewall, so you can access any TCP port from the public internet (for production, open only the ports you need).
5. Finish the wizard. At the end it will create an SSH key, which we will use to access the instance.

Deploying with Fabric2

For devops work I would recommend Fabric2 for small/medium projects and Ansible for medium/big projects. But today we will use Fabric2, because it is as simple as a door.

First, let's install it: pip install fabric2

What Fabric does is basically connect to a host over SSH and run commands, which can be defined as tasks. Its entrypoint is fabfile.py, and to run a task you just call fab2 COMMAND

<a href="https://medium.com/media/5a52f2c25cde344e56a27aab3d114ff6/href">https://medium.com/media/5a52f2c25cde344e56a27aab3d114ff6/href</a>


So we have 4 tasks here: install-instance, set-pass, deploy, and pgdump. After we create the instance on AWS, we need to install many things on it before we can run Docker. I used a simple command-per-line approach, but the commands could also live in a shell file that we just run.

Don't forget to change the HOST and KEY_FILE params.

You can find your domain in the AWS EC2 dashboard, in the instance params (Public DNS (IPv4)).

install-instance

1. Line 19: check if Docker is running; if not, run the install ops.
2. Lines 21–28: install the Docker stuff.
3. Line 29: initialize the Docker Swarm main server.
4. Lines 30–33: create an SSH key to be able to pull from the private GitHub repo.
5. Lines 34–36: wait while the user adds the key to their GitHub SSH keys.
6. Line 37: add the GitHub domain to known hosts; otherwise it will ask for confirmation on the first pull.
7. Lines 38–41: create the app dir and clone the project from GitHub.
8. Lines 42–43: build the images.
9. Line 44: add our containers to the swarm. A stack is a group of services that run together.
10. Lines 45–51: wait for the web container and try to set the Django superuser password.

$(docker ps -q -f name=awsdemo_web) — used to get the service's container id. This will not work if the service is replicated with more than one container.
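For example, this is how the container id gets used (the service name awsdemo_web comes from the stack name plus the service name):

```sh
# Grab the id of the container backing the awsdemo_web service...
CID=$(docker ps -q -f name=awsdemo_web)

# ...and run a one-off command inside it
docker exec -it "$CID" python manage.py createsuperuser
```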

deploy


For deploy we just pull all the changes from GitHub and then run a service update. In Swarm mode, updates run without breaking connections: the old container keeps running until all of its in-flight requests have been processed, and all new requests go to the new container instance.
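A hedged sketch of what the deploy task boils down to, with the same assumed names as above:

```python
@task
def deploy(ctx):  # invoked as: fab2 deploy
    c = _conn()
    # Pull the latest code and rebuild the app image
    c.run("cd app && git pull")
    c.sudo("docker build -t awsdemo-app app/app")
    # Rolling update: old containers keep serving until they are drained
    c.sudo("docker service update --image awsdemo-app awsdemo_web")
```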

pgdump

Here we first run pg_dump in the db container, then copy the dumped file from the container to the remote machine's fs, and then use the Fabric2 Transfer class to download it to our local machine.
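Sketched with the same assumptions (service name awsdemo_db, dump path chosen for illustration):

```python
from fabric2.transfer import Transfer

@task
def pgdump(ctx):  # invoked as: fab2 pgdump
    c = _conn()
    # 1. Dump the database inside the db container
    c.sudo("docker exec $(docker ps -q -f name=awsdemo_db) "
           "pg_dump -U postgres -f /tmp/db.dump postgres")
    # 2. Copy the dump from the container to the remote machine's fs
    c.sudo("docker cp $(docker ps -q -f name=awsdemo_db):/tmp/db.dump /tmp/db.dump")
    # 3. Download the file to the local machine
    Transfer(c).get("/tmp/db.dump")
```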

Now let's install all these things:

fab2 install-instance
fab2 set-pass

Now if you go to your AWS domain, you should see the app up and running.

Hope it was useful :)