In this article I want to summarize everything you will need to build a good dev environment and deployment pipeline for a small application. To make this happen we will use the AWS Free Tier, Docker containers with orchestration, and a Django app as a typical project.
The project code is on GitHub: https://github.com/creotiv/aws-docker-example
Before going further, please install Docker first: https://docs.docker.com/install/linux/docker-ce/ubuntu/
All code runs under Python 3.6.
Docker is a container virtualization engine that gives you the ability to create cheap and fast environments for production and development use. Containers are not virtual machines: the key idea of containers is to make them as thin as possible, which is why you can't run a Windows container on a Linux system.
The two main parts of the Docker system are images and containers.
Image — a clean build of an environment without any working state
Container — a running/stopped image with some state (DB records created, files uploaded, etc.)
The awesome thing about images is that they are created from just one file (a Dockerfile) and thus can be stored in Git/Docker Hub and easily transferred across the net. This makes CI/CD much simpler and more lightweight.
In most cases the docker-compose tool is used to run all containers on a local dev/CI/CD machine with the same configuration as they run in production. But it can also be used for a single-machine deploy scenario (we will use Docker Swarm for that instead).
Docker Swarm — a service for container orchestration: basically it controls running containers on different machines with different configs and manages all communication between them. Docker Swarm is not the only orchestration engine out there; there are also Kubernetes, AWS ECS, Azure Container Service, Google Container Engine and others. For a big production application I recommend Kubernetes. But for this simple demo we will use Docker Swarm, as it is the simplest and a good choice for small/medium projects.
Before making any changes on remote machines it’s good to test things locally.
Our application will use 3 containers: Django+Gunicorn, Nginx and a PostgreSQL instance. For each of them we need an image.
Before trying to write some big Dockerfile with plenty of installation code, it is good to search Docker Hub (a GitHub for Docker images) first. We will search for Python, Nginx and PostgreSQL. For example, here is the Python image: https://hub.docker.com/_/python. For all our images we will use the latest Alpine Linux variants. We will not write a Dockerfile for PostgreSQL, as we will use the stock image without any modifications.
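You can pull the base images up front to check the tags; the Python tag is the one used later in this article, while the Nginx and PostgreSQL tags are just examples of Alpine variants:

docker pull python:3.7.2-alpine3.9
docker pull nginx:alpine
docker pull postgres:11-alpine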
Dockerfile for App:
(see app/Dockerfile in the example repository)
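The exact file lives in the repository; below is a minimal sketch of what a Dockerfile along these lines might look like. The base image tag matches the one discussed below, while the system packages and the entrypoint.sh name are assumptions for illustration.

FROM python:3.7.2-alpine3.9

# keep Python output unbuffered and skip writing .pyc files
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1

# system libs needed to build psycopg2 on Alpine (assumed package set)
RUN apk add --no-cache postgresql-dev gcc musl-dev

# create the app directory and make it the working dir
WORKDIR /usr/src/app

# install Python dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt

# copy the application code itself
COPY . .

# script that waits for PostgreSQL and runs Django migrations on every start
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]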
Dockerfile for Nginx:
(see nginx/Dockerfile in the example repository)
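Again, the real file is in the repository; a sketch of a minimal Nginx image along these lines could be as simple as this (the config file name is an assumption):

FROM nginx:alpine

# replace the default site config with one that proxies requests
# to the Django/Gunicorn service on port 8000
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d/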
Now let's try to understand what is happening here.
FROM — sets the image which we are using as the starting point for our modifications. In our first case we use the Python image with tag 3.7.2-alpine3.9. If we don't need a base image at all, we can start from an empty one with FROM scratch.
ENV — set an environment variable.
RUN — run a command.
WORKDIR — change the base directory of the image; if the directory doesn't exist, it is created.
COPY — copy data from the local filesystem to the image filesystem.
ENTRYPOINT — the command that will be run on each container start from this image.
So for the application container we install some needed system libs, then create the app dir at /usr/src/app, copy our app requirements there, install them, and copy our app dir.
We use our entrypoint to control PostgreSQL initialization (because Swarm only controls container initialization, not the services inside it) and to run Django updates. Here is another way of controlling service startup order: https://docs.docker.com/compose/startup-order/
(see the entrypoint script in the app directory of the example repository)
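Here is a hedged sketch of what such an entrypoint script might do; the db host name, port and the manage.py commands are assumptions based on the setup described in this article:

#!/bin/sh
# Wait until PostgreSQL accepts connections before starting the app;
# Swarm only controls container startup, not service readiness.
while ! nc -z db 5432; do
  echo "Waiting for PostgreSQL..."
  sleep 1
done

# apply Django migrations on every start
python manage.py migrate --noinput

# finally run whatever command was passed to the container (e.g. gunicorn)
exec "$@"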
After we understand how this works, let's build our first images: docker build -t [TITLE] [DIRECTORY WITH DOCKERFILE]
docker build -t awsdemo-app app
docker build -t awsdemo-nginx nginx
Now if we run docker images we will see our two built images :)
It is good practice to use different docker-compose.yml files for different environments. So we also have two of them: docker-compose.prod.yml for prod, and docker-compose.yml for the dev environment (the only difference is DEBUG mode and the code-reload param for Gunicorn).
(see docker-compose.yml and docker-compose.prod.yml in the example repository)
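The real files are in the repository; here is a rough sketch of what a dev docker-compose.yml along the lines described below might look like. The service names, volume name, database credentials, Nginx port mapping and the Gunicorn module path are assumptions for illustration:

version: "3.7"

services:
  web:
    image: awsdemo-app
    # Gunicorn with code hot-reload on port 8000; the prod file would drop --reload and DEBUG
    command: gunicorn app.wsgi:application --bind 0.0.0.0:8000 --reload
    volumes:
      - ./app/:/usr/src/app/
    environment:
      - DEBUG=1
      - DATABASE_URL=postgres://postgres:postgres@db:5432/postgres
    depends_on:
      - db
    networks:
      - main

  db:
    image: postgres:11-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data/
    networks:
      - main

  nginx:
    image: awsdemo-nginx
    ports:
      - "80:80"
    depends_on:
      - web
    networks:
      - main

volumes:
  pgdata:

networks:
  main: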
First, there are many versions of the docker and docker-compose syntax. We will use version 3, the latest.
Services — setting up containers
Volumes — setting up persistent storage for our data (on the local machine, because we don't want to lose the DB or files after each container restart)
Networks — setting up network connections between containers (in our case we will use the simplest network type, a mesh network)
Each service has a name that is also its domain in our network.
Image — the image from which the container will be created. Local images are checked first, then remote ones (Docker Hub, etc.)
Command — command to run after the container is initialized. Here we run Gunicorn with code hot-reload on port 8000
Ports — linking ports between Docker and the local machine
Volumes — linking volumes to the instance. With ./app/:/usr/src/app/ we link our local app dir to the container app dir, so we can change the code at run-time without a container restart
Environment — setting up container env variables
Depends_on — setting up dependencies (initialization order)
Networks — connecting to networks
More about docker compose syntax and params here: https://docs.docker.com/compose/overview/
Now let's run docker-compose up and open http://localhost/ to see the result.
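For reference, the two compose files are used like this (docker-compose picks up docker-compose.yml by default):

# dev environment, with DEBUG and Gunicorn code reload
docker-compose up

# production-style run of the same stack on a single machine
docker-compose -f docker-compose.prod.yml up -d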
If things are working we can go to deploy stage.
By default AWS gives you 750 hours of an EC2 t3.micro instance. We will run our project on Ubuntu Server 18.04 LTS (HVM), SSD Volume Type - ami-34c14f4a
Click ‘Launch Instance’
Choose Ubuntu server 18.04
Choose t3.micro instance type
Click Next until the security group page. Here you will need to add an All TCP rule to the firewall, so you can access any TCP port from the public internet (for production open only the needed ports)
Then click Next through the remaining steps and launch the instance
At the end it will create an SSH key which we will use to access the instance.
For the DevOps part I would recommend Fabric2 for small/medium projects and Ansible for medium/big projects. But today we will use Fabric2 because it is simple as a door.
First let's install it: pip install fabric2
What Fabric does is basically connect to a host over SSH and run commands that can be defined as tasks. Its entrypoint is fabfile.py, and to run a task you just call fab2 COMMAND
(see fabfile.py in the example repository)
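The real fabfile is in the repository; a stripped-down sketch of its shape might look like this. The connection is built from the HOST and KEY_FILE constants mentioned below; the remote directory layout and the exact update command are assumptions for illustration.

from fabric2 import Connection, task

HOST = "ubuntu@ec2-XX-XX-XX-XX.compute-1.amazonaws.com"  # your instance's Public DNS
KEY_FILE = "~/.ssh/awsdemo.pem"                          # key pair created on instance launch


def _connection():
    # single SSH connection used by every task
    return Connection(HOST, connect_kwargs={"key_filename": KEY_FILE})


@task
def deploy(ctx):
    """Pull the latest code, rebuild the app image and roll the service over."""
    c = _connection()
    with c.cd("app"):
        c.run("git pull")
        c.run("docker build -t awsdemo-app app")
        # --force makes Swarm re-create the containers one by one,
        # so old containers keep serving until the new ones are up
        c.run("docker service update --force awsdemo_web")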
So we have 4 tasks here: install-instance, set-pass, deploy and pgdump. After we create the instance on AWS we need to install many things on it before we can run Docker. I used a simple command-per-line approach, but the commands could also live in a shell file that we just run.
Don't forget to change the HOST and KEY_FILE params
You can find your domain in the AWS EC2 dashboard in the instance params (Public DNS (IPv4))
Besides installing Docker itself, the install-instance task does the following:
Adds the GitHub domain to known hosts, otherwise it will ask about this on the first pull
Creates the app dir and clones the repository from GitHub
Builds the images
Adds our containers to the swarm; a stack is a group of services that run together (the commands are sketched below)
Waits for the web container and tries to set the Django superuser password
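A hedged sketch of those Swarm-related commands as they might run on the instance; the stack name comes from the awsdemo_web service name used below, everything else is an assumption:

# put the instance into Swarm mode (a single-node swarm is fine here)
docker swarm init

# deploy the services described in the prod compose file as a stack
docker stack deploy -c docker-compose.prod.yml awsdemo

# wait until the web service actually has a running container
while [ -z "$(docker ps -q -f name=awsdemo_web)" ]; do sleep 2; done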
$(docker ps -q -f name=awsdemo_web) — used to get the service's container ID. This will not work if we have replication with more than one container per service.
For deploy we just pull all changes from GitHub and then run a service update. Updates in Swarm mode run without breaking connections: the old container keeps running until all in-flight requests are processed, and all new requests go to the new container instance.
Here we first run pg_dump in the db container, then copy the dumped data from it to the remote machine's filesystem, and then use the Fabric2 Transfer class to download the file to our local machine.
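On the remote side that flow boils down to something like this; the container name filter, database name and paths are assumptions:

# dump inside the db container, then copy the dump out to the instance's filesystem
DB_ID=$(docker ps -q -f name=awsdemo_db)
docker exec "$DB_ID" pg_dump -U postgres -f /tmp/dump.sql postgres
docker cp "$DB_ID":/tmp/dump.sql /tmp/dump.sql

# Fabric2's Transfer(connection).get("/tmp/dump.sql", "dump.sql") then downloads it to the local machine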
Now let's install all these things:
fab2 install-instance
fab2 set-pass
Now if you go to your AWS domain you should see the running application.
Hope it was useful :)