
Zero Downtime Deployment: Upgrade Your Dockerized App With the Blue-Green Technique

by Abram, April 18th, 2023

Too Long; Didn't Read

In the previous article, we talked about how to continuously deploy your Python web application to a production (or pre-dev/staging) environment. How do you go about deploying your Dockerized Python application without causing any downtime? Let’s jump right into it. The most efficient way that has worked for me is the blue-green technique.

In an article I wrote a month ago, I talked about how to continuously deploy your Python web application to a production (or pre-dev/staging) environment.


You can refer to the previous article to deploy your application in any other language. Just be sure to modify the workflow file.


I was recently tasked with building a microservice (using Python and FastAPI) to match two voices and return a prediction score indicating whether they were a match. The stakeholders had requested a voice unlock feature.


We had an engineering meeting, and I stood up to take the task (or my lead stood up for me, haha).


It was an interesting task, as I had never worked (trained or whatnot) with an ML model before. It took me a week to design, build, and ship the code to an AWS EC2 instance. I am a big fan of CI/CD, so I used what I was most comfortable with: GitHub Actions.


A week later… changes were requested, and I wanted to try out a new [deployment] technique that I had been researching. I needed the Dockerized microservice running gracefully on the AWS EC2 instance to not experience any downtime when I redeployed.


And I had the perfect trick up my sleeve.


That trick is: the blue-green technique.


According to the AWS whitepaper on Blue/Green Deployments, it is a deployment strategy in which you create two separate but identical environments.


One environment (blue) is running the current application version and one environment (green) is running the new application version. Using a blue/green deployment strategy increases application availability and reduces deployment risk by simplifying the rollback process if a deployment fails.


Once testing has been completed on the green environment, live application traffic is directed to the green environment and the blue environment is deprecated.


In simple terms, the blue/green deployment technique is a way to reduce downtime and risk by running two identical production environments.


If this is your first time hearing of such a deployment technique, there is absolutely nothing to be afraid of; I will provide you with detailed steps to help you achieve a blue-green deployment.


We shall be using an imaginary product for example purposes, as I cannot walk through the deployment steps with the product I built for my company due to NDA reasons. Haha.


Let’s get right into the steps:


  1. Start by building a new Docker image with your updated code, and tag it with a new version number.


$ docker build -t myexample:v2 .


This will create a new Docker image with the tag myexample:v2 using the Dockerfile in the current directory.


Here, myexample:v2 is the name and tag of the newly built Docker image. In my case, it was the name of the ML project. E.g., docker build -t companyx-servicename-backend:v2


  2. Start a new Docker container from the new image, but don't expose it to the outside world yet.


$ docker run -d --name myexample-v2 myexample:v2


This will start a new Docker container with the name myexample-v2 from the myexample:v2 image.
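

Note that the NGINX configuration in step 4 refers to the containers by name (myexample-v1, myexample-v2). If NGINX itself runs as a Docker container, it can only resolve those names when all the containers share a user-defined Docker network. Here is a minimal sketch of that setup; the network name myexample-net, the config file myexample.conf, and the proxy container name myexample-proxy are all hypothetical:


$ docker network create myexample-net
$ docker run -d --name myexample-v1 --network myexample-net myexample:v1
$ docker run -d --name myexample-v2 --network myexample-net myexample:v2
$ docker run -d --name myexample-proxy --network myexample-net -p 80:80 \
    -v "$(pwd)/myexample.conf:/etc/nginx/conf.d/default.conf:ro" nginx:stable


If NGINX is instead installed directly on the EC2 host, you would publish each container's port (e.g., -p 8000:8000 and -p 8001:8000) and point the upstream servers at 127.0.0.1 instead.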


  3. Wait for the new container to start and initialize, making sure it's functioning properly.


$ docker logs myexample-v2


Use the docker logs command to check the logs of the new container to make sure it has started and initialized properly.
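

Beyond reading the logs, you can confirm the container is actually running before going any further. For example:


$ docker inspect --format '{{.State.Status}}' myexample-v2


This prints running if the container is up. If your image defines a HEALTHCHECK, you can check '{{.State.Health.Status}}' the same way.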


  4. Use a reverse proxy, such as NGINX, to route traffic to both the old and new containers. Configure your reverse proxy to listen for requests, and forward them to both the old and new containers. This will allow you to gradually shift traffic to the new container.


Here's an example of an NGINX configuration that routes traffic between the two containers:


upstream myexample {
    server myexample-v1:8000;
    server myexample-v2:8000 backup;
}

server {
    listen 80;
    server_name myexample.com;

    location / {
        proxy_pass http://myexample;
    }
}


This configuration sets up an upstream group called myexample with two servers: myexample-v1 and myexample-v2. The backup parameter marks the second server as a backup, meaning NGINX only sends requests to it if the primary server is unavailable, so all live traffic still goes to myexample-v1 for now. The proxy_pass directive forwards requests to the myexample upstream group.


Make sure to update the reverse proxy configuration to reflect the name and port of your application.
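

Whenever you change the NGINX configuration, validate it and reload gracefully so existing connections are not dropped:


$ sudo nginx -t
$ sudo nginx -s reload


If NGINX is running as a container (like the hypothetical myexample-proxy above), run the same commands through docker exec, e.g., docker exec myexample-proxy nginx -s reload.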


  5. Gradually shift traffic from the old container to the new container by adjusting the reverse proxy configuration.


To start shifting traffic to the new container, adjust the reverse proxy configuration so that the new container receives live requests. The simplest way is to remove the backup parameter from its server directive:


upstream myexample {
    server myexample-v1:8000;
    server myexample-v2:8000;
}

server {
    listen 80;
    server_name myexample.com;

    location / {
        proxy_pass http://myexample;
    }
}


With both servers active, NGINX will load-balance requests across the two containers (round-robin by default), so the myexample-v2 container now receives live traffic. Monitor your application, and adjust the configuration as needed; once you are confident in the new version, remove myexample-v1 from the upstream block so that all traffic flows to the new container.
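

If you would rather shift traffic more gradually than an even round-robin split, NGINX's weight parameter lets you bias traffic toward the new container. For example, the following (hypothetical) weights send roughly three out of every four requests to myexample-v2:


upstream myexample {
    server myexample-v1:8000 weight=1;
    server myexample-v2:8000 weight=3;
}


Increase the weight of myexample-v2 (or decrease that of myexample-v1) in steps, reloading NGINX after each change, until you are comfortable cutting over completely.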


  6. Once all traffic is flowing to the new container, you can safely stop and remove the old container.


$ docker stop myexample-v1
$ docker rm myexample-v1


This will stop and remove the old container, freeing up resources on the server.
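

If you no longer need the old image either, you can remove it as well; assuming it was tagged myexample:v1:


$ docker rmi myexample:v1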


Conclusion

If your application relies on a relational database, the blue-green deployment strategy may cause inconsistencies between the blue and green databases when updates are made.


To ensure the highest level of data integrity, it's recommended to set up a unified database that's compatible with both past and future versions.
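

One simple way to achieve this is to point both the blue and green containers at the same database instance and keep schema migrations backward-compatible, so either version can run against it during the switchover. A rough sketch, where the DATABASE_URL variable and its value are hypothetical:


$ docker run -d --name myexample-v2 --network myexample-net \
    -e DATABASE_URL="postgresql://app:secret@db.internal:5432/myexample" \
    myexample:v2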


I’m new to this technique and obviously don’t know much about it yet, but I’m going to continue learning about its tradeoffs and about other techniques that might work better. Do you have a trick or two up your sleeve that you’d love to share with me?


I’d appreciate it. Let me have it (in the comment section).


Oh, oh. Don’t forget to subscribe to my boring newsletter. I have learned a lot of things since Q1 and will be sharing them soon. Ciao.