Streamline Your Python Backend Deployment with Automated Continuous Integration using Github Actions

Written by abram | Published 2023/03/01
Tech Story Tags: automation | programming | cicd | continuous-deployment | continuous-integration | devops | python | tutorial | web-monetization


This article will not walk you through a first-time deployment to any VPS of your choice. Instead, it focuses on showing you how to continuously automate the deployment of your Python backend application. Let’s get started.

You recently developed and deployed an application, and the client or product team has requested that you add a new feature. Or, you fixed a bug and need the changes reflected on the production server.

What do you do? Bite your fingers? (Uh…) Scratch your head? Say it’s not possible? (Okay, now, c’mon!)

You most probably have the application’s source code on GitHub. If so, there is no reason for you to worry. I will show you how easy it is to automate your application deployment process, continuously.

GitHub Actions

What is GA?

GA probably means the god of Abram. Who knows? No?

Okay.

“GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline. You can create workflows that build and test every pull request to your repository, or deploy merged pull requests to production.“ - docs.github.com

What the above-quoted statement is saying is that we can prep our application (run unit tests, etc.) before deploying it to the dev, staging, pre-production, or even production server.

Actions Workflow

An Actions workflow is a configurable automated process that runs one or more jobs. A workflow is defined by a YAML file checked into your repository and runs when triggered by an event, triggered manually, or on a defined schedule.

Start a new terminal session and write the following command to create the workflows directory:

mkdir -p .github/workflows/

Create a workflow file inside that directory and name it python-deploy.yml. Copy and paste the following configuration into the newly created workflow:

name: Pull Changes & Deploy
on:
    push:
        branches:
          - dev

jobs:
    pull_changes:
        runs-on: ubuntu-latest
        steps:
          - name: Pull New Changes
            uses: appleboy/[email protected]
            with:
                host: ${{ secrets.SSH_HOST }}
                port: ${{ secrets.SSH_PORT }}
                username: ${{ secrets.SSH_USERNAME }}
                key: ${{ secrets.SSH_KEY }}
                script: |
                  cd /path/to/project_dir
                  git pull origin dev

    deploy_backend:
        runs-on: ubuntu-latest
        # run only after pull_changes has finished
        needs: pull_changes
        steps:
          - name: Deploy Backend
            uses: appleboy/[email protected]
            with:
                host: ${{ secrets.SSH_HOST }}
                port: ${{ secrets.SSH_PORT }}
                username: ${{ secrets.SSH_USERNAME }}
                key: ${{ secrets.SSH_KEY }}
                script: |
                  cd /path/to/project_dir
                  # ---------------------------------
                  # INSERT COMMAND(S) HERE
                  # ---------------------------------
Let’s break down the above workflow:

  • name: is what you’d want to call this workflow.

  • on: is the trigger that tells this workflow when to run. In our case, whenever there is a new push to the dev branch.

  • jobs: the jobs you want the workflow to run.

  • pull_changes: the name of the job responsible for pulling new changes onto the server.

  • deploy_backend: the name of the job responsible for deploying our application.

  • runs-on: the runner (a GitHub-hosted virtual machine image, not a Docker image) you want this job to run on. I would recommend ubuntu-latest, since it is likely close to what your code runs on in production.

  • steps: the list of steps you want the job to run.

  • name: the name of the step.

  • uses: appleboy/[email protected]: this action allows you to execute remote SSH commands on whichever environment (dev, staging, pre-prod, prod) you target. You can read more in the action’s documentation on GitHub.

  • with: the configuration inputs required by the action.

  • host: is the IP address of the server you wish to SSH into.

  • port: is the port number of the server you wish to SSH into. The default is 22.

  • username: is the user you want to SSH in as. On an Ubuntu EC2 instance, this is usually ubuntu.

  • key: is the SSH private key for the server, e.g. the contents of the .pem file for an EC2 instance.

  • script: the list of commands you want the action to execute. Replace /path/to/project_dir with the directory where your application code lives, and replace dev with the branch you wish to pull changes from.
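If you do not already have a key pair dedicated to this workflow, one way to create one is sketched below. The file name, user, and host here are hypothetical; substitute your own. The private half of the pair is what goes into the SSH_KEY secret.

```shell
# Generate a dedicated key pair with no passphrase (for non-interactive use)
ssh-keygen -t ed25519 -f github-actions-key -N "" -C "github-actions-deploy"

# Next, authorize the public key on your server and test the connection,
# e.g. (hypothetical user/host):
#   ssh-copy-id -i github-actions-key.pub [email protected]
#   ssh -i github-actions-key [email protected] "echo ok"
```

The contents of github-actions-key (the private key file) are what you paste into the SSH_KEY secret.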

Do not forget to add the secrets: SSH_HOST, SSH_PORT, SSH_USERNAME and SSH_KEY to your repository secrets.
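If you prefer the terminal to the web UI, the official GitHub CLI (gh) can set the same secrets. A sketch, assuming gh is installed and authenticated and that you run it from inside the repository; the host, username, and key path below are placeholders:

```shell
# Set each secret on the current repository
gh secret set SSH_HOST --body "203.0.113.10"
gh secret set SSH_PORT --body "22"
gh secret set SSH_USERNAME --body "ubuntu"

# Read the private key from a file (hypothetical path)
gh secret set SSH_KEY < ~/.ssh/github-actions-key
```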

In the section where you see "INSERT COMMAND(S) HERE", replace the entire comment block with the commands you want to execute.

Framework Deployment Command Examples

1). For Django, update with the following:

# using docker - option: 1
# --------------------------------
# run database migrations, etc
sudo docker-compose run <compose_service_name> python manage.py makemigrations
sudo docker-compose run <compose_service_name> python manage.py migrate

# using gunicorn - option 2
# -----------------------------------
source venv/bin/activate
python manage.py makemigrations
python manage.py migrate
sudo systemctl restart gunicorn # and other services like celery, if any

Replace <compose_service_name> with the name of the service.
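If you are not sure what to put in place of <compose_service_name>, Compose can list the service names it knows about. A quick check, run on the server from the directory that contains your docker-compose.yml:

```shell
# Print the service names defined in docker-compose.yml, one per line
docker-compose config --services
```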

2). For FastAPI, update with the following:

# using docker - option 1
# --------------------------------------------------------------------------
# run database migrations, etc
sudo docker-compose run <compose_service_name> alembic upgrade head


# without docker - option 2
# -------------------------------
# run database migrations, etc
source venv/bin/activate
alembic upgrade head
# then restart your app service if it does not auto-reload

Replace <compose_service_name> with the name of the service.
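Once the workflow has run, you can sanity-check on the server that the migrations actually landed. A sketch, assuming Alembic is configured in the project directory and the virtual environment is active:

```shell
# Revision the database is currently at
alembic current

# Latest revision(s) in your migration scripts; the two should match
alembic heads
```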

3). For Flask, update with the following:

# using docker - option 1
# -----------------------------
# run database migrations, etc
sudo docker-compose run <compose_service_name> alembic upgrade head


# without docker - option 2
# -------------------------------
# run database migrations, etc
source venv/bin/activate
alembic upgrade head
# then restart your app service if it does not auto-reload

Replace <compose_service_name> with the name of the service.

Secrets Setup & Hour of Truth

The hour of truth is about to reveal itself. I will be using a ledger backend system that I built with FastAPI for demonstration purposes.

You can follow along with the application you wish to re-deploy. Go to your repository’s Settings tab, find the “Security” section, and click on “Actions” under secrets.

Click on the button that reads “New repository secret” and add the secrets needed for the action workflow to work.

Remember that you are adding the following: SSH_HOST, SSH_PORT, SSH_USERNAME and SSH_KEY.

Your application source code is ready to be re-deployed. When you make a new push, you are going to see a yellow circle notifier at the far right of the commit. If you click on it, you are going to see the jobs that have been queued to run or that are currently running.

Click on either job’s “Details”. If you added the repository secrets and updated the python-deploy.yml workflow file to fit your needs, you should see that your application has been redeployed successfully.
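You can also watch the run from the terminal with the GitHub CLI instead of clicking around, assuming gh is installed and authenticated for the repository:

```shell
# List the most recent workflow runs for this repository
gh run list --limit 5

# Follow the latest run's progress until it completes
gh run watch
```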

Conclusion

If you made it this far, congratulations.

You trusted in me; and followed me from the start to the very end. I am truly honoured.

The use of Continuous Deployment (CD) enables teams to deliver frequent and fast updates to customers. The implementation of CI/CD results in an enhanced velocity of the whole team, which includes the release of new features and bug fixes.

I must let you know that I will be terminating the EC2 instance to avoid getting billed, haha.

I am active on LinkedIn. Reach out to me if you have any questions, or are looking for a backend engineer to join your remote product/engineering team.

