In this step-by-step blog post, we will illustrate how to integrate a Python Flask application with Docker and run it in a Kubernetes cluster.
Before proceeding, make sure that your environment satisfies these requirements. Start by installing the following dependencies on your machine.
The application we will use in this post is a simple Python application that wraps the OpenWeatherMap weather API. It exposes two HTTP endpoints: a root endpoint (/) that returns a static message, and a /<city>/<country>/ endpoint that returns the current weather for the given city.
The complete application source code is shown below. The application simply forwards incoming requests to the https://samples.openweathermap.org weather API endpoint and responds with the data retrieved from it.
from flask import Flask
import requests

app = Flask(__name__)

# Sample API key used with the samples.openweathermap.org endpoint
API_KEY = "b6907d289e10d714a6e88b30761fae22"

@app.route('/')
def index():
    return 'App Works!'

@app.route('/<string:city>/<string:country>/')
def weather_by_city(city, country):
    # Forward the request to the OpenWeatherMap sample endpoint
    url = 'https://samples.openweathermap.org/data/2.5/weather'
    params = dict(
        q=city + "," + country,
        appid=API_KEY,
    )
    response = requests.get(url=url, params=params)
    data = response.json()
    # Flask 1.1+ serializes a returned dict to a JSON response automatically
    return data

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5000)
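Before containerizing the application, you can sanity-check it by running it directly (this assumes the flask and requests packages are already installed locally):
$> python app.py
$> curl http://0.0.0.0:5000/
The second command, issued from another terminal, should print App Works!.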
Dockerizing a Python application is a straightforward task. To do this, we need to introduce the following files to the project. First, a requirements.txt that pins the application's dependencies:
certifi==2019.9.11
chardet==3.0.4
Click==7.0
Flask==1.1.1
idna==2.8
itsdangerous==1.1.0
Jinja2==2.10.3
MarkupSafe==1.1.1
requests==2.22.0
urllib3==1.25.7
Werkzeug==0.16.0
Next, a Dockerfile that describes how the application image is built:
FROM python:3
# Send Python output straight to the terminal so it shows up in container logs
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
WORKDIR /app
# Copy and install the requirements first to benefit from Docker layer caching
COPY requirements.txt /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# Copy the rest of the application code into the image
COPY . /app
EXPOSE 5000
CMD [ "python", "app.py" ]
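Optionally, since COPY . /app copies the entire project directory into the image, you may want a .dockerignore file next to the Dockerfile to keep local artifacts out of the build context; the entries below are a minimal sketch, not part of the original project:
__pycache__/
*.pyc
.git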
We can now build the Docker image of our application using the command below (note the trailing dot, which sets the build context to the current directory):
$> docker build -t weather:v1.0 .
We can run the application locally using the Docker CLI as shown below:
$> docker run -dit --rm -p 5000:5000 --name weather weather:v1.0
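You can verify that the container is up and inspect the application logs with:
$> docker ps --filter name=weather
$> docker logs weather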
Alternatively, we can use a Docker Compose file to manage the build and deployment of the application in a local development environment. For instance, the Compose file below takes care of building the Docker image and deploying the application:
version: '3.6'
services:
  weather:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/app
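Note that the volumes entry bind-mounts the project directory over /app inside the container, so code edits on the host are immediately visible in the container without rebuilding the image; this is convenient for development but should not be relied on in production images.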
The application can then be built and started with docker-compose:
$> docker-compose up
Once the application is running, a curl command can be used to retrieve the weather data for London, for instance:
$> curl http://0.0.0.0:5000/london/uk/
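When you are done testing, the stack can be stopped and removed with the matching Compose command:
$> docker-compose down
Passing -d and --build to docker-compose up runs the stack in the background and forces a rebuild of the image, which is handy while iterating on the Dockerfile.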
Running services directly with the docker command, or even with docker-compose, is not recommended for production because neither is a production-ready orchestration tool: they will not ensure that your application runs in a highly available mode, nor will they help you scale it. To illustrate the last point, Compose is limited to a single Docker host and cannot run services across a cluster of machines.
As a result, there is a need for other solutions that provide such features. One of the best-known and most widely used is Kubernetes, an open-source project for automating the deployment, scaling, and management of containerized applications. It is used by companies and individuals around the world for reasons such as its self-healing, autoscaling, and rolling-update capabilities.
Kubernetes is a distributed system that integrates several components and binaries. This makes it challenging to build production clusters; at the same time, running Kubernetes in a development environment will consume most of the machine's resources, and maintaining a local cluster is difficult for developers.
This is why there is a real need to run Kubernetes locally in an easy and smooth way: a tool that helps developers keep focusing on development rather than on maintaining clusters.
There are several options for achieving this; the top three are Minikube, MicroK8s, and Docker for Mac. To install MicroK8s on a yum-based distribution such as CentOS, run:
$> sudo yum install epel-release
$> sudo yum install snapd
$> sudo systemctl enable --now snapd.socket
$> sudo ln -s /var/lib/snapd/snap /snap
$> sudo snap install microk8s --classic
If you use a different Linux distribution, you can find the installation instructions on the following page.
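Once the installation finishes, you can confirm that MicroK8s is ready and talk to it through its bundled kubectl (on older MicroK8s releases the command is microk8s.kubectl instead of microk8s kubectl):
$> microk8s status --wait-ready
$> microk8s kubectl get nodes
To use Minikube instead, on macOS (or any system with Homebrew) you can install and start it with: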
$> brew install minikube
$> minikube start
Once Minikube, MicroK8s, or Docker for Mac is installed and running, you can start using the kubectl command line to interact with your Kubernetes cluster.
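For example, the following commands (assuming kubectl is installed and its current context points at your local cluster) list the nodes and print basic cluster information:
$> kubectl get nodes
$> kubectl cluster-info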
The above tools make it easy to bootstrap a development environment and test your Kubernetes Deployments locally. However, they do not implement every feature supported by Kubernetes, and not all of them are designed to support multi-node clusters.
Minikube, MicroK8s, and Docker for Mac are great tools for local development. For testing and staging environments, however, highly available clusters are needed to exercise the application under production-like conditions.
On the other hand, running Kubernetes clusters 24/7 for a testing environment can be very expensive. You should make sure to run your cluster only when needed, shut it down when it is no longer required, and recreate it when it is needed again.
Using Cloudplex (disclaimer: the author is the Founder and CEO at Cloudplex), creating, running, terminating, and recreating clusters is as easy as pie. You can deploy your first cluster for free. In a few minutes your cluster is up and running; you can save its configuration, shut it down, and recreate it when needed.
In part II of this series, we are going to see how to deploy our application to a Kubernetes testing cluster. We will create the Kubernetes Deployment for our Flask application and use Traefik to manage our Ingress and expose the application to external traffic.
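As a preview, a minimal Deployment manifest for the weather image could look like the sketch below; the name, labels, and replica count are illustrative assumptions, not the manifest we will build in part II:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: weather            # illustrative name, not from part II
spec:
  replicas: 2              # assumed replica count for high availability
  selector:
    matchLabels:
      app: weather
  template:
    metadata:
      labels:
        app: weather
    spec:
      containers:
        - name: weather
          image: weather:v1.0   # the image built earlier in this post
          ports:
            - containerPort: 5000
Note that for a local cluster the image must be made available to the cluster's container runtime (for example, via minikube image load weather:v1.0).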