Container Driven Development

Written by hackernoon-archives | Published 2019/03/17
Tech Story Tags: containers | pytest | docker | python | docker-compose


The intent of this post is to introduce a workflow for developing and testing software which is tightly coupled with a containerized environment. I like to call this workflow Container Driven Development. The core idea behind Container Driven Development is to write, execute and test every line of code inside a containerized environment from the very beginning of the application development cycle.

Typically, we first write and test our applications and then work on containerizing them right before shipping or deployment. For most of us, the first time our application actually runs in a container is in production. This is not ideal as it causes issues with containerization to be discovered too late in the development cycle.

An alternative approach is to introduce containers from the very get-go of the application development process. The biggest advantage this has is that it enables us to develop and test our application in an environment that could be made to look and behave exactly like our production environment.

The remainder of this post uses a toy application to demonstrate how I typically do Container Driven Development. Since I spend most of my time developing in python, the post is specific to a python-based workflow, but a similar approach could be adopted for other programming languages.

Project Setup

Following is my typical directory structure for doing Container Driven Development on a python app called myapp. The most important files in this directory are the docker-compose.yml and the Dockerfile. We will look at the contents of each one of these in detail.

myapp
├── Dockerfile
├── docker-compose.yml
├── myapp
│   ├── __init__.py
│   └── main.py
├── requirements.txt
└── tests
    ├── __init__.py
    └── test_main.py
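For reference, the requirements.txt for a toy app like this might list the test runner and client libraries for the upstream services described below (the package choices here are illustrative, not prescribed by the workflow):

pytest
redis
psycopg2-binary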

Docker Compose

Seldom do I have to write standalone applications. Most of the applications I write have dependencies on other systems, the most ubiquitous of these being a SQL database. With the advent of docker, I rarely find myself installing software. Instead, I pull and use the respective docker containers wherever possible. Docker Compose is a great tool for managing multi-container applications and I make heavy use of it. My typical docker-compose file for an application (myapp in this case) looks like the following:

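Here is a minimal sketch of such a docker-compose.yml, assuming compose file format version 3; the redis and postgres image tags are illustrative:

version: "3"

services:
  myapp:
    build:
      context: .                   # Dockerfile lives next to this file
      args:
        app_name: myapp            # build-time argument consumed by the Dockerfile
    image: myapp:${TAG}            # TAG is supplied as an environment variable
    depends_on:
      - redis
      - postgres
    volumes:
      - .:/myapp                   # mount the source tree into the container

  redis:
    image: redis:5.0               # illustrative tag

  postgres:
    image: postgres:11             # illustrative tag
    environment:
      POSTGRES_PASSWORD: postgres  # the official image requires a password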

The services node in the docker-compose file is the most important. It is here that I describe the build configurations of my application (myapp) and its dependencies.

The first service node in the docker-compose file is myapp. This, as you might have guessed, is the service definition corresponding to the application being built.

The myapp node has four child nodes which are of interest: build, image, depends_on, and volumes.

The build node defines the build configuration. As part of this configuration, I have provided a build-time argument named app_name. We will see how this argument gets used in the Dockerfile momentarily. Finally, I have specified the build context to be the current directory. The context denotes the directory (relative to the docker-compose file) that is sent to the Docker daemon at build time; by default, the Dockerfile is expected to be at the root of this context.
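Under the hood, building this service is roughly equivalent to running docker build by hand (the development tag here is illustrative):

docker build --build-arg app_name=myapp -t myapp:development .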

The image node configures the name of the image being built. In this case, I have defined the name to be of the form myapp:${TAG}, where TAG is an environment variable which will be defined at build time.
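For example, the following produces an image tagged myapp:development:

TAG=development docker-compose build myapp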

The depends_on node defines upstream service dependencies. Services listed as part of this node must be defined in the docker-compose file. The depends_on configuration ensures that the dependent services get pulled in and started prior to the start of the application service. Note that depends_on only waits for the dependent containers to start, not for the services inside them to be ready to accept connections.

Finally, the volumes node ensures that the source code directory gets mounted at an appropriate location within the application container. In this case, the source code directory of myapp is being mounted to the /myapp directory within the application container. Mounting the source code directory into the application container is essential, as it enables me to edit, run and test my code from within the container as opposed to rebuilding the image or copying the source files into the container each time I make edits.

In addition to the myapp service, the docker-compose file also includes service nodes for redis and postgres. Since we are not building these services ourselves, all that is required is the image name and tag for each of them. The respective images will be pulled down from Docker Hub at runtime.
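If you prefer to fetch these images ahead of time rather than on first run, you can pull them explicitly:

docker-compose pull redis postgres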

Dockerfile

My Dockerfiles are usually fairly simple unless I am working on something more esoteric:

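A minimal sketch of such a Dockerfile, assuming a python:3.7 base image and an illustrative CMD:

FROM python:3.7

# Build-time argument; its value (myapp) is supplied by the
# docker-compose file via the build args configuration.
ARG app_name

# Add the source code directory to the container and make it
# the working directory.
ADD . /${app_name}
WORKDIR /${app_name}

# Install the relevant python packages.
RUN pip install -r requirements.txt

# Entry point; this will vary depending on the type of application.
CMD ["python", "-m", "myapp.main"]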

In the Dockerfile above, I simply pull down a base python image, add the source code directory to the container and install the relevant python packages. Notice the ARG app_name statement in the Dockerfile. This declares a build-time argument whose value we are setting in the docker-compose file above, i.e. app_name: myapp. The entry point CMD will vary depending on the type of application, but typically it is something similar to what is shown in the Dockerfile above.

Workflow

Once the docker-compose file and Dockerfile are set up, the workflow from here on is pretty straightforward and boils down to a few commands.

The starting point for me is to get a terminal shell inside my application docker container. This can be done with a simple docker-compose command:

TAG=development docker-compose run myapp bash

The above command does several things. It pulls down (if not already present) and starts up the upstream dependencies of the application (the redis and postgres containers in this case). It builds the application container myapp and tags it with the development keyword. It creates a bridge network between the redis, postgres and myapp containers, and opens up a shell prompt in the myapp container. From here on, I can write and execute code in this shell like I normally would in any other terminal shell. The only difference is that this shell is executing inside an isolated docker container with all my source code mounted and its respective dependencies installed. How cool is that!
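Inside this shell, a typical session might look like the following (the entry point module is illustrative):

pytest tests/            # run the full test suite
python -m myapp.main     # run the application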

I develop code in a VI / TMUX environment. With this workflow, the top pane of my TMUX session has my VI text editor with the source code loaded in, and the bottom pane has a terminal shell running inside the myapp container. I can simply edit code in the text editor and execute it within the container. Life is good :)

Occasionally, instead of getting a terminal shell inside the application container and then executing the test runner, I execute the test runner directly for all tests or a subset of tests via docker-compose:

TAG=development docker-compose run myapp pytest tests/test_main.py

The above command simply starts the application docker container (and the associated container dependencies if not already running), executes the test runner command pytest tests/test_main.py inside the container and exits.
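For completeness, a hypothetical tests/test_main.py for the toy app might look like the following; the greet function is invented for illustration and is not part of any real API:

from myapp.main import greet  # hypothetical function in myapp/main.py

def test_greet():
    # A trivial unit test; real tests would also exercise the redis and
    # postgres dependencies reachable on the compose network.
    assert greet("world") == "Hello, world!"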

In summary, I think developing inside containers from the very beginning of the application development cycle has several advantages, the most notable being the ability to write and test code in a production-like runtime environment. This prevents bugs and issues with containerization from being discovered too late in the game.

Hopefully, the workflow presented in this post makes it a bit easier to make containers an essential component of your development lifecycle.

