Ashan Fernando

Software Architect

How Containers Affect DevOps

Today, we no longer talk about development and operations in isolation. DevOps actively combines these two, which is an essential factor in the modern software lifecycle. Along the way, Docker containers have also become popular due to the benefits they offer for DevOps. Containers affect DevOps mainly in two ways.

DevOps for Containerized Applications

First, if the software application uses containers, DevOps requires specific steps to build and deploy those containers. Let’s look at the typical lifecycle of a containerized application. In any containerized application, the application code and the container blueprint (e.g., Dockerfile) reside in the same code base. When either the container blueprint or the application code changes, a new container image must be built. Then, we need to store the container image in a container registry like DockerHub.
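As a sketch of this setup, here is a minimal, hypothetical Dockerfile for a Node.js application, kept in the same repository as the application code (the base image and file names are assumptions, not from any particular project):

```dockerfile
# Dockerfile — the container blueprint, versioned alongside the app code
FROM node:18-alpine
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install --production
# Copy the application code itself
COPY . .
CMD ["node", "server.js"]
```

When this file or the code changes, a command like `docker build -t myuser/my-app:1.0.1 .` produces the new image, and `docker push myuser/my-app:1.0.1` stores it in the registry (here, a hypothetical DockerHub account `myuser`).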
So now you understand that for each new deployment, you need to deploy a new version of the container image that also includes the application code. Think of it as building a fresh virtual machine image, including the application code, for every change in the source code. The good thing here is that container images are lightweight: mere megabytes, compared to the gigabytes required for a virtual machine image. Therefore, building and moving container images is more straightforward than doing the same with virtual machine images. I hope I have made my point that deploying containers is different from typical application deployments inside hosted virtual machines.
Now, let’s dive into the details of deploying a container image. To understand this, we also need to look deeper into container runtime environments. These environments, typically called container clusters, are handled by a piece of software known as a container orchestrator.
This abstraction is needed because we don’t want to deploy a container image locked into a fixed virtual machine (or physical server); decoupling the two gives us high availability, fault tolerance, and scalability, as well as portability of the containers. Remember, these containers are lightweight, which is what makes moving them between nodes practical.
You may have heard the names Kubernetes, Docker Swarm, or Mesos: these are some of the popular container orchestrators available. If we take Kubernetes, for instance, there are container application platforms that offer built-in support to set up a Kubernetes cluster in minutes. These platforms, such as Microsoft Azure, OpenShift, and AWS, provide a wide range of features and APIs to simplify the DevOps lifecycle of containers. Handling the underlying complexities of provisioning and managing clusters and cluster nodes, and even providing advanced support for container lifecycle management, are some of their unique selling points. Most of these platforms also offer private container registry support and built-in CI/CD pipelines.
The orchestrator typically handles the complexity of deploying new container images to a cluster: it pulls the images from the container registry and provisions containers from them on the cluster nodes.
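As a sketch, assuming a Kubernetes cluster with an existing deployment named `my-app` (all names and tags here are hypothetical), triggering a rollout of a new image version is a single instruction to the orchestrator:

```shell
# Point the deployment at the new image; the orchestrator pulls it
# from the registry and rolls out new containers across the cluster
kubectl set image deployment/my-app my-app=myuser/my-app:1.0.2

# Watch the rollout progress until all replicas run the new image
kubectl rollout status deployment/my-app
```

The orchestrator replaces the old containers gradually, so the application stays available during the deployment.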
Note that it’s not just one container we are talking about here. It could be tens or even hundreds of different containers running in a single cluster.
So if we take the entire lifecycle described above, it involves a unique set of DevOps operations: building container images, provisioning containers, and maintaining clusters at each step in the lifecycle. In the following section, I’ll dive into more detail regarding these steps from the perspective of DevOps.

Using Containers for DevOps

The second way that containers affect DevOps is that some DevOps operations can utilize containers to make them more efficient.
Building and Publishing Container Images
Building containers is an essential part of DevOps. In development environments, if the container blueprint (e.g., Dockerfile) changes, rebuilding the container is mandatory. Otherwise, it is possible to have an optimized setup where only the application code is built and pushed into the container running in the development environment. Either way, it is essential to focus on speeding up the container build step by using scripts and automation.
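One way to achieve such an optimized development setup is to bind-mount the local source directory into a running container, so code changes reach the container without rebuilding the image (the image name and paths below are hypothetical):

```shell
# Mount the local source tree into the container; the image only
# needs rebuilding when the Dockerfile itself changes
docker run -d --name my-app-dev \
  -v "$(pwd)/src:/app/src" \
  myuser/my-app:dev
```

Edits to files under `src` are then immediately visible inside the container, which pairs well with tools that reload the application on change.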
For container clusters, you need to build the container images outside the development environment, on a build server. This is typically the job of the CI/CD pipeline: properly configured, it can automatically build the container images and publish them to the container registry as its initial steps.
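As a sketch of such a pipeline, here is a hypothetical GitHub Actions workflow (one possible CI/CD tool; the account name and secret are assumptions) that builds the image and publishes it to DockerHub on every push to the main branch:

```yaml
name: build-and-publish
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build the image, tagged with the commit SHA for traceability
      - run: docker build -t myuser/my-app:${{ github.sha }} .
      # Authenticate against the registry using a stored secret
      - run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u myuser --password-stdin
      # Publish the image to the container registry
      - run: docker push myuser/my-app:${{ github.sha }}
```

Tagging each image with the commit identifier makes it easy to trace a running container back to the exact source revision it was built from.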
Some container registries, like DockerHub, simplify the process by providing automated image builds. For example, DockerHub provides a clickable option to connect the source code repository (e.g., GitHub) as a trigger to rebuild the image on any code modification. Publishing a built container image to a registry is quite straightforward, since registries typically provide command-line tools or APIs to support it.
Previously, we discussed building the application container image in a host machine. A host machine is typically a development machine or a build server where we have already installed the relevant Docker and application-specific compilation tools.
However, it is also possible to build the container image inside another container, which is one of the use cases for using containers for DevOps.
For instance, using a container to build another container supports cross-platform building of the application code and the container image, with exactly the same build environment on development machines and build servers.
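One common form of this is a multi-stage Dockerfile, where the first stage is itself a build container: the application compiles inside it with a pinned toolchain, so every machine that runs the build gets an identical environment. A hypothetical Go example (image tags and paths are assumptions):

```dockerfile
# Stage 1: build inside a container with a pinned toolchain,
# so development machines and build servers compile identically
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .

# Stage 2: copy only the compiled binary into a minimal runtime image
FROM alpine:3.19
COPY --from=builder /bin/app /bin/app
CMD ["/bin/app"]
```

The final image contains only the runtime artifacts, keeping it small, while the heavyweight build tools stay in the discarded builder stage.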
Besides building application container images inside containers, we can also use containers to run continuous integration (CI) tools like Jenkins.
A more effective way of executing tests is to do so before merging new code changes into the primary source code repository. With Git source control, the best place to run these tests is when a pull request is sent. If any test case fails, the pipeline should automatically report the status to the pull request and prevent it from merging.
Deploying the Container Image to a Cluster
As discussed in the first section of this article, deploying containers to a cluster basically requires invoking the relevant underlying container platform APIs or orchestrator APIs.
The complex task of scheduling containers is typically the responsibility of a container orchestrator. Orchestrators let us define rules to handle this scheduling complexity. These rules comprise the following:
- How many instances of a particular container image to run at a time.
- Internal networking rules required to connect with other containers.
- Volumes mounted to the containers.
- Rules specific to container scheduling and lifecycle management on different nodes in the cluster.
- Rules specific to internal container resource management.
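In Kubernetes terms, rules like these end up in a declarative manifest that we hand to the orchestrator. A minimal sketch (all names and values are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                       # how many container instances to run
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app                 # label used by networking rules (Services)
    spec:
      containers:
        - name: my-app
          image: myuser/my-app:1.0.1
          resources:                # internal resource management rules
            limits:
              cpu: "500m"
              memory: 256Mi
          volumeMounts:             # volumes mounted to the container
            - name: data
              mountPath: /data
      volumes:
        - name: data
          emptyDir: {}
```

We apply this manifest once, and the orchestrator continuously works to keep the cluster matching it, rescheduling containers onto healthy nodes as needed.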
Though this may seem complicated, from a DevOps perspective it is a luxury that the orchestrator can handle these complexities while we only need to trigger the deployment instruction.
One use case is using containers as hosts to coordinate the build and deployment of containerized application changes. For example, Jenkins running inside a Docker container can drive CI/CD. Having multiple instances of Jenkins in containers is especially useful when setting up various CI/CD environments to manage different software projects.
However, this comes with a cost: you’ll need to mount an external volume to keep previous build results across container restarts.
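A minimal sketch of this setup, using the official `jenkins/jenkins` image (the volume name is a hypothetical choice):

```shell
# Create an external volume so Jenkins data outlives the container
docker volume create jenkins-data

# Run Jenkins in a container, persisting its home directory
# (job definitions and build history) onto the external volume
docker run -d --name jenkins \
  -p 8080:8080 \
  -v jenkins-data:/var/jenkins_home \
  jenkins/jenkins:lts
```

If the container is recreated, for example to upgrade Jenkins, pointing the new container at the same volume restores all previous build results.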
Although I haven’t delved much into post-deployment DevOps operations, containers could also play a significant role there.
For instance, we could deploy agent containers to monitor other containers; they work as sidecars, performing cross-cutting operations like log streaming, health checks, and resource monitoring.
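For example, a Kubernetes pod could pair the application container with a log-streaming sidecar that tails a shared log volume (the container names, images, and paths here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-with-sidecar
spec:
  containers:
    - name: my-app                  # the application container
      image: myuser/my-app:1.0.1
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-streamer            # sidecar: streams the app's logs
      image: busybox
      command: ["sh", "-c", "tail -F /var/log/app/app.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs                    # shared between app and sidecar
      emptyDir: {}
```

Because both containers share the pod's volume and lifecycle, the sidecar sees the application's logs without any changes to the application itself.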

Summary

As you can see, containerized applications benefit from DevOps, and vice versa. Since this is an emerging area, both in terms of application architecture and of DevOps, new tools and technologies continuously come out to make things more efficient. Therefore, it is essential to keep an eye on how the existing solutions evolve over time.



Comments

August 3rd, 2019

That was a great read! You made a number of valid points in your comparisons with virtual machines.
I’ve been working with Amazon ECS and ECR alongside CodePipeline/CodeDeploy.
The combination of these services has enormously simplified a majority of the tasks that used to take much more time to accomplish, which goes to show how important CI/CD is within the overall equation.
AWS Fargate seems truly superb. I haven’t worked with the service yet, but with the wide array of cutting-edge features that it brings to containers, it only seems logical. If only the pricing was a bit lower :sweat_smile:

August 3rd, 2019

@Jordan Totally agree with you. Lots of things have been happening around containers recently, and I have also been using Amazon ECS for a while. How it has evolved is truly amazing. With respect to AWS Fargate, it’s feasible to run short-running processes with the current pricing, but using it for long-running jobs is a bit far off until prices go down :slight_smile: In fact, it is also good for executing CI/CD tasks, where AWS Lambda is also a competitive option.
