Comparing Container Pipelines
I run Dev Spotlight, where we create technical content for companies like Heroku, Rollbar, and others.
Containers brought a monumental shift to DevOps by allowing teams to ship code faster than ever before. However, we still have to go through the process of building, packaging, and deploying those containers. That's why we use container pipelines.
However, there are many different choices when it comes to container pipelines. How do we know which one to use? In this article, we'll compare six choices, and cover the configuration, benefits, limitations, and pricing of each.
What Are Container Pipelines?
First, let's talk about what a container pipeline really is. Pipelines help to automate individual stages in the software development process, particularly continuous integration and continuous delivery (CI/CD).
Container pipelines automate each of the stages in the container deployment process, from building the initial image to deploying to production.
Typically, the entire container pipeline consists of three stages:
- Integration: changes are checked into source control, triggering the build process and unit tests.
- Acceptance testing: the container is deployed to a test environment and verified for functionality.
- Deployment: the final, fully-tested image is deployed to production.
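As an illustration only (pseudo-configuration, not the syntax of any particular tool), these stages map naturally onto a pipeline definition:

```yaml
# Illustrative pseudo-configuration of the three stages above
stages:
  integration:            # runs on every commit to source control
    - build_image
    - run_unit_tests
  acceptance:             # container deployed to a test environment
    - deploy_to_staging
    - run_acceptance_tests
  deployment:             # fully tested image goes to production
    - deploy_to_production
```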
Container pipeline tools typically cover at least two of these three stages, though exactly what each tool offers varies.
Now let's look at our six choices.
1. Heroku
Heroku is a complete container pipeline leveraging Docker. You can build, test, validate, and deploy containers all on the same platform without having to provision hardware or juggle multiple service providers.
Heroku applications are configured using a heroku.yml
manifest, which defines the steps required to build and deploy a container. Once the manifest is in place, switch the app to the container stack and commit the file:
$ heroku stack:set container
$ git commit -m "Add heroku.yml"
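The heroku.yml manifest itself might look like the following minimal sketch, assuming the app is built from a Dockerfile at the repository root (the run section is optional; without it, the Dockerfile's CMD is used):

```yaml
build:
  docker:
    web: Dockerfile
run:
  web: npm start   # hypothetical start command; omit to fall back to the Dockerfile's CMD
```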
Heroku also supports pipelines
, which allow you to deploy a container to multiple different environments to mirror the stages in a continuous delivery workflow. For example, you could use pipelines to test changes in a staging environment before deploying to production.
Benefits and Limitations
Heroku is extremely easy to use, requiring just a single YAML file for the entire pipeline. It’s fully managed, provides multiple environments for testing and deploying changes, and even lets you roll back changes in case of a bad deployment.
However, not all of Heroku’s features support Docker deployments. For instance, you can’t use Heroku CI
to run your application’s test suite, which means either running the test suite while building the image or using multi-stage builds
. You also can’t use pipeline promotions
to promote a container from one pipeline stage to the next. Instead, you must redeploy the container to the target stage.
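As a sketch of the multi-stage approach (assuming a hypothetical Node.js app with an npm test suite), tests can run in a builder stage so that a failing suite aborts the image build:

```dockerfile
# Build stage: install all dependencies and run the test suite
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm test          # a failing test suite fails the build here

# Runtime stage: ship only what production needs
FROM node:18-slim
WORKDIR /app
COPY --from=build /app ./
RUN npm prune --omit=dev
CMD ["node", "index.js"]
```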
Heroku offers a free plan
with 1,000 free runtime hours per month for one web dyno and one worker dyno. Paid plans start at $7 per dyno/month and provide additional features such as larger-capacity dynos and improved scalability. For more information, see Heroku’s pricing page.
Heroku is a very easy and cost-effective container pipeline solution. It gives you full control over the CI/CD process while providing a fully managed environment. With a free tier and free standard support
, it's worth trying out.
2. Azure DevOps
Azure DevOps is Microsoft’s all-in-one service for project management, source code management (SCM), and CI/CD. It allows you to control nearly every stage in the DevOps lifecycle while offering many advanced container-specific features, including private container registries
and integration with Azure Kubernetes Service (AKS). Azure Pipelines
provides the platform’s CI/CD service.
All of Azure DevOps can be managed using the web-based user interface, but you can also configure Azure Pipelines using a YAML-based manifest checked into your application source code. The web UI lets you manage and track deployment environments and release versions, artifacts, and more.
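As an illustration, a minimal azure-pipelines.yml that builds and pushes a Docker image on each commit might look like this (the repository name and service connection are hypothetical placeholders):

```yaml
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: Docker@2
    inputs:
      command: buildAndPush
      containerRegistry: my-registry-connection   # hypothetical service connection name
      repository: myteam/myapp                    # hypothetical image repository
      Dockerfile: Dockerfile
      tags: $(Build.BuildId)
```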
Benefits and Limitations
If your team already uses Azure, then Azure DevOps is a natural extension to your existing workflow. It supports both managed and on-premise installations, and also supports a number of Azure deployment targets including Azure App Service, Kubernetes, and Azure Functions.
However, integrating with other services (including Azure services) isn’t straightforward. Configuring an integration in Azure DevOps requires you to copy and paste values, even from services like Azure Container Registry, making it feel less cohesive and more difficult to set up.
Azure Pipelines offers a free tier with one free concurrent CI/CD job and 1,800 minutes per month. Additional parallel jobs cost $40 per month each, and hosting artifacts (such as images) costs $2 per GB per month. Additional services, like Azure Boards, come with additional monthly fees. To learn more, visit the Azure DevOps Services pricing page.
Azure DevOps is great for teams that want an all-in-one DevOps management solution, or who already use Azure. It greatly simplifies the development lifecycle by centralizing it in a single location. However, it can be difficult to set up and it may be overly-complex for teams that just need a basic container pipeline.
3. GitLab CI/CD
GitLab started life as an open-source SCM but quickly grew into a complete DevOps management solution. Like Azure DevOps, it provides features such as project management, private container registries, and orchestrated build environments (including Kubernetes).
GitLab CI/CD is powered by GitLab Runner
, which executes each step in your CI/CD pipeline in a self-contained environment. Configuration is done via a .gitlab-ci.yml
manifest, which supports some advanced configurations including conditional logic and importing other manifests.
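For instance, a minimal .gitlab-ci.yml job that builds an image and pushes it to the project’s built-in container registry might look like this sketch (assuming the Runner supports Docker-in-Docker; the CI_* variables are predefined by GitLab):

```yaml
build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```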
Alternatively, you can automate your entire pipeline with no configuration using Auto DevOps
. GitLab automatically determines how to build your application based on its source code (in this case, a Dockerfile) using Heroku buildpacks. Auto DevOps can automatically run unit tests, perform code quality analyses, and scan images for security issues.
For deployment, GitLab uses the dpl
tool, which supports a wide range of providers including cloud platforms and Kubernetes clusters.
Benefits and Limitations
GitLab offers an extremely flexible pipeline that you can either configure yourself or fully automate using built-in tools. The YAML configuration allows for a greater range of project structures and steps, such as creating project dependencies and combining multiple pipelines
from different projects. Because GitLab uses existing open-source tools like Herokuish and dpl, it supports a wide range of project types, languages, and deployment targets.
Although GitLab can deploy Runners and artifacts to existing environments, it can’t provision or maintain those environments itself (except for Google Kubernetes Engine and Amazon Elastic Kubernetes Service). It also lacks a graphical pipeline configuration tool, which can make pipeline management less intuitive than with tools like Azure Pipelines.
GitLab uses an open core model: it offers an open-source base version and a paid enterprise version with additional features. For paid plans, pricing
is tiered based on the number of users, the number of minutes spent running CI pipelines per month, and access to certain features. All plans include unlimited code repositories, project planning tools, and 2,000 free pipeline minutes per month. Paid plans range from $4 per user/month to $99 per user/month.
GitLab is an incredibly versatile and powerful CI/CD tool that packs extremely useful features. The open-source version is feature-rich enough to compete with many commercial options while also letting you self-host. However, it does require you to maintain a separate deployment environment.
4. AWS Elastic Beanstalk
is less of a pipeline and more of a tool for orchestrating AWS resources. It can automatically provision, load balance, scale, and monitor resources like ECS containers, S3 buckets, and EC2 instances. This allows you to create a completely custom pipeline within AWS depending on your specific requirements.
A Beanstalk configuration describes both how to deploy a container and the environment in which the container is deployed. This is defined in a Dockerrun.aws.json file. Beanstalk introduces unique concepts, such as:
- Application: a logical collection of Beanstalk components, such as environments and versions.
- Application version: a readily deployable version of your source code.
- Environment: the set of AWS resources needed to run an application version.
Benefits and Limitations
Beanstalk is an extremely powerful tool not just for Docker, but for AWS. It provides auto-scaling, rolling updates, monitoring, and release management. It also lets you access and manage resources directly.
However, Beanstalk is more complex than a typical pipeline. Unless you’re using a single-container environment, you need to prebuild and host your Docker images in an image repository, and container versions are tightly coupled to environments. Updates can only be triggered via the Beanstalk CLI, so if a container fails, you need to address it manually through the Beanstalk console.
Beanstalk itself is free, but the AWS components it provisions are priced at their normal rates. For instance, if you configure your environment with an ECS node and ELB load balancer, you’ll be charged for the node and the load balancer as if you provisioned them normally.
With the vast array of AWS services available, Beanstalk provides a great way to manage all of them. It can be extremely powerful when used as an orchestration tool, but it may be too complex to use as a container pipeline.
5. Google Cloud Build
Google Cloud Build is a relatively basic container CI service built on the Google Cloud Platform (GCP). It can build images directly from source code or a Dockerfile and deploy them directly to GKE, Cloud Run, and other GCP services.
Cloud Build is configured via a cloudbuild.yaml
(or JSON) file. You can define the process for building images, as well as where to store the resulting image. For example, building and pushing a Docker image to Google Container Registry is as simple as:
steps:
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/myimage', '.']
images: ['gcr.io/$PROJECT_ID/myimage']
Cloud Build supports triggers
, which automatically start builds on changes to your source code.
Benefits and Limitations
Because Cloud Build is built around GCP, it only supports a limited number of deployment targets. Deploying containers to other platforms is possible, but requires additional steps. In addition, like GitLab, Cloud Build doesn’t have a visual pipeline configuration tool.
Pricing is based on the size of your build machines and build time. A standard n1-standard-1 instance costs $0.003 per build-minute, rising to $0.064 per build-minute on an n1-highcpu-32 instance. You also get 120 free build-minutes per day on an n1-standard-1 instance.
Cloud Build is relatively simplistic, but that’s also one of its strengths. It’s fast, easy to learn, fairly inexpensive, and integrates well with other GCP services. If you already have a deployment environment, or already leverage GCP, I recommend trying it.
6. Jenkins X
Jenkins is one of the most popular CI/CD tools available, and Jenkins X extends it further by adding comprehensive Kubernetes integration. Jenkins X doesn’t just deploy to Kubernetes; it can also provision and manage Kubernetes clusters for you.
Jenkins X Pipelines are built on Tekton Pipelines
, which aid in running CI/CD pipelines on Kubernetes. You can configure your pipeline using a jenkins-x.yml
file (compared to a traditional Jenkinsfile). Jenkins X also provides build packs
, which can help package source code into images that can then be deployed to Kubernetes.
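As a rough sketch (the exact schema evolves between Jenkins X versions, so treat the field names and values as illustrative), a jenkins-x.yml that overrides the release pipeline might look like:

```yaml
buildPack: none
pipelineConfig:
  pipelines:
    release:
      pipeline:
        agent:
          image: go            # hypothetical build agent image
        stages:
          - name: build
            steps:
              - command: make
                args: ["build"]
```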
Benefits and Limitations
Jenkins X leverages two popular existing projects—Jenkins and Kubernetes—to create a scalable CI/CD platform. It can automate the entire CI/CD pipeline and supports preview environments and pipeline promotions. Because it includes Jenkins, it has access to the entire community of Jenkins developers.
However, Jenkins X requires Kubernetes and is very opinionated about how the cluster is configured. The command line tool automates much of this process, but it’s an important consideration.
Jenkins X is open source and free to use; your only costs are for the underlying infrastructure it runs on.
For teams using Jenkins, Jenkins X will feel like a natural progression. It has some strict limitations and requirements, but for teams using Kubernetes, having a tool that natively integrates with your infrastructure can be a benefit.
Let's look at a quick comparison:
- Heroku: configured via a single heroku.yml manifest; free plan with 1,000 dyno hours/month, paid dynos from $7/month.
- Azure DevOps: web UI or YAML configuration; free tier with one concurrent job and 1,800 minutes/month.
- GitLab CI/CD: .gitlab-ci.yml configuration; open core, with 2,000 free pipeline minutes/month and paid plans from $4 per user/month.
- AWS Elastic Beanstalk: Dockerrun.aws.json configuration; free itself, with provisioned AWS resources billed at their normal rates.
- Google Cloud Build: cloudbuild.yaml configuration; 120 free build-minutes/day, then from $0.003 per build-minute.
- Jenkins X: jenkins-x.yml configuration; open source.
For teams wishing to simply deploy and host Docker containers in a stable environment, Heroku is hard to beat. It offers a fast and configurable platform, supports a wide range of integrations, and has a massive marketplace of third-party add-ons. Elastic Beanstalk is a close second for its ability to orchestrate AWS resources, and is the recommended choice for teams with more complex requirements.
For container CI, GitLab is arguably the most comprehensive option due to its sheer number of features, Auto DevOps capability, and open core model. Google Cloud Build leverages the speed and capacity of the Google Cloud Platform for fast builds, and Jenkins X benefits from being part of the Jenkins project. Most of these services are either open source or offer free trials, so we recommend trying them and seeing which works best for your workflow.