Delivery Pipelines as an Enabler of a DevOps Culture

Written by aleicher | Published 2018/07/19
Tech Story Tags: devops | culture | continuous-delivery | docker | kubernetes


A lot of teams and organisations are embracing DevOps culture for building digital products and delivering business value with digital services. DevOps culture promotes an agile, iterative approach, with a strong emphasis on continuous delivery of working software, in close alignment with business goals. The term is already 10 years old, yet most companies and teams are still on the journey to putting it into practice.

DevOps is based on the idea of reducing the time between committing a change to the code and that change being made available to users (in a way that provides value to them).

Or, to expand on this notion: reduce the time between learning how users experience your product today and shipping an improved version of your product that creates a better user experience.

DevOps with a User Experience Perspective: it’s not only about how fast you can deliver from code change to user value, but about how fast you can turn a bad user experience into a great one and continuously improve your product.

Viewing DevOps as the continuous improvement of your product’s user experience means you put your users at the center of your work by continuously going through the full cycle of product development: plan, code, build, test, release, deploy, operate, monitor, and then plan again based on the observations you have made.

The DevOps Continuous Lifecycle: the notion of continuous product improvement

When you look at the phases in the DevOps lifecycle, it is important to note that this is more than a technical perspective. Roughly, it can be broken down into two areas. The left side (code, build, and test) focusses on the creation of new product value; the right side (deploy, operate, monitor) focusses on delivering that value to the user. The user-facing phases in particular (the right side of the loop) should be used to monitor user behaviour, understand user experience, and learn how your product is used. The transition between the two loops, of value creation and value delivery, focusses on planning the next iteration and making sure you release the value you intended to create.

If our goal is to minimise the time (and effort) it takes to get from a bad user experience to a better one (or from a defect to a fix, from a request to a new feature), we should also look at the factors that contribute to that goal. Culture is obviously key to creating an environment where this notion is understood and supported. At the same time, tools and toolchains play an important role along this value chain.

Tools along the lifecycle allow you to:

  • ease communication to create a better shared understanding
  • automate recurring tasks, and thus reduce errors
  • not only fix issues, but add tests to prevent the same issues from coming up again
  • understand how users use your product
  • prioritise next releases and features based on actual user feedback and business value

The notion of a delivery pipeline that helps the team eliminate waste and deliver value faster and better is very often expressed in concepts such as Continuous Integration and Continuous Delivery (CI/CD). By automating infrastructure, tests, and deployments, and by leveraging cloud functionality, it is possible to bring down the time between code being written and code being made available to users.

Most of the CI/CD work focusses on the left side of the DevOps cycle. Container-based architectures in combination with cloud technology have proven to provide huge benefits. In such a sample environment:

  • Communication: Messaging apps like Slack or Flock have proven to be very useful at keeping everyone on the same page, especially since they allow relevant events to be integrated into the messaging channels.
  • Plan: Tasks are tracked, planned, and prioritised in tools such as Jira or Trello.
  • Code: Code is kept in Git version control so all developers can collaborate. Even during development, Docker containers are used to prevent the ‘works on my machine’ syndrome.
  • Build: Builds are created automatically using a Continuous Integration service such as CircleCI, Jenkins, or GitLab CI. The containers that are built are pushed to a container registry, so every commit produces a build. Since this is automated, no developer time is consumed, and there is immediate feedback if a build fails.
  • Test: Automated tests are a great way to make sure you don’t have to fix the same bug twice. While concepts like Test-Driven Development (TDD) are great, not all teams have the budget or time to truly start with tests first. In an environment of continuous improvement, however, it should become a habit to add a test any time a bug is found (see the regression-test sketch after this list). Since you minimise the time to ship, having a bug in production might not be as critical if the next deploy is only a minute away. By using cloud technology such as Kubernetes, you can even spin up a test environment for your QA team before each release.
  • Release: If the desired value is captured in the new version, even a non-technical person can release, simply by adding a tag to the current version in version control. The CI system then takes care of the rest.
  • Deploy: Especially in combination with container technology such as Docker and Kubernetes, the CI system can simply apply updated manifest files to the Kubernetes cluster and thereby make sure that the new version is deployed, i.e. pulled from the container registry (a deployment sketch follows this list). Containers make sure you run exactly the same software that the developer used and that was tested before. Technologies like Terraform or Ansible might also come into play in case you bring up new infrastructure via code.
  • Operate: Operations encompass performance monitoring, logging, and making sure the engine keeps running. Kubernetes provides the container orchestration to automatically restart failed deployments, and tools like Fluentd, Elasticsearch, Kibana, and Prometheus allow for continuous monitoring of all applications (a minimal metrics sketch follows this list). The product team is constantly aware of the system’s health.
  • Monitor: Monitor not only system performance but also UX, using techniques like A/B testing and page-speed measurements with tools such as Google Analytics, AppDynamics, Optimizely, and the like.
  • The results from monitoring and operations then feed back into the plan for the next iteration, helping the product team to prioritise.
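
To make the habit from the Test step concrete, here is a minimal sketch of a regression test in Python that a pytest-based CI build could run. The slugify function, the myapp.text module, and the reported bug (a crash on empty input) are hypothetical and only illustrate the pattern of turning a fixed bug into a permanent test.

```python
# test_slugify.py -- regression test added right after a (hypothetical) bug report:
# slugify("") crashed in production.
from myapp.text import slugify  # hypothetical module under test


def test_slugify_handles_empty_input():
    # Guards against the reported crash on empty input.
    assert slugify("") == ""


def test_slugify_replaces_spaces_with_dashes():
    # Documents the expected behaviour so future changes keep it intact.
    assert slugify("Hello World") == "hello-world"
```

Once a test like this runs in the CI build step, the same bug cannot silently reappear in a later release.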
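For the Deploy step, the sketch below uses the official Kubernetes Python client to point an existing Deployment at a freshly built image tag, which is essentially what a CI job does when it applies updated manifests. The deployment name, namespace, registry URL, and tag are assumptions made for the example.

```python
# deploy.py -- minimal sketch: roll a Deployment to a new image tag.
# Assumes kubeconfig credentials are available (e.g. in the CI job) and that
# a Deployment called "webshop" already exists in the "production" namespace.
from kubernetes import client, config


def deploy(image_tag: str) -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside the cluster
    apps = client.AppsV1Api()

    # Patch only the container image; Kubernetes then performs a rolling update
    # and pulls the new version from the container registry.
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"name": "webshop", "image": f"registry.example.com/webshop:{image_tag}"}
                    ]
                }
            }
        }
    }
    apps.patch_namespaced_deployment(name="webshop", namespace="production", body=patch)


if __name__ == "__main__":
    deploy("1.4.2")  # e.g. the tag created in the Release step
```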
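To illustrate how application metrics reach Prometheus in the Operate and Monitor steps, here is a minimal sketch using the prometheus_client Python library; the metric names and the scrape port are assumptions for the example.

```python
# metrics.py -- minimal sketch: expose request metrics for Prometheus to scrape.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metrics for a web application.
REQUESTS = Counter("webshop_requests_total", "Total number of handled requests")
LATENCY = Histogram("webshop_request_seconds", "Request latency in seconds")


@LATENCY.time()
def handle_request() -> None:
    REQUESTS.inc()
    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work


if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<host>:8000/metrics
    while True:
        handle_request()
```

Dashboards in Kibana or Grafana can then visualise these metrics so the product team stays aware of system health between releases.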

DevOps Pipeline Tool Overview (selection of tools, note: image/logo rights are with the respective copyright owners)

Note: the products and technologies mentioned above are not the only options, but they have proven to work well in projects.

While DevOps is not about the technology you apply, the delivery pipeline can enable the product team to deliver value continuously. In a second part of this article, I’ll share a more technical view on the architecture of a sample delivery pipeline.

Credits:

  • header image: by Farzad Nazifi on Unsplash
  • brand and product logos belong to their respective copyright holders
