
8 Fallacies of Continuous Delivery

by Harness, October 10th, 2020

Too Long; Didn't Read

A quintessential piece for anyone working with distributed systems is the Fallacies of Distributed Computing by L Peter Deutsch. Continuous Delivery has eight fallacies of its own, including the beliefs that you will always deploy successfully, that technology doesn't change, that verification happens only once, that rollback cost is zero, and that there will be only one syntax for each tool. Perimeter control (e.g., if you can access the Pipeline, you can execute) is not an adequate safeguard.

A quintessential piece for anyone working with distributed systems is the Fallacies of Distributed Computing by L Peter Deutsch. Even when working with modern platforms such as Kubernetes, the assertions made in the Fallacies of Distributed Computing still hold true, particularly around latency, bandwidth, and system administration.

Continuous Delivery practices and systems are increasing in popularity. When designing, implementing, or maintaining Continuous Delivery systems, it is easy to fall prey to common fallacies. Similar to the eight Fallacies of Distributed Computing, there are eight Fallacies of Continuous Delivery.

1. You Will Always Deploy Successfully

A common pitfall in any system development is to build for the happy path. Because software requires innovation and iteration, deployments will fail, and a failure and recovery path needs to be accounted for.

In lower environments, confidence-building steps such as automated tests will fail more often; that is expected, because feedback loops allow corrections to be made until the changes eventually pass the test coverage.
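
A minimal sketch of what accounting for the unhappy path might look like, assuming hypothetical deploy, run_smoke_tests, and rollback steps that stand in for a real pipeline:

```python
import logging
import random

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("deploy")

def deploy(version: str) -> bool:
    """Pretend deployment step; a real pipeline would call your CD tool here."""
    log.info("Deploying %s", version)
    return random.random() > 0.2  # simulate occasional deployment failure

def run_smoke_tests(version: str) -> bool:
    """Pretend confidence-building step (automated tests in a lower environment)."""
    log.info("Running smoke tests against %s", version)
    return random.random() > 0.3  # lower environments fail more often

def rollback(previous_version: str) -> None:
    log.warning("Rolling back to %s", previous_version)

def release(version: str, previous_version: str) -> None:
    # The unhappy path is a first-class branch, not an afterthought.
    if not deploy(version):
        rollback(previous_version)
        return
    if not run_smoke_tests(version):
        rollback(previous_version)
        return
    log.info("%s released successfully", version)

if __name__ == "__main__":
    release("v1.4.0", "v1.3.2")
```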

2. Your Administrators Will Stay

People never stay in the same position forever. Deep expertise in bespoke deployments is at risk when those with tribal knowledge off-board. This also makes for a steeper learning curve for those who onboard as platform administrators or who onboard their applications to the Continuous Delivery system.

3. Deployments are Always Homogeneous 

A deployment is the culmination of work from potentially multiple teams and their respective services. There are several approaches to deployment, but because the scope of changes varies, rarely are two deployments exactly the same. Certain deployments require downtime, while others call for a rolling or canary release strategy.
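
As an illustration, here is a rough sketch of a staged canary release, where set_traffic_split and error_rate are hypothetical stand-ins for a real traffic router and metrics query:

```python
import random

def error_rate(percent_canary: int) -> float:
    """Stand-in for a metrics query (e.g., errors per request on the canary)."""
    return random.uniform(0.0, 0.02)

def set_traffic_split(percent_canary: int) -> None:
    print(f"Routing {percent_canary}% of traffic to the canary")

def canary_release(steps=(5, 25, 50, 100), threshold=0.01) -> bool:
    """Promote the canary in stages; abort if the observed error rate exceeds the threshold."""
    for percent in steps:
        set_traffic_split(percent)
        if error_rate(percent) > threshold:
            set_traffic_split(0)  # shift all traffic back to the stable version
            print("Canary aborted")
            return False
    print("Canary promoted to 100%")
    return True

if __name__ == "__main__":
    canary_release()
```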

4. Rollback Cost is Zero 

The time spent deciding whether to roll back or roll forward carries a cost of its own. Depending on the criticality of the impacted system(s), the clock is ticking as you battle the technical point of no return and the impact to the business. Even once a rollback or roll-forward decision is made and executed, validation still needs to occur.
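
To make the cost concrete, a rollback is itself a deployment of the previous version plus a verification pass. A small illustrative sketch, with redeploy and verify as hypothetical placeholders:

```python
import time

def redeploy(version: str) -> None:
    print(f"Redeploying {version}")
    time.sleep(0.1)  # stands in for the real deployment time

def verify(version: str) -> bool:
    print(f"Verifying {version}")
    time.sleep(0.1)  # stands in for health checks, smoke tests, monitoring
    return True

def rollback(previous_version: str) -> float:
    """A rollback is a deployment of the old version plus a full verification pass."""
    start = time.monotonic()
    redeploy(previous_version)
    verify(previous_version)
    return time.monotonic() - start

if __name__ == "__main__":
    elapsed = rollback("v1.3.2")
    print(f"Rollback took {elapsed:.1f}s, and that excludes the time spent deciding")
```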

5. Technology Doesn’t Change

The only constant in technology is change. New paradigms keep appearing, and approaches and practices keep evolving. Consider the introduction and adoption of Kubernetes in the enterprise: organizations must maintain legacy applications alongside new ones, which adds to the number of concurrently supported technologies.

6. Verification is Singular 

Even once your deployment clears the pipeline, verification is never over. Validating the initial state and performance immediately after deployment is just part of the reliability puzzle. Once the release is in the wild, monitoring and verification need to take place continuously, including watching for regressions that may only surface days later.
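
One way to picture ongoing verification is a loop that keeps comparing post-deployment metrics against a baseline well after the pipeline has finished; current_latency_ms below is a hypothetical stand-in for querying a monitoring backend:

```python
import random
import time

def current_latency_ms() -> float:
    """Stand-in for querying an APM or metrics backend."""
    return random.gauss(110, 15)

def verify_against_baseline(baseline_ms: float, tolerance: float = 1.25,
                            checks: int = 5, interval_s: float = 1.0) -> bool:
    """Keep verifying after the pipeline finishes; a regression can surface much later."""
    for i in range(checks):
        observed = current_latency_ms()
        print(f"check {i + 1}: {observed:.0f} ms (baseline {baseline_ms:.0f} ms)")
        if observed > baseline_ms * tolerance:
            print("Regression detected after deployment")
            return False
        time.sleep(interval_s)
    return True

if __name__ == "__main__":
    verify_against_baseline(baseline_ms=100)
```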

7. There Will Be Only One Syntax 

A Continuous Delivery platform needs to orchestrate several disparate tools, and each tool has its own syntax. For example, authoring a Continuous Delivery Pipeline uses a different syntax than infrastructure-as-code, which in turn differs from a Kubernetes Resource. Tying these areas of expertise together should respect the underlying syntaxes and the expertise required for each.
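
For illustration, a pipeline step often just sequences tools that each keep their own syntax. The sketch below assumes terraform and kubectl are available on the PATH and that deployment.yaml is a placeholder manifest name:

```python
import subprocess

# Each stage delegates to a tool with its own syntax: HCL for Terraform,
# Kubernetes YAML for kubectl. The orchestration layer does not replace
# those syntaxes; it sequences them.
STAGES = [
    ["terraform", "apply", "-auto-approve"],       # infrastructure-as-code (HCL)
    ["kubectl", "apply", "-f", "deployment.yaml"]  # Kubernetes resource (YAML)
]

def run_pipeline() -> None:
    for command in STAGES:
        print("Running:", " ".join(command))
        result = subprocess.run(command)
        if result.returncode != 0:
            raise SystemExit(f"Stage failed: {' '.join(command)}")

if __name__ == "__main__":
    run_pipeline()
```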

8. Access to Pipelines Is Secure

When marching towards production, Continuous Delivery Pipelines have elevated privileges to execute a deployment. Like any system with the ability to make impactful changes, they require rigor. Perimeter control (e.g., if you can access the Pipeline, you can execute it) is not an adequate control; auditability and RBAC controls are necessary.
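
A minimal sketch of what pipeline-level RBAC plus an audit trail might look like, with hypothetical roles and an in-memory audit log standing in for a real identity provider and audit store:

```python
import datetime

# Hypothetical role-to-permission mapping; a real system would back this
# with an identity provider rather than an in-memory dict.
ROLE_PERMISSIONS = {
    "viewer": {"pipeline:read"},
    "deployer": {"pipeline:read", "pipeline:execute"},
}

AUDIT_LOG = []

def authorize(user: str, role: str, action: str) -> bool:
    """Record every authorization decision, allowed or not."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "allowed": allowed,
    })
    return allowed

def execute_pipeline(user: str, role: str) -> None:
    # Being able to reach the pipeline is not the same as being allowed to run it.
    if not authorize(user, role, "pipeline:execute"):
        raise PermissionError(f"{user} may not execute this pipeline")
    print(f"{user} triggered the production deployment")

if __name__ == "__main__":
    execute_pipeline("alex", "deployer")
    try:
        execute_pipeline("sam", "viewer")
    except PermissionError as err:
        print(err)
    print(AUDIT_LOG)
```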

Continuous Delivery is an evolutionary journey that many organizations are embarking upon. As a core conduit of innovation from idea to production, your Continuous Delivery Pipelines carry real significance. As more organizations embrace Continuous Delivery, being aware of these fallacies helps them build more robust systems and practices.