The Ultimate Guide to DORA Metrics

by Hatica | November 29th, 2022

Too Long; Didn't Read

Google’s research program, run by the DORA team, surveyed thousands of development teams from a variety of industries to determine what distinguishes a high-performing team from a low-performing one. The study's results and data have become a benchmark for individuals responsible for tracking DevOps performance in their organizations. It identified four key metrics that indicate successful software development and delivery performance: deployment frequency (DF), mean lead time for changes (MLT), mean time to recover (MTTR), and change failure rate (CFR).



The modern, digitized economy is driven by companies’ ingenuity in delivering products that engage customers and generate revenue, and by their ability to produce high-quality outcomes efficiently. To stay on top of their game, engineering teams are increasingly embracing DevOps and continuous integration/continuous delivery (CI/CD) to sustain successful software delivery.

As this adoption of DevOps processes and tools accelerates, dev team leaders face key questions about measuring their impact and ROI. Product and engineering leaders are assessing how to manage their software delivery capacity given the distributed nature of the teams, technologies, and apps involved.

In software development, it's nearly impossible to see how each piece of the dev process fits together without reliable data points to track across teams. As teams spread across the globe and break down into multiple smaller teams working on parts of a larger project, it becomes even harder to tell who is doing what and when, what the blockers are, and which exact issue is causing a delay.


The DORA metrics, as put forth by Google’s DORA team, serve as a north star for DevOps teams, equipping them with the information they need to have visibility and control over their software development process.

What are DORA Metrics?

In 2018, Google’s research program, run by the DevOps Research and Assessment (DORA) team, surveyed thousands of development teams from various industries to determine what distinguishes a high-performing team from a low-performing one. The research team presented the following four key metrics that indicate successful software development and delivery performance:


  • Deployment frequency (DF),

  • Mean lead time for changes (MLT),

  • Mean time to recover (MTTR), and

  • Change failure rate (CFR).


The DORA study results and data have become a benchmark for individuals responsible for tracking DevOps performance in their organizations.
These metrics are vital to track because they help DevOps and engineering leaders measure software delivery throughput (velocity) and stability (quality). They show how development teams can deliver better products to customers more quickly, and they give leaders tangible data to assess the organization's DevOps performance, report to executives, and suggest improvements.

The DORA metrics also go beyond serving as just numbers; they provide visibility into dev activities and their corresponding outcomes, enabling developers to increase the productivity of their teams. DORA metrics allow engineering managers and leaders to nurture high-performing teams by providing visibility into present-day performance and equipping leaders to draw a roadmap for improvement.


[Image: Use DORA metrics to better dev productivity - Hatica]



The Four Key DORA Metrics

Deployment Frequency


[Image: Deployment Frequency from DORA Metrics dashboard - Hatica]


Deployment Frequency measures how often a team pushes changes to production. This metric represents the pace at which your team delivers software.
According to DORA, high-performing teams ship smaller, more frequent deployments. With this process, customers benefit from faster time to value, while the development team benefits from lower risk: smaller changes mean quicker fixes when a change causes a production issue.
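
To make this concrete, here is a minimal Python sketch of one way to compute deployment frequency from a deploy log; the log format and the seven-day window are illustrative assumptions, not part of the DORA definition.

```python
from datetime import datetime, timedelta

# Hypothetical deployment log: timestamps of successful production deploys.
deployments = [
    datetime(2022, 11, 1, 9, 30),
    datetime(2022, 11, 1, 15, 10),
    datetime(2022, 11, 3, 11, 0),
    datetime(2022, 11, 4, 16, 45),
]

def deployment_frequency(deploy_times, window_days=7):
    """Average number of production deploys per day over a trailing window."""
    cutoff = max(deploy_times) - timedelta(days=window_days)
    recent = [t for t in deploy_times if t >= cutoff]
    return len(recent) / window_days

print(f"Deploys per day: {deployment_frequency(deployments):.2f}")
```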


Change Lead Time


[Image: Cycle time from DORA Metrics dashboard - Hatica]


Change Lead Time is the overall period from when work on a change request begins to when it is put into production and delivered to the customer. You can use lead time (not to be confused with cycle time, discussed below) to gauge your development process's efficiency. Long lead times might be caused by an inefficient procedure or a bottleneck somewhere in the development or deployment pipeline.


The most common way to measure lead time is to compare the time of the first code contribution for a specific issue to the time of deployment. A more thorough (but harder-to-pinpoint) technique compares the time an issue is picked up for development to the time it is deployed.
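
As a rough illustration, here is a minimal Python sketch of the first-commit-to-deploy calculation described above; the record format and field names are hypothetical.

```python
from datetime import datetime
from statistics import median

# Hypothetical change records: first commit time and production deploy time.
changes = [
    {"first_commit": datetime(2022, 11, 1, 9, 0),
     "deployed": datetime(2022, 11, 1, 17, 30)},
    {"first_commit": datetime(2022, 11, 2, 10, 0),
     "deployed": datetime(2022, 11, 4, 12, 0)},
]

def lead_times_hours(records):
    """Lead time per change: first commit to production deploy, in hours."""
    return [(r["deployed"] - r["first_commit"]).total_seconds() / 3600
            for r in records]

hours = lead_times_hours(changes)
print(f"Median lead time: {median(hours):.1f} hours")
```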


High-performing teams measure lead times in hours, whereas medium- and low-performing teams measure them in days, weeks, or even months. Test automation, trunk-based development, and working in small batches are crucial to reducing lead time. These practices give developers immediate feedback on the quality of the code they contribute, allowing them to spot and fix flaws quickly. If developers work on large changes that live on separate branches and rely on manual testing for quality control, long lead times are nearly certain.


Change Failure Rate


[Image: Change Failure Rate from DORA Metrics dashboard - Hatica]


Change Failure Rate is the ratio of deployments that cause a failure in production to the total number of deployments. However, defining what constitutes a failure is crucial. This DORA metric will be unique to you, your team, and the service you provide, and in practice it will most likely change as your team matures.

The most common mistake is to focus on the total number of failures rather than the change failure rate. The problem is that this encourages the wrong behavior. The goal is to ship changes as rapidly as possible; if you look only at the overall number of failures, the natural reaction is to limit the number of deployments to reduce the number of incidents. As discussed above, that makes each change so large that the impact of a failure, when it occurs, is significant, resulting in a poor customer experience. When a failure does occur, you want it to be so small and well understood that it isn't a major concern.

Elite teams have a failure rate of 0-15 percent. Reducing deployment failures significantly improves a team's overall productivity: everyone wants to spend less time on hotfixes and patches and more time building outstanding products.
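
Here is a minimal Python sketch of the change failure rate calculation; the deploy-log format and the caused_failure flag are assumptions, since what counts as a failure is up to each team.

```python
# Hypothetical deploy log: each entry flags whether the deploy caused a
# production failure (per the team's own definition of "failure").
deploys = [
    {"id": 101, "caused_failure": False},
    {"id": 102, "caused_failure": True},
    {"id": 103, "caused_failure": False},
    {"id": 104, "caused_failure": False},
]

def change_failure_rate(deploy_log):
    """Failed deploys divided by total deploys, as a percentage."""
    failures = sum(1 for d in deploy_log if d["caused_failure"])
    return 100 * failures / len(deploy_log)

print(f"Change failure rate: {change_failure_rate(deploys):.0f}%")  # 25%
```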

Mean Time to Recovery (MTTR)


[Image: MTTR from DORA Metrics dashboard - Hatica]


The incident response process includes several steps, and MTTR measures just one of them. First, you must determine whether or not there is an issue; then, how quickly can you ship a fix once you've found it? That is what MTTR captures.


The time it takes to detect a problem is a metric in its own right, known as MTTD (Mean Time to Discovery). If you can spot a problem quickly, you can drive MTTD toward zero, and since detection time is part of the MTTR calculation, improving MTTD also improves MTTR.
High-performing teams recover from system failures quickly, usually within an hour, whereas lower-performing teams can take up to a week to recover.


The ability to recover swiftly from a failure depends on recognizing the failure as soon as it occurs and releasing a fix or rolling back the changes that caused it. This is normally accomplished by continuously monitoring system health and alerting operations personnel whenever a failure occurs. To resolve incidents, operations staff must have the proper protocols, tools, and permissions.
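
As a rough sketch, here is one way to compute MTTD and MTTR in Python from incident records; the record format and field names are hypothetical, and real teams would pull these timestamps from their monitoring and incident tooling.

```python
from datetime import datetime

# Hypothetical incident records: when the failure started, when it was
# detected, and when service was restored.
incidents = [
    {"started": datetime(2022, 11, 1, 9, 0),
     "detected": datetime(2022, 11, 1, 9, 5),
     "restored": datetime(2022, 11, 1, 9, 50)},
    {"started": datetime(2022, 11, 3, 14, 0),
     "detected": datetime(2022, 11, 3, 14, 30),
     "restored": datetime(2022, 11, 3, 16, 0)},
]

def mean_hours(records, start_key, end_key):
    """Mean elapsed time between two incident timestamps, in hours."""
    deltas = [(r[end_key] - r[start_key]).total_seconds() / 3600
              for r in records]
    return sum(deltas) / len(deltas)

print(f"MTTD: {mean_hours(incidents, 'started', 'detected'):.2f} h")
print(f"MTTR: {mean_hours(incidents, 'started', 'restored'):.2f} h")
```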


[Image: DORA Metrics overview by Hatica]


Other Related Metrics


Cycle time is another important metric to consider: the amount of time a team spends working on an item until it is ready to ship. In software development, cycle time refers to the time from when a developer makes a commit to when that commit is deployed to production.
Check out our blog post on cycle time: https://www.hatica.io/blog/cycle-time/
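
As a quick illustration with hypothetical timestamps, cycle time is simply the elapsed time between a commit and its production deploy:

```python
from datetime import datetime

# Hypothetical: a single change's commit and production-deploy timestamps.
committed = datetime(2022, 11, 2, 10, 15)
deployed = datetime(2022, 11, 3, 9, 45)

cycle_time_hours = (deployed - committed).total_seconds() / 3600
print(f"Cycle time: {cycle_time_hours:.1f} hours")  # 23.5 hours
```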

DORA Metrics and Value Stream Management


A value stream represents the continuous flow of value to customers, and value stream management helps an organization track and manage this flow from concept to delivery. DORA metrics are intrinsically tied to value stream management in every software company. With proper value stream management, the many parts of end-to-end software development are linked and measured to ensure that the full value of a product or service reaches customers efficiently.


The purpose of value stream management is to deliver high-quality software at the speed your customers need, resulting in increased value for your company.


DORA metrics serve as a foundation for this approach because they are a tried-and-true set of DevOps benchmarks that have become industry standards. They surface inefficiencies and waste, which you can use to streamline workflows and eliminate bottlenecks. When your teams' DORA metrics improve, the efficiency of the entire value stream improves as well.


💡 Continuous improvement is a core tenet of DevOps teams. Hatica equips dev teams with the ability to measure and track performance across lead time for changes, change failure rate, deployment frequency, and MTTR, allowing teams to accelerate velocity and increase quality. Request a demo →


Originally published here.