One such trend is that companies must find effective and creative ways to retain and recognize the talent they hire.
This is because high turnover is expensive: onboarding new hires takes time, and institutional knowledge is lost whenever an employee leaves the company.
This article examines strategies for enabling effective engineering by following industry standards. It also explores the value of making data-driven decisions to understand engineering bottlenecks.
Engineers are busy writing code, delivering new products and stable releases, and working on bug fixes and patches. The question remains: what is the most effective way to measure engineering work?
Is it Git commits? Or contributions across several different systems?
How can engineers show the outcome of their work?
The more important question is: how can we use data to identify engineering bottlenecks?
Before going into the solutions, I want to focus on DORA (DevOps Research and Assessment):

"DORA is the official name of the team (now part of Google Cloud) that surveyed over 30,000 engineers on DevOps practices for six years and came up with the following 4 metrics:"

- Deployment Frequency
- Lead Time for Changes
- Change Failure Rate
- Time to Restore Service
These metrics can help identify Elite, High, Medium, and Low performers. If you follow the Google article, you will notice that it focuses on GitHub or GitLab as initial data sources. In addition to GitHub or GitLab, enterprises use various other tools such as Jenkins, SonarCloud, Zendesk, Asana, AWS CodePipeline, and the list goes on.
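To make the four metrics concrete, here is a minimal sketch of how they can be computed once deployment data has been collected. The record layout (commit time, deploy time, failure flag, restore time) is a simplifying assumption for illustration; real data would come from your Git hosting, CI, and incident-tracking systems.

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment records: (commit_time, deploy_time,
# caused_failure, restore_time). Real data would be aggregated
# from sources like GitHub, Jenkins, and an incident tracker.
deployments = [
    (datetime(2023, 5, 1, 9), datetime(2023, 5, 1, 17), False, None),
    (datetime(2023, 5, 2, 10), datetime(2023, 5, 3, 12), True,
     datetime(2023, 5, 3, 14)),
    (datetime(2023, 5, 4, 8), datetime(2023, 5, 4, 11), False, None),
    (datetime(2023, 5, 5, 9), datetime(2023, 5, 6, 9), False, None),
]

def dora_metrics(deploys, period_days):
    """Compute the four DORA metrics from deployment records."""
    # Deployment frequency: deployments per day over the period.
    freq = len(deploys) / period_days
    # Lead time for changes: median hours from commit to deploy.
    lead_time_h = median(
        (d - c).total_seconds() / 3600 for c, d, _, _ in deploys
    )
    # Change failure rate: share of deployments causing a failure.
    failures = [(d, r) for _, d, failed, r in deploys if failed]
    cfr = len(failures) / len(deploys)
    # Time to restore service: median hours from failed deploy to restore.
    restore_h = (
        median((r - d).total_seconds() / 3600 for d, r in failures)
        if failures else 0.0
    )
    return freq, lead_time_h, cfr, restore_h

freq, lead, cfr, mttr = dora_metrics(deployments, period_days=7)
```

The hard part in practice is not this arithmetic; it is collecting and joining the underlying events from all the systems involved.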
It’s challenging to connect to several systems, collect and aggregate the data with a common data model or schema, and then produce the metrics that many organizations need.
To give you a better understanding, take a look at the following catalog from Faros.
The challenge is to integrate all these systems, understand the metrics, determine the bottlenecks, and go beyond just DORA. To make this easier to understand, we can focus on the following two categories presented by Faros.
Let’s explore a couple of options that can be used to go beyond basic DORA metrics.
Let’s build it on our own.
Start creating scripts and data pipelines to gather data from your sources, publish it to a pub/sub system, land it in a central store, and use dashboarding tools to display it.
This is simple only if you have a limited number of data sources, and there is an ongoing operational cost to maintain the pipelines and support the overall system.
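To give a flavor of what the build-it-yourself route involves, here is a minimal sketch of the normalization step, where raw payloads from different tools are mapped to one common event schema before being published. The payload shapes and field names below are simplified assumptions, not the real GitHub or Jenkins formats, and `publish` is a stand-in for your message bus's client library.

```python
import json
from datetime import datetime, timezone

def normalize_github_push(payload: dict) -> dict:
    """Map a (simplified, hypothetical) GitHub push payload
    to the common event schema."""
    head = payload["head_commit"]
    return {
        "source": "github",
        "event_type": "commit",
        "id": head["id"],
        "timestamp": head["timestamp"],
        "author": head["author"]["name"],
    }

def normalize_jenkins_build(payload: dict) -> dict:
    """Map a (simplified, hypothetical) Jenkins build notification
    to the common event schema."""
    build = payload["build"]
    return {
        "source": "jenkins",
        "event_type": "build",
        "id": str(build["number"]),
        # Assumes the build timestamp is epoch milliseconds.
        "timestamp": datetime.fromtimestamp(
            build["timestamp"] / 1000, tz=timezone.utc
        ).isoformat(),
        "author": payload.get("userId", "unknown"),
    }

def publish(event: dict) -> str:
    """Stand-in for a pub/sub publish call; real code would use
    the client for your bus (e.g. Kafka, Google Pub/Sub)."""
    return json.dumps(event, sort_keys=True)
```

Each new data source means another normalizer like these, plus schema evolution, retries, and backfills, which is where the maintenance cost accumulates.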
Sign up for one of the options provided by various vendors in this space to get to the next level of DORA.
The challenge can be cost justification, but remember: in any tech and data-driven organization today, one of the most expensive resources is an engineer's time.
The DIY approach highlighted above means more engineering time spent building a basic data pipeline that supports only a limited set of data sources.
The main idea of this article is to raise awareness in the community that we, as engineers, need tooling that lets us make data-driven decisions to measure and improve productivity.