
What Are SEIPs? The New Way Engineering Leaders Measure Successful AI Adoption

by Alex Circei, April 24th, 2024

Too Long; Didn't Read

Software Engineering Intelligence Platforms (SEIPs) track performance based on more than task completion. They leverage multiple data sources, create unique metrics from aggregated data, carry out complex analyses, and have predictive capabilities. Unlike EMPs, SEIPs integrate contextual data that helps managers understand the value of people’s work.

Every tech company in the world has a software development team, and right now, 70% of them are considering how to use AI tools in the development process itself. Should they integrate AI co-pilots, outsource parts of the work to code generators, or develop bespoke tools in-house?


One of the key determinants in this process is simple: whether the tool is producing positive results. For this, you need to run a proof of concept while measuring team performance over time. The problem is that the old technologies developers use to track performance deliver only 2D insights. New technologies are revolutionizing how engineering leaders assess their team’s success.


The old guard of performance trackers is known as management systems or engineering management platforms (EMPs) - think Jira or Azure DevOps Boards. But a new iteration is emerging with one crucial additional layer: intelligence.


Software Engineering Intelligence Platforms (SEIPs) are like EMPs with an added element of intelligent data gathering and analysis. They track performance not just based on task completion. Instead, they leverage multiple data sources, create unique metrics using aggregated data, carry out complex analyses, and have predictive capabilities.


It is these 3D platforms that are best suited to closely measuring the success or failure of AI adoption in teams at the forefront of technological advancement. Here is how SEIPs work.

1. Layered Data Points - The What

EMPs are limited in their insights because of the narrow data they focus on: essentially, they can tell you the status of a task and the time it took to complete.


As well as these basic engineering metrics, SEIPs integrate several other layers of data. They capture all the engineers’ work across platforms: what’s happening on Jira, their activity on GitHub, what’s being recorded on DevOps tools such as Jenkins, and more. SEIPs then translate these signals into metrics to give an otherwise impossible level of insight into what’s happening within your team.


If you were to assess the success of an AI product by just looking at the rate of task completion, you might reach a skewed conclusion. For example, more tasks may have been completed within the time frame. But by connecting the SEIP to other tools like Jenkins or GitHub, you may find that your Change Failure Rate (CFR) - code changes that lead to failed deployments - has simultaneously increased, creating a new vulnerability.
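As a rough sketch of what that cross-tool math looks like - in Python, with invented deployment records rather than any particular SEIP’s API - consider:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    """One production deployment pulled from a CI/CD tool such as Jenkins."""
    sha: str
    failed: bool  # True if the change led to a failed deployment or rollback

def change_failure_rate(deployments: list[Deployment]) -> float:
    """Fraction of deployments whose code changes led to a failure."""
    if not deployments:
        return 0.0
    return sum(d.failed for d in deployments) / len(deployments)

# Invented data: throughput rose after AI adoption, but so did CFR.
before = [Deployment("a1", False), Deployment("a2", False), Deployment("a3", True)]
after = [Deployment("b1", False), Deployment("b2", True), Deployment("b3", True),
         Deployment("b4", False), Deployment("b5", True)]

print(f"CFR before AI: {change_failure_rate(before):.0%}")  # 33%
print(f"CFR after AI:  {change_failure_rate(after):.0%}")   # 60%
```

More tasks shipped, but a higher share of them broke production - exactly the kind of trade-off a task-completion view alone would hide.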


Furthermore, unlike EMPs, SEIPs integrate contextual data that helps managers understand the value of people’s work, such as project costs and engineer salaries. That allows them to measure the ROI and business outcomes of a specific tool the team is experimenting with.
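A back-of-the-envelope version of that ROI calculation, with entirely made-up figures, might look like this:

```python
# Hypothetical monthly figures for one team experimenting with an AI tool.
tool_cost = 2_000            # AI tool licences
loaded_salaries = 120_000    # engineer salaries plus overhead for the team
hours_saved_value = 18_000   # estimated value of engineering hours freed up

roi = (hours_saved_value - tool_cost) / tool_cost
print(f"ROI: {roi:.1f}x")  # 8.0x return on the tool spend

# Cost per unit of delivery, before vs. after, using contextual salary data.
features_before, features_after = 10, 14
cost_per_feature_before = loaded_salaries / features_before
cost_per_feature_after = (loaded_salaries + tool_cost) / features_after
print(f"Cost/feature: ${cost_per_feature_before:,.0f} -> ${cost_per_feature_after:,.0f}")
```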

2. Measuring the Invisible - The Where

Without a SEIP, measuring the impact of AI deployment is extremely hard, as all you have is a black box.


Today, most engineering teams’ performance is measured in ticket completion - how many tickets were closed in a sprint, for example. But the productivity of AI tools can’t be measured in tickets completed alone; it’s more nuanced than EMPs allow for.


With SEIPs, you can quantify more than whether a team completed a task, and how quickly. They can also measure the impact and quality of work - the less “visible” elements of an engineer’s output. SEIPs look at metrics such as DORA Metrics, Cycle Time, and how much unplanned work is being generated.
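To make those metrics concrete, here is a minimal sketch of Cycle Time and an unplanned-work ratio, computed from hypothetical ticket records aggregated from an issue tracker:

```python
from datetime import datetime

# Invented ticket records; a real SEIP would pull these from Jira, GitHub, etc.
tickets = [
    {"opened": datetime(2024, 4, 1), "deployed": datetime(2024, 4, 4), "planned": True},
    {"opened": datetime(2024, 4, 2), "deployed": datetime(2024, 4, 9), "planned": False},
    {"opened": datetime(2024, 4, 3), "deployed": datetime(2024, 4, 5), "planned": True},
]

# Cycle Time: elapsed time from work starting (here: ticket opened) to deployment.
cycle_times = [(t["deployed"] - t["opened"]).days for t in tickets]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Unplanned work ratio: share of delivered work that wasn't in the original plan.
unplanned_ratio = sum(1 for t in tickets if not t["planned"]) / len(tickets)

print(f"Average cycle time: {avg_cycle_time:.1f} days")  # 4.0 days
print(f"Unplanned work:     {unplanned_ratio:.0%}")      # 33%
```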


This can show whether work is improving, whether more code is being deployed than before, and whether it includes fewer failures. SEIPs can also shed light on correlations between metrics that EMPs can’t.


SEIPs can also measure developer experience (DX) - an often overlooked but critical metric relating to developer well-being. This can be done by looking at specific metrics, such as deployment frequency - if this grows, it could suggest increased enthusiasm and engagement within the team.

3. Analytical Metrics - The Why

EMPs can tell you that a task was completed in 5 days, rather than the benchmark of 2 days, but they can’t show you why velocity slowed down.


SEIPs allow managers to zoom in on the ‘why’ metrics or causal factors: were there any bottlenecks? Did certain internal events over the past week slow down operations across the board?


For example, a slower sprint after AI integration might not mean the engineering team is working more slowly; it may be that code review time is far above the benchmark. Perhaps the code reviewer was overwhelmed by the amount of new code because the AI tool increased developer output so much that reviews couldn’t keep up.


That tells you two things: the AI was partly successful, and changes are needed internally (more code reviewers) to make full adoption efficient.
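A sketch of how that kind of bottleneck detection could work, assuming per-stage timing data and internal benchmarks (all numbers invented):

```python
# Hypothetical average hours spent per stage this sprint vs. the team's benchmark.
stage_hours = {"coding": 10, "code_review": 30, "testing": 8, "deploy": 2}
benchmark_hours = {"coding": 14, "code_review": 9, "testing": 8, "deploy": 2}

def find_bottlenecks(actual: dict, benchmark: dict, threshold: float = 1.5) -> list[str]:
    """Flag stages running much slower than the team's own historical benchmark."""
    return [stage for stage, hours in actual.items()
            if hours > benchmark[stage] * threshold]

for stage in find_bottlenecks(stage_hours, benchmark_hours):
    print(f"Bottleneck: {stage} "
          f"({stage_hours[stage]}h vs. benchmark {benchmark_hours[stage]}h)")
# -> Bottleneck: code_review (30h vs. benchmark 9h)
```

Note how coding time actually fell below its benchmark while review time tripled - the AI sped up one stage and jammed the next.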


SEIPs have another important advantage - they can provide recommendations on how to improve the situation based on that same granular data. So if your AI deployment is being hindered by a specific bottleneck, the SEIP could provide recommendations to remove it and take fuller advantage of the AI tool’s benefits.

4. Internal Benchmarks - The Who

It doesn’t always make sense to compare your engineers’ work only against industry-standard benchmarks - yet that is often all EMPs offer.


If you’re assessing the effectiveness of a new AI tool, you need to make sure it’s making your team better than it was before the deployment. SEIPs gather historical data from your team in order to create internal benchmarks of what constitutes a satisfactory commit count or cycle time. They do this by reviewing previous performance and identifying the circumstances in which the team performs best - all of which provides better conditions for predicting the AI deployment’s real impact.
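In its simplest form, an internal benchmark is just the team’s own historical baseline. A minimal sketch, using made-up sprint data:

```python
import statistics

# Invented cycle times (days) per sprint, before the AI tool was introduced.
historical_cycle_times = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.4, 3.7]

# Internal benchmark: the team's own typical performance, not an industry number.
benchmark = statistics.median(historical_cycle_times)

# Post-deployment observations to compare against the team's own baseline.
post_ai_cycle_times = [3.1, 2.9, 3.3]
delta = statistics.median(post_ai_cycle_times) - benchmark

print(f"Internal benchmark: {benchmark:.1f} days")
print(f"Post-AI median:     {statistics.median(post_ai_cycle_times):.1f} days "
      f"({delta:+.1f} days vs. baseline)")
```

A real SEIP would segment this by circumstance (team composition, project type, time of year) rather than using one flat median, but the principle is the same: the comparison point is the team’s own history.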


Your AI deployment could have its greatest potential in areas where you’re already meeting industry benchmarks. If you’re already hitting the industry average, you may not bother tracking impact there - but a SEIP could show you how the AI is making that process 200-300% more efficient.

5. Predictive Capabilities - The When

If you want to go one step further, SEIPs also offer predictive features that can greatly speed up the proof of concept process. During a sprint, by collecting a series of metrics and comparing them to past performance and results, a SEIP can predict whether you will be able to complete the sprint by the given deadline. An EMP will tell you you’re late only after you’re already late.
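A minimal sketch of such a predictive check, assuming a simple burn-rate projection (real SEIPs use richer historical models; all figures here are invented):

```python
def predict_sprint_completion(points_done: float, points_total: float,
                              days_elapsed: float, sprint_length: float) -> bool:
    """Project the current burn rate forward to the sprint deadline."""
    if days_elapsed <= 0:
        return False  # no signal yet
    burn_rate = points_done / days_elapsed  # story points per day so far
    projected = burn_rate * sprint_length   # points expected by the deadline
    return projected >= points_total

# Day 4 of a 10-day sprint: 12 of 40 story points done.
on_track = predict_sprint_completion(points_done=12, points_total=40,
                                     days_elapsed=4, sprint_length=10)
print("On track" if on_track else "At risk")  # -> At risk (projects 30 of 40)
```

The point is the timing: the warning lands on day 4, when there is still time to act, not at the retrospective.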


The benefit for testing new products is far quicker insight into your experiments. It could allow you to test multiple AI tools at once, or cycle between tools more quickly to find the most effective one.


Most tech companies are moving towards adopting AI, but the current software they’re using to monitor performance simply isn’t inquisitive enough to give managers a real view of what’s working and what’s not. A new generation of SEIPs will help bridge that knowledge gap.