
How Kickstarter’s Mark Wunsch Used Engineering Metrics to Make a Case for Refactoring

by Code Climate, March 29th, 2019

“Effective engineering leadership is being outcome-oriented,” says Kickstarter’s Mark Wunsch.

When Mark joined Kickstarter as VP of Engineering, one of his first decisions was to reorganize the engineering department around business-oriented objectives.

Each 3–7 person scrum team focused on a different “essence,” or a different customer-facing part of the product. This helped engineering understand their work from a business perspective. Mark tells us, “The objective is not to ask, ‘Is the code good?’ but to ask ‘Is Kickstarter good?’”

The engineers’ perspective, however, was difficult to communicate to leadership.

The engineers were constantly bogged down by problems that were invisible to non-technical colleagues, such as legacy code. Kickstarter has been around for about 10 years, so a big portion of the codebase was troublesome to work with. Mark told us, “To an engineer, it’s so obvious when a piece of code is brittle, but it’s really hard to advocate for putting engineering resources into solving technical debt.”

Mark decided to use metrics to further align engineering and leadership.

Diagnosing technical debt with data

Every developer knows that legacy code slows down engineering. But taking weeks away from shipping new features reduces how much new value the company delivers to customers.

Before making a case for refactoring to leadership, Mark decided to do a deep dive into where technical debt was slowing down the team. He used the engineering analytics tool Velocity to learn how each engineering team was working and where they might be getting stuck.

Mark started by looking at his team’s weekly throughput, as measured by merged pull requests. Whenever the throughput dipped significantly below their average, he’d know to investigate further.

Seeing a low Pull Requests Merged count at the end of the week can be a red flag that a team is stuck.
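Just to make the idea concrete (an illustrative sketch, not how Velocity computes its reports), counting merged pull requests per week and flagging weeks that dip well below the team’s recent average might look like this:

```python
from collections import Counter
from datetime import datetime

# Hypothetical input: merge timestamps for a team's pull requests,
# e.g. exported from your Git host. Purely illustrative data.
merged_at = [
    datetime(2019, 2, 4, 14, 0), datetime(2019, 2, 5, 9, 30),
    datetime(2019, 2, 6, 11, 0), datetime(2019, 2, 7, 16, 45),
    datetime(2019, 2, 11, 10, 0), datetime(2019, 2, 12, 13, 0),
    datetime(2019, 2, 13, 15, 30), datetime(2019, 2, 14, 9, 0),
    datetime(2019, 2, 20, 11, 15),
]

# Throughput: merged PRs per ISO week -> (year, week) -> count.
weekly = Counter(ts.isocalendar()[:2] for ts in merged_at)

# Flag weeks that dip well below the average (here, under 60% of it).
average = sum(weekly.values()) / len(weekly)
for (year, week), count in sorted(weekly.items()):
    if count < 0.6 * average:
        print(f"{year}-W{week:02d}: only {count} merged PRs -- worth investigating")
```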

Unlike the subjective measures common on most engineering teams, such as story points completed, Velocity metrics are tied to a concrete unit of work: the Pull Request. This let Mark see objectively when a scrum team was bogged down compared to the previous sprint or month.

Once he spotted productivity anomalies, Mark would pull up a real-time report of his teams’ riskiest Pull Requests. Pull Requests that were open longer and had a lot of activity (comments and back-and-forths between author and reviewer) were at the top of the list.

An example of Velocity’s Work In Progress report, which shows the old and active pull requests that may be holding up the team.

Because trickier parts of the application tend to require more substantial changes, the most “active” pull requests often point Mark to the most troublesome areas of the codebase.
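As a rough sketch of that kind of ranking (hypothetical data and a made-up scoring rule, not Velocity’s actual algorithm), a pull request’s age and review activity can be combined into a simple risk score:

```python
from datetime import datetime, timezone

# Hypothetical pull-request records; in practice these would come from
# your Git host's API. The scoring rule below is illustrative only.
pull_requests = [
    {"title": "Rework checkout flow", "opened_at": datetime(2019, 2, 1, tzinfo=timezone.utc), "comments": 42},
    {"title": "Bump gem versions",    "opened_at": datetime(2019, 2, 25, tzinfo=timezone.utc), "comments": 2},
    {"title": "Fix flaky spec",       "opened_at": datetime(2019, 2, 27, tzinfo=timezone.utc), "comments": 0},
]

now = datetime(2019, 3, 1, tzinfo=timezone.utc)

def risk_score(pr):
    # Older PRs with more back-and-forth between author and reviewer score higher.
    age_days = (now - pr["opened_at"]).total_seconds() / 86400
    return age_days * (1 + pr["comments"])

for pr in sorted(pull_requests, key=risk_score, reverse=True):
    print(f'{pr["title"]}: risk score {risk_score(pr):.0f}')
```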

After a few weeks of investigation, Mark was able to find concrete evidence for what his intuition was telling him. “The data showed that we were consistently slowed down because of legacy code,” said Mark.

Bringing transparency to engineering practices

During meetings with the executive team, Mark could now point to weeks with less output and prove that technical debt was impeding the engineering team from their primary business objective: continuously delivering new features.

To communicate how the team was doing, he’d present a Pull Request throughput chart with a trend line:

A Velocity report showing Pull Requests Merged/Day, over the last 3 months.

This helped leadership visualize how much Kickstarter’s engineering efficiency was improving, as well as where there was room for further improvement.

Mark also shared Cycle Time (i.e., how quickly code goes from a developer’s laptop to being merged into master).

A Velocity report showing the fluctuation of Cycle Time over the last 3 months.

Cycle Time was a great indicator of how hard it was to make a change to the codebase. A high Cycle Time would often correspond to low output a day or two later, showing that some form of obstruction existed for a developer or team.
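A minimal sketch of the calculation (again illustrative, not Velocity’s implementation): approximate each pull request’s cycle time as the elapsed time from its first commit to its merge, then average across recent PRs.

```python
from datetime import datetime

# Hypothetical (first_commit, merged) timestamp pairs for recent pull requests.
prs = [
    (datetime(2019, 2, 4, 9, 0),   datetime(2019, 2, 6, 15, 0)),
    (datetime(2019, 2, 10, 13, 0), datetime(2019, 2, 18, 10, 0)),
    (datetime(2019, 2, 21, 8, 30), datetime(2019, 2, 22, 12, 0)),
]

# Cycle time per PR in hours, then the team average.
cycle_times_hours = [
    (merged - first_commit).total_seconds() / 3600 for first_commit, merged in prs
]
print(f"Average cycle time: {sum(cycle_times_hours) / len(cycle_times_hours):.1f} hours")
```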

These two charts, along with Mark’s summary of his more technical findings, aligned leadership around temporarily scaling back on new features and dedicating more time to refactoring.

Bridging engineering and leadership

After spending time investigating which legacy code was slowing down the team, Mark was able to take a strategic approach to tackling technical debt.

Rather than jump on the opportunity to fix anything that looked broken, he could have teams focus on the biggest productivity blockers first. Engineers were happy because they had the time to rework the legacy code that was constantly slowing them down. Leadership was happy when they could see long-term improvements in engineering speed. Six months after refactoring, Kickstarter saw a 17% increase in Pull Requests merged and a 63% decrease in Cycle Time. It was a win-win-win.

Mark tells us, “Being able to talk about technical debt in the same way we talk about business metrics is incredibly powerful.”

If you want to learn exactly how much technical debt is slowing down your own engineering team, try Velocity free for 14 days.

Originally published on codeclimate.com