
5 Metrics to Track when Refactoring your Codebase

by Alex Omeyer, January 20th, 2022

Code refactoring provides a much-needed mental break for developers, and I think many devs can relate to this. Writing code all day is demanding, especially if you're creating new functionality day after day. It's a taxing exercise, and developers often need some space to think about the codebase's overall organization and look back on what can be improved.

This is exactly what code refactoring does. It provides that much-needed mental break for developers, and it gives them the chance to tackle higher-level, codebase-related issues. It's also a great moment to check your code against the codebase guidelines. No codebase is perfect, and small deviations from the guidelines inevitably make it into the main branch.

On top of that, it reduces technical debt. Developers get the chance to explore the codebase and learn about parts or features they haven't touched before, which is an excellent side benefit of code refactoring.

Yet many teams struggle to measure the effectiveness of their refactoring efforts. Team leads or a CTO often have to report to management, and without "hard" numbers it's not easy to justify time spent refactoring a codebase versus time spent developing new features. For many startups, the pressure to ship is always on, which makes refactoring even harder to justify.

This article looks at different metrics you can use to measure code refactoring success.

Metrics to Measure Refactoring Success

There are several metrics you can use to measure codebase refactoring success. You don't need to track all of them, since refactoring still involves judgment, but a few specific metrics can give you a good indication of how successful your refactoring has been.

1. Number of Open Codebase Issues

A steadily growing number of open codebase issues is a red flag for declining codebase quality. Beyond that, the count is a straightforward way to track the success of your refactoring efforts: the goal is to bring the number of open codebase issues down to zero before starting the next sprint.

You want to fix these issues while they are still relatively small. As the codebase evolves, minor issues can grow into complex, time-consuming problems. You want to avoid that situation, as it will set you back both in time and in cost.

If you're wondering how to track and gain visibility into your codebase issues, try the Stepsize VSCode and JetBrains extensions. The tool will help you:

- Link your issues tool to code
- Share knowledge within the team about ticking bombs in the codebase
- Prioritise issues and track progress.
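
If your issues live in a tracker like GitHub, you can also pull the raw count yourself. Here's a minimal Python sketch that counts open issues carrying a hypothetical "tech-debt" label via the GitHub REST API; the owner, repo, and label names are placeholders you'd swap for your own setup.

```python
# Minimal sketch: count open "tech-debt" issues in a GitHub repo.
# OWNER, REPO, and LABEL are placeholders; adjust them for your setup.
import os
import requests

OWNER, REPO, LABEL = "your-org", "your-repo", "tech-debt"

def count_open_debt_issues() -> int:
    url = f"https://api.github.com/repos/{OWNER}/{REPO}/issues"
    params = {"state": "open", "labels": LABEL, "per_page": 100}
    headers = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}
    count = 0
    while url:
        resp = requests.get(url, params=params, headers=headers)
        resp.raise_for_status()
        # Pull requests also appear in the issues endpoint; skip them.
        count += sum(1 for item in resp.json() if "pull_request" not in item)
        url = resp.links.get("next", {}).get("url")
        params = None  # the "next" URL already carries the query string
    return count

if __name__ == "__main__":
    print(f"Open tech-debt issues: {count_open_debt_issues()}")
```

Run it at the start and end of a refactoring sprint, and the difference gives you one of those "hard" numbers to report.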

2. Number of TODOs in Codebase

Generally, you don't want to see too many TODOs sitting idle in your codebase. They're a sign that things need improvement but aren't being actively addressed. As your codebase evolves, the context around a TODO can get lost, making it hard to understand the original problem and, more importantly, to resolve it.

Therefore, use a refactoring moment to remove idle TODOs from your codebase, or organise them in a way that preserves their context so you can resolve them more quickly later.
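
To get a quick baseline, a small script can count TODO and FIXME comments across the repository. The sketch below assumes a handful of common file extensions; adjust the pattern and extensions to match your stack.

```python
# Minimal sketch: count TODO/FIXME comments per file so you can
# compare the totals before and after a refactoring sprint.
import re
from collections import Counter
from pathlib import Path

TODO_PATTERN = re.compile(r"\b(TODO|FIXME)\b")
EXTENSIONS = {".py", ".js", ".ts", ".java", ".go"}  # adjust for your stack

def count_todos(root: str = ".") -> Counter:
    counts: Counter = Counter()
    for path in Path(root).rglob("*"):
        if path.suffix in EXTENSIONS and path.is_file():
            text = path.read_text(errors="ignore")
            hits = len(TODO_PATTERN.findall(text))
            if hits:
                counts[str(path)] = hits
    return counts

if __name__ == "__main__":
    counts = count_todos()
    for file, n in counts.most_common():
        print(f"{n:4d}  {file}")
    print(f"Total TODOs/FIXMEs: {sum(counts.values())}")
```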

3. Number of Failed Unit Tests

Many codebases suffer from failing unit tests. As long as the percentage of failed unit tests stays relatively low, this won't threaten your codebase's quality.

Still, keep an eye on the number of failed tests. A rising count is a signal to schedule a code refactoring week, and it also serves as a measure of that refactoring's success. You want to bring the number of failed tests as close to zero as possible before starting a new sprint.
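
Most test runners can emit a JUnit-style XML report (for example, pytest with the --junitxml=report.xml flag), which makes the failure count easy to extract and chart over time. A minimal sketch, assuming such a report already exists:

```python
# Minimal sketch: count failed tests from a JUnit-style XML report
# (e.g. one produced with `pytest --junitxml=report.xml`).
import sys
import xml.etree.ElementTree as ET

def count_failures(report_path: str = "report.xml") -> int:
    root = ET.parse(report_path).getroot()
    # JUnit reports mark broken tests with <failure> or <error> elements.
    return len(root.findall(".//failure")) + len(root.findall(".//error"))

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "report.xml"
    print(f"Failed tests: {count_failures(path)}")
```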

4. Code Coverage

The code coverage metric is closely linked to the number of failed unit tests. However, to measure the success of your code refactoring, you also want to know whether the quality of your codebase is improving. One way to gauge that quality is code coverage, as it tells you how much you can trust your codebase. If the tests are written well, higher code coverage usually translates into higher code quality.

No codebase is perfect, but try to get close to the 100% code coverage mark. If you don't have the resources to fully cover your codebase with tests, make sure to cover the most critical paths through your code. This will help you increase trust in it.

Common pitfall: Don't forget to write, update, or change tests when refactoring code. You don't want to improve your code only to decrease its coverage. In other words, refactoring and code testing go hand in hand.
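
If you use coverage.py, for instance, you can turn the coverage figure into a pass/fail check against a team target. A minimal sketch, assuming you've already generated a JSON report with "coverage run -m pytest" followed by "coverage json" (the 80% target is just an example):

```python
# Minimal sketch: check total coverage from coverage.py's JSON report
# against a target. Assumes coverage.json has already been generated.
import json
import sys

TARGET = 80.0  # hypothetical team target, in percent

def check_coverage(report_path: str = "coverage.json") -> bool:
    with open(report_path) as fh:
        totals = json.load(fh)["totals"]
    percent = totals["percent_covered"]
    print(f"Total coverage: {percent:.1f}% (target {TARGET:.0f}%)")
    return percent >= TARGET

if __name__ == "__main__":
    sys.exit(0 if check_coverage() else 1)
```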

5. Measure Duplication

Duplication is not a high-level metric to optimise for, but it's still a valuable one to watch when refactoring code. As codebases grow, developers no longer know every corner of them. They might not realise a helper library or function already exists and create a new one with the same functionality. The same often happens with modules in larger codebases.

First, identify duplicate code and write down its locations. When you've finished refactoring, revisit this list to measure how many lines of duplicated code have been removed from the codebase.
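
A dedicated copy/paste detector will do a better job, but even a naive script gives you a rough before/after number. The sketch below hashes sliding windows of whitespace-stripped lines and reports any block that appears in more than one place; the window size and file extensions are assumptions to tune.

```python
# Minimal sketch: flag blocks of WINDOW identical (whitespace-stripped)
# lines that appear in more than one location. Overlapping windows are
# counted crudely; treat the result as a rough trend, not an exact figure.
import hashlib
from collections import defaultdict
from pathlib import Path

WINDOW = 6            # minimum block size to count as duplication
EXTENSIONS = {".py"}  # adjust for your stack

def find_duplicates(root: str = ".") -> dict:
    blocks = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.suffix not in EXTENSIONS or not path.is_file():
            continue
        lines = [line.strip() for line in path.read_text(errors="ignore").splitlines()]
        for i in range(len(lines) - WINDOW + 1):
            chunk = "\n".join(lines[i:i + WINDOW])
            if chunk.strip():  # skip windows that are mostly blank
                digest = hashlib.sha1(chunk.encode()).hexdigest()
                blocks[digest].append(f"{path}:{i + 1}")
    return {h: locs for h, locs in blocks.items() if len(locs) > 1}

if __name__ == "__main__":
    dupes = find_duplicates()
    for locations in dupes.values():
        print("Duplicate block at:", ", ".join(locations))
    print(f"Duplicated blocks found: {len(dupes)}")
```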

How to Use These Metrics to Improve Your Refactoring Process

If you're considering codebase refactoring, don't jump straight in without a plan. Set goals and boundaries for your refactoring process so it's easier to measure success.

First of all, collect all the codebase-related issues you want to tackle during the refactoring sprint. If some tickets are missing, create them so nothing goes untracked. For instance, review all the TODOs you want removed from the codebase.

Once you've identified the issues you want to tackle, define the metrics that will help you track refactoring success. For example: remove three duplicate modules and increase code coverage by 15%.

Now that you've collected the issues and set targets, it's time to share knowledge about the codebase and why specific issues need to be tackled. Explain each issue's context and the impact it will have on the project if left unaddressed. It's also a great moment to share knowledge about newer parts of the codebase, so all developers are up to speed.

Lastly, prioritise the issues you want to tackle first. You won't be able to finish everything during a refactoring week, or whatever timeline you've set, so tackle the highest-impact organisational issues before committing to less important ones.

To wrap up your codebase refactoring, collect all the metrics data and verify it against the goals you set. It's a great moment to discuss what went wrong and how you can improve your refactoring process in the future. Don't expect everything to go well; mistakes happen, and that's fine.
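
One lightweight way to run that check is to capture the same metrics before and after the sprint and compare them against your targets. A sketch with made-up numbers for illustration:

```python
# Minimal sketch: compare metrics captured before and after the sprint
# against the targets set up front. All figures below are made up.
BASELINE = {"open_issues": 24, "todos": 61, "failed_tests": 5, "coverage": 63.0}
AFTER    = {"open_issues": 6,  "todos": 18, "failed_tests": 0, "coverage": 78.5}
TARGETS  = {"open_issues": 10, "todos": 20, "failed_tests": 0, "coverage": 75.0}
HIGHER_IS_BETTER = {"coverage"}  # every other metric should go down

def report() -> None:
    for metric, target in TARGETS.items():
        value = AFTER[metric]
        met = value >= target if metric in HIGHER_IS_BETTER else value <= target
        status = "met" if met else "missed"
        print(f"{metric:13s} {BASELINE[metric]:>6} -> {value:>6}  "
              f"(target {target}, {status})")

if __name__ == "__main__":
    report()
```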

Conclusion: Refactoring Success and Purpose

No matter how you approach measuring refactoring success, make sure not to use these metrics to evaluate individual programmer performance, decide promotions, or anything like that.

Refactoring code aims to solve codebase and organisational issues early on. If you leave them untreated, they might escalate into more significant problems that can require more time and resources to solve. 

This is also one of the main arguments for organising code refactoring. In startups, the pressure is always on to deliver new features. However, as a team, you sometimes have to take a step back to evaluate the code and refactor it to maintain its quality.

First published here