Countless hours have been spent by teams all over the world debating this question. But really, should you estimate bugs?
Nobody can answer this for you, but let’s raise the question anyway and brace for a bunch of contentious comments. So what options for estimating bugs do we have in the first place?
Instead of estimating bugs beforehand, some teams simply set aside some time each sprint, day, week, or month for bug fixing. That doesn’t mean they never estimate bugs, though. If the initial investigation shows that a bug requires a bigger fix, or a change to the behaviour of the product, they would still estimate it, but most likely treat it almost like a feature that might, if needed, go through the complete process of specification, design, development, testing, and release.
Another way to approach bug estimation is to use some placeholder, like 0.5–1 days, for every bug. That idea actually comes from Jeff Sutherland, one of the creators of Scrum. These placeholders can be rather accurate, as most bugs don’t take more than a day to fix: Steve McConnell, in his (by now rather old) book Code Complete, mentions that 85% of bugs take less than a few hours to fix.
We’ve also seen certain teams take this to an extreme and treat all tickets this way. The reasoning is that over a long enough period of time not only do things average out, but people also learn to work better together, and the tickets they create converge on roughly the same size.
Regardless of whether you want to use this technique just for bugs or for features as well, you can start with a 0.5-day placeholder and adjust it as you gather data on how long it actually takes developers to resolve issues. Retrospectives are a good way to keep track of that.
If you want to go one step further and have more granular estimations, you can divide bugs by type, e.g. coding bug, design bug, database, business logic, state machine, etc. That way you can have a different placeholder per type and iterate on each one individually.
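As a sketch, maintaining per-type placeholders can be as simple as averaging the actual resolution times recorded in your tracker. The bug types and durations below are invented for illustration:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical resolution log exported from an issue tracker:
# (bug_type, actual_days_to_fix)
resolved_bugs = [
    ("frontend", 0.5), ("frontend", 1.0), ("backend", 2.0),
    ("backend", 1.5), ("database", 0.5), ("frontend", 0.25),
]

def placeholders_by_type(log):
    """Return a per-type placeholder: the mean of actual fix times."""
    by_type = defaultdict(list)
    for bug_type, days in log:
        by_type[bug_type].append(days)
    return {t: round(mean(days), 2) for t, days in by_type.items()}

print(placeholders_by_type(resolved_bugs))
# {'frontend': 0.58, 'backend': 1.75, 'database': 0.5}
```

Rerunning this after each retrospective keeps the placeholders tracking reality as the codebase and team change.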
The more data you have, the better your estimations will get, which brings us to the next approach.
If you have enough data, you can do cool things with estimations, with much more granularity than splitting issues by type and assigning individual placeholders, as in the previous example. You could create a much more contextual system that uses historical data from your issue tracker (Jira, Trello, etc.) to predict how long a certain fix will take, using machine learning, natural language processing, and other approaches.
In fact, a group of researchers did just that all the way back in 2007. They came up with a system that automatically predicts the fixing effort. Given a sufficient number of issue reports, their automatic predictions for bugs were off by only one hour, beating human predictions by a factor of four. You can read more about it here.
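The general idea of predicting effort from similar past reports can be sketched in a few lines: compare a new issue’s text to historical issues and borrow the effort of the closest match. This toy nearest-neighbour version (with invented issue data) is far cruder than any production system, but it shows the shape of the approach:

```python
import math
from collections import Counter

# Hypothetical history: (issue title, actual hours to fix)
history = [
    ("Login button unresponsive on mobile Safari", 3.0),
    ("NullPointerException when saving empty profile", 1.5),
    ("Checkout page times out under load", 8.0),
    ("Typo on the pricing page", 0.5),
]

def vectorize(text):
    """Bag-of-words representation of an issue title."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def predict_effort(title, history):
    """Predict effort as the effort of the most similar past issue."""
    vec = vectorize(title)
    best = max(history, key=lambda item: cosine(vec, vectorize(item[0])))
    return best[1]

print(predict_effort("Save profile crashes with NullPointerException", history))
# 1.5
```

A real system would use better text features and more neighbours, but even this illustrates why the approach needs a sufficient number of past reports to work.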
The last school of thought we will touch upon believes that since you can’t estimate how long it will take to fix a bug until you’ve located the underlying problem, and most of the time is spent on precisely that, trying to come up with an estimate is pointless.
There is no “one-size-fits-all” solution here unfortunately, which won’t stop us from trying to give an answer to the question.
There are many things that should influence your decision on whether to estimate bugs and, if so, which approach to use. To give an example: a bank would handle things differently than a social network; culture and motivation in a given company or country may influence the decision; even the stakeholders matter. If you have a project manager who cannot live without estimations, it might make sense to just go along with it and keep a good team atmosphere rather than fight over estimations continuously.
The biggest influence on whether you should estimate bugs or not, is the size of your team and company, as these two factors influence a lot of other things, so let’s take a look at a couple of different setups and what you might want to do in each case.
You are building a small product or just a personal project. There are no dependencies on other people, precise planning is not really necessary because you know how fast you can progress and what you need to work on next.
Solution: It’s likely that you are not doing formal estimations for features, so no need to estimate bugs either. Just focus on output and getting that code out there!
You have a 5–10 person development team and no other engineering teams depend on it. You are planning sprints and tracking velocity to roughly understand how well you are progressing. Some bugs are caught internally, others are reported by users.
Solution: To plan your sprint capacity better, give bugs a default estimation of half a day (unless you have more information about a specific defect and can estimate more precisely). Also, don’t plan 100% of the sprint capacity; leave some buffer. It will allow you to a) be prepared for some bug fixes taking longer and, more importantly, b) always be ready to quickly fix an important unforeseen bug without sacrificing sprint goals.
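The capacity arithmetic behind this is straightforward. A quick sketch with hypothetical numbers (6 developers, a 10-day sprint, a 20% buffer, and 8 known bugs at the half-day placeholder) shows how the buffer and placeholders combine:

```python
# All numbers below are hypothetical, for illustration only.
developers = 6
sprint_days = 10
buffer_ratio = 0.2        # keep 20% of capacity unplanned for surprises
bug_placeholder = 0.5     # default estimate per bug, in days
expected_bugs = 8         # bugs already in the backlog for this sprint

total_capacity = developers * sprint_days          # 60 person-days
plannable = total_capacity * (1 - buffer_ratio)    # 48 person-days
bug_budget = expected_bugs * bug_placeholder       # 4 person-days
feature_capacity = plannable - bug_budget          # what's left for features

print(feature_capacity)
# 44.0
```

The buffer is what lets an urgent, unestimated bug land mid-sprint without blowing up the sprint goals.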
We have built a few early-stage products before and came to this setup over time. When the product is new, you don’t have much historical data to base your decisions on and don’t have a proper QA team yet, so you need to be nimble and ready to react to immediate needs while still maintaining direction and communicating your goals and progress to other teams (sales, marketing, etc.). This setup also helps with building good customer relationships, as users are always impressed when you fix something soon after they report it.
The longer you work on the product, the better you are able to estimate, of course, especially if you split bugs by type, e.g. backend, frontend, etc., so the default estimations keep getting better.
You have a few development teams and at least some of them depend on each other. You have manual QA and QA engineers, but very limited ability to talk directly with users. There are lots of stakeholders and multiple layers of management to whom you need to communicate your goals, so you really need to make sure you deliver on your promises. In general, you care much more about the reliability of the software and the predictability of development output than about speed and agility.
Solution: Estimate all the bugs you can. Start with at least an educated guess, but try to use historical data for better predictions. In a large organisation, the effort to build such a prediction model will not be in vain: other teams will use it, and if it works well, it can save time during planning meetings and make the velocity much more predictable, which your management will appreciate.
- In many cases at least one engineer knows exactly what the source of the bug is, and exactly how to fix it. There is no reason not to estimate those. On the other hand, some bugs are totally obscure and unpredictable. It would be unwise to pretend that we know how long these would take to fix. In such situations either use the default estimation, or don’t estimate at all.
- A bad estimation is in many cases better than no estimation, especially since over a long period of time the average of the estimations can come rather close to the average time it actually took to fix a bug. Don’t fear being wrong; just make sure that everybody on the team understands that an estimation is not a deadline. This applies to features, but even more so to bugs.
- It’s ok if you sometimes overestimate and sometimes underestimate, but you should adjust something if you underestimate all the time.
- There is unfortunately no silver bullet. What we described above just happened to be a good setup for us, but each team will need to find an approach that works best in their particular situation. It doesn’t have to be one of the solutions described above though. You should never stick to all of the rules of a given framework — it’s always better to find a combination of techniques that works best for you, or even come up with something entirely different.