I like to think of software development as navigating through a dark, perilous wilderness. You have no map. If there is a map, it is because someone has already made your idea, so there is no map. The terrain is always uncharted, and always dangerous. You are beset on all sides by total darkness, a black fog-of-war. The only way to advance is to stumble, blindly. You know the general direction you want to move in — heck, you might even be able to see it like a distant mountain on the horizon — but you have no idea what lurks in the darkness between you and your goal.
Even if we completely ignore the “human” element at play in the development cycle, the notion that we could come up with accurate software development time estimations is the quintessential unrealistic expectation, a fool’s errand. Its intractability comes not from incompetence or from a lack of discipline, but from a deep-seated, fundamental limitation imposed on our reality and codified in Turing’s proof that the halting problem is undecidable:
Given the initial conditions (inputs) and source code of an arbitrary program, and given access to unlimited processing and memory resources, it is impossible in general to predict whether such a program will run forever or eventually halt its execution after some amount of runtime.
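To make the impossibility concrete, here is a minimal Python sketch (not from the original article) of Turing’s classic diagonal argument. The names `halts` and `paradox` are hypothetical; the point is that no total, always-correct `halts` oracle can exist, because `paradox` is constructed to contradict whatever the oracle answers about it.

```python
def halts(program, program_input):
    """Hypothetical halting oracle: returns True iff program(program_input)
    eventually halts. Turing proved no such function can be written."""
    raise NotImplementedError("provably impossible in the general case")


def paradox(program):
    # Do the opposite of whatever the oracle predicts about us:
    # if the oracle says we halt, loop forever; if it says we loop, halt.
    if halts(program, program):
        while True:
            pass
    return "halted"


# Now consider halts(paradox, paradox). If it returns True, then
# paradox(paradox) loops forever, so the oracle was wrong. If it returns
# False, paradox(paradox) halts immediately, and the oracle was wrong again.
# Either answer is a contradiction, so no general oracle exists.
```

The same argument is why no tool, however sophisticated, can mechanically answer every “will this code ever misbehave?” question about arbitrary programs.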
While there are some obvious exceptions in the form of formal methods and provably correct programs, chances are that you haven’t had the time, or found it practical, to implement formal methods and mathematically prove that your program does exactly what it was designed to do in every possible execution state. Even if you have done this, and done it well, up until that very last moment when your program became 100% formally verified, you were wandering around in the dark, reasoning about something that will either run forever without exhibiting any flaws or eventually exhibit one or more of them. In software development, this chaos is normalcy; wandering around in unknown, unproven territory where many things could go wrong at any moment simply is the perpetual state of programming.
If this is the case, then writing and maintaining code can be seen as a fundamentally chaotic activity, subject to sudden, unpredictable gotchas that take up an inordinate amount of time.
This, I think, is very consistent with the anecdotes of other developers and project managers I have spoken with, and my own experiences — for any given project, the most time-consuming cards in that project were either vastly underestimated or not anticipated at all when the project was first planned and subjected to time estimation. While it is true, especially with large teams of developers, that you can come up with a decent statistical model for typical behavior, this is not useful, as it is overwhelmingly the unpredictable outliers, rather than the known demons, that will dictate how many weeks, months, or years deadlines get pushed back. Add users, multiple developers, human errors, managerial issues, and shifting markets/requirements to the mix, and this unpredictability only increases.
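To illustrate why a statistical model of “typical” tasks fails here, consider a toy backlog with entirely hypothetical numbers (these are invented for illustration, not measured data): nine cards come in roughly on estimate, and one hides a gotcha. The single outlier accounts for nearly all of the schedule slip, which no model of typical behavior would have flagged.

```python
# Hypothetical per-card estimates vs. actuals, in days.
# Card 5 concealed an unanticipated gotcha.
estimated = [2, 3, 1, 2, 5, 3, 2, 1, 2, 3]
actual    = [2, 4, 1, 3, 40, 3, 2, 1, 2, 4]

# How late each card ran, and how late the project ran overall.
overruns = [a - e for e, a in zip(estimated, actual)]
total_overrun = sum(overruns)   # total schedule slip across all cards
worst = max(overruns)           # slip contributed by the single worst card

share = 100 * worst // total_overrun
print(f"total slip: {total_overrun} days, {share}% from one outlier")
```

With these numbers the project runs 38 days late, and 92% of that comes from one card. A model fit to the other nine cards would have predicted almost none of it.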
A brave project manager tackles time estimations at the beginning of a sprint.
If we accept the premise that software development time estimation is intractable, quixotic, inaccurate, and a waste of time, then an immediate question follows: why does conventional wisdom in the industry regularly point to time estimation as a necessary part of the development cycle?
There are a number of reasons why perfectly reasonable people might subject themselves to the irrational constraints brought on by development time estimations. First off, it is a fad right now, so there’s that. Another reason is that the business and sales types who make decisions based on their perception of business requirements like to make promises about the future, especially to investors and potential users. This inevitably entangles the organization in unachievable development goals and deadlines that, when unmet, reflect poorly on the organization as a whole, often more so than if no time-sensitive promises had been made at all. This is especially true at companies where non-technical individuals hold a disproportionate amount of power or influence over the development process, and perhaps don’t fully understand that software development is a fundamentally more difficult, unpredictable activity than other activities you encounter in the business world.
Yet another reason is a sort of emperor-with-no-clothes effect — all developers know secretly, I think, that time estimations are utter bullshit, but they see everyone else making estimations, touting them, using them for their projects, and they assume that they themselves are at fault, that they simply aren’t as good as their peers at prophesying the exact time of completion for every task they undertake. In reality, these developers are always significantly low-balling or high-balling their guesses, and when they are exactly right, this is simply luck. At the end of the day, even the simplest, easiest tasks may contain latent, unpredictable gotchas that will take up weeks of development time.
Development time estimations lead us to cut corners, to design painfully simple software that lacks the architectural complexity it needs to one day tackle the fully-realized version of the problem we want to solve. Since business requirements inevitably undercut even the most brazenly optimistic development timelines, the things we would do given even a little bit more time end up not getting done or even planned. I like to call this complexity debt (I prefer this term to the more common “technical debt”, which I think carries the wrong connotation in that it has little to do with the technical ability of the development team). If you find that you are constantly extending a once-basic interface or data model to handle progressively more complex features and use cases, or find yourself wishing that you had a fundamentally different data model, then you are a victim of complexity debt. Given a more relaxed development schedule, you might already be done, because from the beginning your team would have implemented the “proper”, more complex architecture capable of dealing with the intricacies and entanglements that you must now deal with in an ad-hoc, hacked-together fashion.
There is a world where building software is not like wandering through a vast unknown wilderness, but is instead more like a short stroll through a field of daisies. It is a doomed world. I would argue that projects like this, where the estimations are consistently right and there seems to be an actual end to development in sight, do in fact exist, but that these qualities are symptomatic of something worrisome and broken on a fundamental level. If something is inherently broken about your product, whether in the competency, makeup, or attitude of the engineering team or management, in the technical difficulty (or lack thereof!) of the idea, in a general disconnect between actual and imagined user needs, or simply in the absence of a tappable market, then systematizing things, whether you use waterfall, scrum, or pure kanban, will only give you one thing: a broken system, in other words, a dead or dying product. If development feels easy and attainable in a few short sprints, then you are either wrong or in grave danger of building something that isn’t ultimately going to work out. If you have a good idea, it should be difficult to develop, and it should feel like exploring new, unknown territory.
In general, you should consider using pure kanban without time estimations, but first, you should take the time to write out the following sentence, maybe hang it on your office wall, add it to your mission statement, set it as your development Slack room’s status, and tattoo it on your CTO’s forehead:
As a company policy, we do not provide or deal in software development time estimations of any kind, and we view such a practice as harmful to the overall development process, and to the industry as a whole.
The key is not to provide these types of estimations in the first place. Good investors don’t need to know when a feature will be done; they just want to know that it is a priority and that it is being worked on, and if they are smart, they will respect you when you explain why providing concrete estimations will ultimately decrease the value of their investment (complexity debt, for example). You can do all the things you normally do, such as generating marketing hype, promoting your product, and touting in-development features, without externalizing specific completion deadlines to the public or even to non-technical management, and in fact, this is what many successful software companies already do.
This is not to say that you should completely turn a blind eye to the supposed relative difficulty of various features and bug fixes. The point is really just to be more up front with everyone about the fact that estimations are just that: fantastical theories that are subject to potentially dramatic changes as more of the surrounding landscape comes into view, and in no way a commitment to a particular completion date. This is why tentative difficulty estimations, completely devoid of any time commitment whatsoever, are the way to go. You can use such estimations as pointers to identify which parts of a project should be tackled next. That said, difficulty estimations should not be used in isolation, but should instead be weighed against a combination of other factors.
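As a sketch of what time-free prioritization might look like, here is a small, entirely hypothetical example. The card fields, the value-per-difficulty heuristic, and the sample backlog are all invented for illustration; note that nothing here is denominated in hours or days.

```python
from dataclasses import dataclass


@dataclass
class Card:
    title: str
    difficulty: int  # relative difficulty points, NOT a time commitment
    value: int       # assumed business value, 1..10 (hypothetical scale)


def next_up(cards):
    # Prefer cards with the highest assumed value per unit of difficulty;
    # break ties in favor of the easier card. This is one heuristic among
    # many, and would normally be weighed against other factors.
    return sorted(cards, key=lambda c: (-c.value / c.difficulty, c.difficulty))


backlog = [
    Card("rework data model", difficulty=13, value=9),
    Card("fix login bug", difficulty=2, value=6),
    Card("dark mode", difficulty=5, value=3),
]
print([c.title for c in next_up(backlog)])
```

The output ordering puts the low-difficulty, high-value fix first, while the large rework still outranks the low-value cosmetic feature, all without anyone promising a date for any of them.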
If you take anything away from this article, at least consider the harm that development time estimations may already be doing to you, your friends, your family, and especially your organization, and consider doing away with time-based estimations, now, for good.