It was late 2008 and I was writing an extension for the Windows 7 taskbar. My work was essentially finished; the final application was very small, only a few screens of code. I was instructed to write unit tests for it just so managers could check a box. I resisted, because the project was too small to have any units, and tearing it apart to add these silly tests would mean destabilizing work that had already passed all tests and was ready to release.
As the conversation deteriorated, my manager told me about something new called “Test Driven Development,” at which point I made sure there was a clear path to the exit, because I was clearly talking to a lunatic. This was the first time I heard the catechism:
when the tests all pass, you’re done
Every TDD advocate I have ever met has repeated this verbatim, with the same hollow-eyed conviction.
With over 20 years’ experience at the time, I recognized the first obvious flaw: writing tests prior to coding runs headlong into the old adage that no battle plan survives contact with the enemy. Most places I’ve worked regard software planning as wasted time; at best they go through the motions.
At Microsoft I was penalized for thoroughness several times, especially for writing threat models. However, I always plan my work, starting with requirements and functional specifications, and I have turned down potential clients who didn’t want to take the time to do a design prior to coding.
But even the most thorough plans reveal unanticipated contingencies within days, if not hours, of starting implementation. Clients’ designs are always vaguer than they realize, and even when I am as diligent as I know how to be, there will be things I didn’t think of either.
So were I to follow TDD, I would have to constantly revisit my tests, review all of them, and revise them to match my discoveries. And since tests written when the work is complete, or nearly so, already reflect everything I discovered during implementation, writing the tests afterward makes much more sense.
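To make the cost concrete, here is a minimal, purely hypothetical sketch; the parse_size function, the Size type, and the tests are invented for illustration and have nothing to do with the taskbar project. A test is written first, against the return type the up-front design called for; implementation then reveals the design was incomplete, and the test has to be rewritten along with the code.

```python
import unittest
from dataclasses import dataclass

# Hypothetical example, invented for this argument; not code from any real project.

# The up-front design said: parse_size("10 MB") returns an int (bytes).
def parse_size_v1(text: str) -> int:
    number, unit = text.split()
    return int(number) * {"KB": 1024, "MB": 1024 ** 2}[unit]

class TestParseSizeWrittenFirst(unittest.TestCase):
    # Written before implementation, to the planned design.
    def test_returns_bytes(self):
        self.assertEqual(parse_size_v1("10 MB"), 10 * 1024 ** 2)

# Days into implementation it turns out callers also need the original
# unit, so the design changes to return a small result object. The test
# above now targets an API that no longer exists and must be rewritten.
@dataclass
class Size:
    bytes: int
    unit: str

def parse_size_v2(text: str) -> Size:
    number, unit = text.split()
    return Size(int(number) * {"KB": 1024, "MB": 1024 ** 2}[unit], unit)

class TestParseSizeRewritten(unittest.TestCase):
    def test_returns_bytes_and_unit(self):
        result = parse_size_v2("10 MB")
        self.assertEqual(result.bytes, 10 * 1024 ** 2)
        self.assertEqual(result.unit, "MB")

if __name__ == "__main__":
    unittest.main()
```

Multiply that rewrite by every design discovery on a real project and the time adds up.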
[Image: “Sure is fun rewriting these over and over” (photo by Elnur Amikishiyev)]
Here is what TDD looks like to me:
1. Write the tests to the up-front design.
2. Implement.
3. Discover, as always, that the design does not survive contact with reality.
4. Revise all of the tests to match the discoveries.
5. Repeat (2) through (4) until the tests all pass, then hand the work to QA.
Assuming, that is, the QA department hasn’t all been laid off because TDD.
Note that (4) may happen dozens of times in the course of a large project, and that every single revisit of the TDD tests is 100% wasted time.
The old way is better
With this approach I write the tests after the odyssey of discovery, so the tests are written only to the final design, and are revisited only in response to changes that actually occur.
[Image: Best-laid plans of mice and men, etc. Because we all have …]
The second problem here is that TDD presumes that developers should write their own tests. This is supremely ridiculous. I’ve seen this many, many times: the project appears solid to me and I can’t break it, but someone else can break it in less than a minute. Why?
Because the same blind spots that I had in design will appear in my tests.
My mind just reels here. How can anyone have written anything more complicated than a Yes-No message box and not run into this? There is nothing about writing tests prior to implementation that addresses it. Whether I write the tests before or after implementation, there are cases I didn’t think of, and they need not be “edge cases.” Everyone’s work needs to be black-box tested by someone else, testing against the requirements and functional specifications.
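For illustration only, here is a tiny, invented example of the blind-spot problem; the average function and both test classes are hypothetical, not taken from any project mentioned here. The author's own tests exercise exactly the inputs the author had in mind while designing the function; a black-box tester working from the requirement alone immediately tries the one input nobody thought about.

```python
import unittest

# Hypothetical illustration of the blind-spot problem; the function and
# the tests are invented for this sketch.

def average(values):
    """Return the arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

class AuthorsOwnTests(unittest.TestCase):
    # The person who wrote average() tests the cases they had in mind
    # while designing it, which are the same cases the design covered.
    def test_typical_input(self):
        self.assertEqual(average([2, 4, 6]), 4)

    def test_single_value(self):
        self.assertEqual(average([5]), 5)

class BlackBoxTests(unittest.TestCase):
    # A separate tester, working only from the requirement "average a
    # list of numbers", immediately tries the case the author never
    # considered: an empty list. Today that call simply crashes.
    def test_empty_list_crashes(self):
        with self.assertRaises(ZeroDivisionError):
            average([])

if __name__ == "__main__":
    unittest.main()
```

Writing that empty-list test before implementation would not have helped; the point is that someone other than the author, testing against the requirements, has to write it.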
It would be easy to take the attitude that the time wasted revisiting tests in the TDD scenario is simply more income; so the project takes 20–50% longer, so what? I get paid 20–50% more. But I don’t regard that as ethical. And frankly, it’s boring.
Read any advocacy of TDD and it will always boil down to an argument for testing itself, which nobody argues against. It never makes a case for writing tests before implementation.
There is no rationale for writing tests before implementation. That is absurd.
Because it is backwards.