
Test-Driven Development is Fundamentally Wrong

by Chris Fox, November 3rd, 2019

Too Long; Didn't Read

Test-Driven Development is fundamentally wrong, says Chris Fox. TDD presumes that developers should write their own tests, and that those tests can be written before implementation even though every design changes once coding begins. The old way is better: write the tests after the odyssey of discovery, so they are written to the final design and revisited only in response to bugs found in QA, new features, or bugs discovered post-release. Any advocacy of TDD boils down to an argument for testing itself, which nobody argues against; it never makes a case for writing tests before implementation.


It sounds backwards because it really is

First Mention

It was late 2008 and I was writing an extension for the Windows 7 taskbar. My work was essentially finished; the final application was very small, only a few screens of code. I was instructed to write unit tests for it just so managers could check a box. I resisted, because the project was too small to have any units, and tearing it apart to add these silly tests would mean destabilizing work that had already passed all tests and was ready to release.

As the conversation deteriorated, my manager told me about something new called “Test Driven Development,” at which point I made sure there was a clear path to the exit because I was clearly talking to a lunatic. This was the first time I heard the catechism:

when the tests all pass, you’re done

Every TDD advocate I have ever met has repeated this verbatim, with the same hollow-eyed conviction.

The Design Always Changes

With over 20 years of experience at the time, I recognized the first obvious flaw: writing tests prior to coding ignores the old adage that no battle plan survives contact with the enemy. Most places I’ve worked regard software planning as wasted time; at best they go through the motions.

At Microsoft I was penalized for thoroughness several times, especially for writing threat models. However, I always plan my work, starting with requirements and functional specifications, and I have turned down potential clients who didn’t want to take the time to do a design prior to coding.

But even the most thorough plans reveal unanticipated contingencies within days, if not hours, of starting implementation. Clients’ designs are always vaguer than they realize, and even when I am as diligent as I know how to be, there will be things that I didn’t think of either.

So were I to follow the TDD methodology, I would have to constantly revisit my tests, reviewing all of them and revising them to match my discoveries. And since tests written when the work is complete or nearly complete incorporate everything discovered during implementation, writing them afterward makes much more sense.

Sure is fun rewriting these over and over (Photo by Elnur Amikishiyev)

Here is what TDD looks like to me:

  1. write the TDD tests
  2. begin implementation
  3. discover an unanticipated consideration
  4. rewrite the tests
  5. continue implementation
  6. goto 3 over and over and over …
  7. (actually more like item 150) all tests pass
  8. send to QA

Assuming, that is, that the QA department hasn’t all been laid off because of TDD.

Note that step 4 may happen dozens of times in the course of a large project, and that every single revisit of the TDD tests is 100% wasted time.
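To make the churn concrete, here is a minimal sketch in Python; the parser, its names, and the “discovery” are all hypothetical, invented purely for illustration. A test written before implementation encodes the planned design; one mid-implementation discovery invalidates that design, and the test has to be thrown away and rewritten along with the code:

    # Test written *before* implementation, against the planned design:
    #
    #     def test_parse_returns_dict():
    #         assert parse_config("retries=3") == {"retries": 3}
    #
    # Mid-implementation discovery: duplicate keys are legal and order
    # matters, so the return type changes and the test above is now
    # wrong. It gets thrown away and rewritten -- step 4 above -- and
    # this repeats with every new discovery.

    def parse_config(text: str) -> list[tuple[str, int]]:
        """Parse 'key=value' lines, preserving order and duplicates."""
        pairs = []
        for line in text.splitlines():
            key, _, value = line.partition("=")
            pairs.append((key.strip(), int(value)))
        return pairs

    def test_parse_preserves_order_and_duplicates():
        assert parse_config("retries=3\nretries=5") == [("retries", 3), ("retries", 5)]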

The old way is better

  1. begin implementation
  2. discover unanticipated considerations and fix them
  3. at code-complete write the tests and run them
  4. fix any bugs
  5. send to QA

With this approach I write the tests after the odyssey of discovery, so the tests are written only to the final design (see the sketch after this list), and are revisited only in response to

  • bugs found in QA
  • new features
  • bugs discovered post-release
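Continuing the hypothetical parser sketch from above, the only kind of test revision this approach requires looks like this: QA reports a crash, one regression test pins it down, the code is fixed, and no existing test is rewritten:

    # QA bug report: parse_config crashes on blank lines, because
    # int("") raises ValueError. Add one regression test and fix the
    # code; nothing else in the suite is touched.

    def parse_config(text: str) -> list[tuple[str, int]]:
        pairs = []
        for line in text.splitlines():
            if not line.strip():  # the fix: skip blank lines
                continue
            key, _, value = line.partition("=")
            pairs.append((key.strip(), int(value)))
        return pairs

    def test_blank_lines_are_ignored():
        assert parse_config("retries=3\n\n") == [("retries", 3)]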

Best-laid plans of mice and men, etc. Because we all have …

Blind Spots

The second problem here is that TDD presumes that developers should write their own tests. This is supremely ridiculous. I’ve seen it many, many times: the project appears solid to me and I can’t break it, but someone else can break it in less than a minute. Why?

Because the same blind spots that I had in design will appear in my tests.

My mind just reels here. How can anyone have written anything more complicated than a Yes-No message box and not run into this? Nothing about writing tests prior to implementation addresses it. Whether I write the tests before or after implementation, there are cases I didn’t think of, and they need not be “edge cases.” Everyone’s work needs to be black-box tested by someone else, testing against the requirements and functional specifications.
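A toy example of the blind spot, again in Python with hypothetical names invented for illustration: the author of a validator tests exactly the inputs the design anticipated and everything passes, while a black-box tester working only from the written spec breaks it with the first input the author never imagined:

    # Spec: a username is 1-20 visible characters.
    def is_valid_username(name: str) -> bool:
        return 1 <= len(name) <= 20

    # The author's own tests share the design's blind spot, so they all pass:
    def test_author_cases():
        assert is_valid_username("chris")
        assert not is_valid_username("")
        assert not is_valid_username("x" * 21)

    # A black-box tester, reading only the spec, tries whitespace.
    # This assertion fails -- the implementation counts five spaces as
    # "visible characters" -- exposing the blind spot in under a minute:
    def test_tester_case():
        assert not is_valid_username("     ")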

Discussion

It would be easy to take the attitude that the time wasted revisiting tests in the TDD scenario is simply more income; so the project takes 20–50% longer, so what? I get paid 20–50% more. But I don’t regard that as ethical. And frankly, it’s boring.

Read any advocacy of TDD and it will always boil down to an argument for testing itself, which nobody argues against. It never makes a case for writing tests before implementation.

There is no rationale for writing tests before implementation. That is absurd.

Because it is backwards.