
Too much testing is bureaucracy

by Jonathan Gros-Dubois, November 1st, 2017


One of the first things that we should recognise about automated testing is that it is a risk-aversion strategy. By that definition, we can infer that the more tests a system has, the less likely it is to behave unexpectedly. This is generally a good thing; the downside, however, is that too much testing can significantly reduce speed and agility and can turn a software project into a technological bureaucracy.

The amount of testing that a company does usually corresponds to the size of the company. Big companies tend to be risk-averse, so developers who work for such companies are typically expected to write tests (both unit and integration tests) to cover most of the system’s logic, especially on the back end. Practices such as test-first TDD and 100% code coverage for both unit and integration tests are commonplace.

The amount of testing also often depends on the industry that the company is in and on the importance of the project within the company (though a lot of companies tend to blindly enforce the same blanket testing practices across all projects). Companies and projects which directly handle money or valuable digital assets are typically the most risk-averse; it’s not unusual for engineers on these projects to spend more time writing tests than writing actual business logic. Some companies have a policy which requires 100% unit test coverage and 100% integration test coverage for any piece of logic that gets merged into the main development branch across all projects. The amount of testing happening in these companies is bordering on insanity and it costs these companies a lot in terms of productivity and agility.

Employee churn is another factor which is strongly correlated with risk aversion; if you can’t expect employees to stick around long enough to support all the new code which they wrote, at least tests will make it easier for new employees to understand the existing code and to pick up where the previous employee left off (with minimal risk of damaging it). Ironically, however, projects that have an overly large amount of test logic and scaffolding tend to be less interesting for engineers to work on; this typically increases employee churn. It’s a vicious cycle which can only be kept in check through the use of increasingly large paychecks.

A couple of years ago, I worked on a high-stakes project for a big company that had so much testing and scaffolding logic around it that it could take days to make the most trivial changes, like renaming certain fields in an API response. While this level of testing might have made sense for that specific project under different circumstances, the project was still pre-production (and still prone to changing requirements), so I would argue that the test suite was not worth its cost at that particular time. I think that during the very early stages of most projects, overly detailed unit tests should be seen as a form of premature optimization.

Tests lock down APIs and make it harder for developers to change those APIs as requirements change.

When you have thousands of lines of unit and integration tests around a specific class/component, as an engineer, you have a strong incentive to put off refactoring that class/component (often, well past its use-by date); this is because refactoring tests can require a lot of effort and the process is relatively boring/tedious. This misalignment of incentives can cause technical debt to build up at a faster rate.
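As a hypothetical illustration of this lock-in effect (the function and field names below are invented, not from the project described above), consider a test that pins the exact shape of an API response. Renaming one field means hunting down and rewriting every assertion like this one scattered across the suite:

```python
# Hypothetical example: a test pinned to an API's exact field names.
def get_user_profile(user_id):
    # Stand-in for a handler that would query a database in a real system.
    return {"user_id": user_id, "full_name": "Ada Lovelace"}

def test_get_user_profile():
    profile = get_user_profile(42)
    # These assertions hard-code the field names, so renaming
    # "full_name" to, say, "display_name" breaks this test and
    # every similar one in the suite, even though nothing about
    # the system's observable behaviour has meaningfully changed.
    assert profile["user_id"] == 42
    assert profile["full_name"] == "Ada Lovelace"

test_get_user_profile()
```

Multiply this by thousands of lines of test code and the cost of a simple rename becomes a real disincentive to refactor.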

Locking down the code base is generally a good thing if the system is already in production and has several external dependencies, but it can be really bad for systems which haven’t launched yet (or haven’t yet found product-market fit). I think that it’s better to start with simple integration tests and then maybe add unit tests later, once the project’s code has settled into a good production state.
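To sketch what “simple integration tests first” might look like (a minimal, hypothetical example; the `create_order` function is invented for illustration), the idea is one coarse test per flow that checks only the end-to-end contract, leaving internals free to change:

```python
# Hypothetical sketch: a single coarse, integration-style test that
# checks the end-to-end behaviour of an order flow.
def create_order(items):
    # Internals (validation, pricing, persistence) can be refactored
    # freely; the test below only cares about the overall contract.
    total = sum(price * qty for price, qty in items)
    return {"status": "created", "total": total}

def test_order_flow():
    # Two items at 10.0 each plus one at 5.0 should total 25.0.
    order = create_order([(10.0, 2), (5.0, 1)])
    assert order["status"] == "created"
    assert order["total"] == 25.0

test_order_flow()
```

A handful of tests like this still catch the regressions that matter, without locking down every internal detail the way exhaustive unit coverage does.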

When a system hasn’t launched yet, you want developers to be able to experiment as much as possible. It’s practically impossible to design a perfect API from scratch. As Facebook would say, sometimes it’s better to “Move fast and break things”.