Over the last few months, I have been digging into the software testing world. I really wanted to learn how to create more effective tests, refactor code with more confidence, and feel safe adding new features. However, I felt it was a little difficult to dive right into this area, which, in my view, is underestimated.
After some good reading and study, I decided to start applying these new concepts in my professional projects. The intent of this article is to provide some insights, external material, and advice based on what I have studied and applied at work so far.
Everybody loves Test-Driven Development, David Heinemeier Hansson, right? Just kidding. The fact is that it is very difficult to talk about testing without mentioning TDD. That's why I gave it a try and used it to implement a new feature and a large refactor in my professional projects.
I really liked developing using TDD, and the results were very satisfactory: the deadline was met, it works really well with pair programming, and there has been no rework so far. And I truly felt the confidence that Kent Beck talks so much about.
However, TDD is not the only way to bring tests into your projects. Don't feel pressured to apply TDD in your work environment just because "that's what the cool kids are doing". There is no such rule. The most important thing is to add meaningful tests to your project; don't worry about whether you add them before, during, or after the code is developed.
On this topic, I strongly recommend the series of conversations between Kent Beck, Martin Fowler, and David Heinemeier Hansson about TDD. It is a very insightful discussion that will probably make you rethink what you know about it.
We are developers; we love dichotomies. If it's not true, then it is false; if it is not 1, then it is 0. That's how we are. Well, testing is not exactly that way. To start, the term Unit Test varies a lot between authors. When Martin Fowler asked Kent Beck for the definition of Unit Test, Beck replied that he covers 24 different ones during his training course. Despite the many definitions, a Unit Test has three distinct elements, according to Martin Fowler:
Test suite speed is also a common point of disagreement. Some developers prize authenticity in their tests, even at the cost of some speed; others prize a very fast feedback loop.
The lines that separate the different test types are also a little fuzzy: if you add an actual database to your tests, even one that runs very fast, are they still unit tests? When developing using TDD, are the tests black box or white box? Are you a tester or a developer? Kent Beck has a great article about how volatile these dichotomies are in the testing world.
There is a lot of fighting over terminology and over where each type of test ends. My advice is to focus on creating meaningful tests: tests that add value to your project, tests that you can rely on. Don't worry so much about whether your tests take 1 minute or 1 second, or where they fit in the test pyramid.
As I mentioned earlier, different sources define Unit Test in several ways. One problem is that this can lead us to create a Unit Test for every new class or function added to the project. That's not an issue until you need to refactor the code and find out that your tests need to be refactored as well.
There is an excellent talk by Ian Cooper about the misinterpretations surrounding Test-Driven Development. Even though the video is focused on TDD, I recommend watching it because some of the concepts apply to other testing techniques as well. Some of the points that helped me a lot in writing better tests are:
Do not write tests for implementation details
Implementation details change a lot. They can be refactored, removed, moved, etc. If you base your tests on statements like "verify that this function was called", it's likely that as soon as that part of the production code is updated, your test will break.
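To illustrate the difference, here is a minimal sketch (the `TodoService` class and its methods are hypothetical, not from the original text): one test couples itself to an internal call, the other only checks observable behavior.

```python
# Hypothetical example: a service whose internals may be refactored at any time.
class TodoService:
    def __init__(self):
        self._items = []

    def add(self, title):
        self._sorted_insert(title)  # internal detail: could be renamed or inlined

    def _sorted_insert(self, title):
        self._items.append(title)
        self._items.sort()

    def all(self):
        return list(self._items)

# Brittle style (avoid): breaks if _sorted_insert is renamed or inlined,
# even though the behavior is identical:
#   service._sorted_insert.assert_called_once_with("buy milk")

# Robust style: assert only the observable behavior.
service = TodoService()
service.add("buy milk")
assert service.all() == ["buy milk"]
```

The second assertion survives any internal refactor, as long as the behavior stays the same.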
A new class or function should not trigger a new test
The creation of a new class or function does not mean a test covering it should be created right away, especially if the new class or function is internal (not visible to other scopes of your software or to a client). Creating new tests helps build reliable software, but creating unnecessary ones makes the code rigid and painful to refactor. Always question a new test before adding it, and don't be afraid to remove one that is not meaningful to your project.
A new behavior should trigger a new test
If your software receives a new requirement, that should trigger a new test covering it. Try to focus your tests on how your application should behave. For example, if the application is a to-do list, the requirements are probably "create a new to-do", "update an existing to-do", "create an alarm for a to-do", etc. Those are the behaviors you should cover in your tests.
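As a sketch of this idea, using the to-do list example above (the `TodoList` class and its method names are assumptions for illustration): each requirement gets one behavior-level test, rather than one test per class.

```python
# Hypothetical to-do list: tests mirror requirements, not classes.
class TodoList:
    def __init__(self):
        self._todos = {}

    def create(self, title):
        todo_id = len(self._todos) + 1
        self._todos[todo_id] = {"title": title, "done": False}
        return todo_id

    def update(self, todo_id, title):
        self._todos[todo_id]["title"] = title

    def get(self, todo_id):
        return self._todos[todo_id]

# Requirement: "create a new to-do"
def test_create_todo():
    todos = TodoList()
    todo_id = todos.create("water the plants")
    assert todos.get(todo_id)["title"] == "water the plants"

# Requirement: "update an existing to-do"
def test_update_todo():
    todos = TodoList()
    todo_id = todos.create("water the plants")
    todos.update(todo_id, "water the garden")
    assert todos.get(todo_id)["title"] == "water the garden"

test_create_todo()
test_update_todo()
```

If the internals of `TodoList` are later split into several classes, these tests keep passing unchanged.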
Do not test internals
The internals of your software (private/protected/internal classes and functions) concern only the implementation details. They are the parts most likely to change during a refactor, and you should not test them. Instead, you should have a thin, testable layer (an API). With this API layer, it is possible to test the input and output of a behavior without testing every single internal of your software.
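A minimal sketch of this thin-API idea (the function names here are hypothetical): the private helpers are exercised only through the public function, so they can be reshuffled freely.

```python
# Internal details: not tested directly.
def _normalize(title):
    return title.strip().lower()

def _deduplicate(titles):
    seen, result = set(), []
    for title in titles:
        if title not in seen:
            seen.add(title)
            result.append(title)
    return result

# Thin public API: the only thing the tests touch.
def clean_titles(titles):
    return _deduplicate(_normalize(t) for t in titles)

# The test checks input and output of the behavior only.
assert clean_titles(["  Milk", "milk ", "Eggs"]) == ["milk", "eggs"]
```

Merging `_normalize` into `_deduplicate`, or renaming either helper, would not break this test.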
Mocking is a powerful tool for creating doubles in your tests, but it comes with a cost. Usually, a test using a mock needs to know some of the implementation details of the System Under Test (SUT). This can be an issue when refactoring your code.
Let's imagine a very simple scenario: we are developing a feature that selects all the users whose birthday falls in the current month. We basically have four classes:
Since our SUT is the use case class, we decide to mock the user repository, the filter, and the calendar provider. The first problem is that the test class needs to know which function inside each class must be mocked. And there is a bigger problem: if the structure of any of these classes changes, we break our tests. One of the major advantages of having tests in our code base is to ensure that refactoring does not break working code. But if our tests break so easily when we simply move an internal class or rename a function, how can we trust them?
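Here is a sketch of that scenario in Python using `unittest.mock` (the class and method names, like `GetBirthdayUsersUseCase`, `all_users`, and `current_month`, are assumptions for illustration). Notice how the test is forced to know exactly which methods the SUT calls:

```python
from datetime import date
from unittest.mock import Mock

# Hypothetical use case: returns users whose birthday is in the current month.
class GetBirthdayUsersUseCase:
    def __init__(self, repository, calendar):
        self._repository = repository
        self._calendar = calendar

    def execute(self):
        month = self._calendar.current_month()
        return [u for u in self._repository.all_users()
                if u["birthday"].month == month]

# Mock-based test: it must know the exact method names the SUT calls
# (all_users, current_month). Rename either one during a refactor and
# this test breaks, even though the behavior is unchanged.
repository = Mock()
repository.all_users.return_value = [
    {"name": "Ana", "birthday": date(1990, 5, 10)},
    {"name": "Bob", "birthday": date(1985, 9, 3)},
]
calendar = Mock()
calendar.current_month.return_value = 5

use_case = GetBirthdayUsersUseCase(repository, calendar)
result = use_case.execute()
assert [u["name"] for u in result] == ["Ana"]
```

The test passes, but it is coupled to the collaborators' interfaces rather than to the feature's behavior.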
During one of the conversations between Kent Beck, Martin Fowler, and David Heinemeier Hansson mentioned earlier, Kent Beck answered a question about mocking:
Do you mock absolutely everything? My personal practice is: I mock almost nothing. If I can’t figure out how to test efficiently with the real stuff, I find another way of creating a feedback loop for myself.
Which leads me to the next point.
We have other types of doubles to use in our tests. In the clarifying article "Mocks Aren't Stubs", Martin Fowler presents Gerard Meszaros's definitions for each kind of double: Dummy, Fake, Stub, Spy, and Mock.
You can find more information about their definitions in the links above, but the point is: you don't need to mock everything in your code. Actually, I think the double definitions can be simplified. In my personal experience, I try to use the real implementation whenever I can. If that is not possible, I create a fake representation, and only as a last resort do I mock it.
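A sketch of that "real, then fake, then mock" preference, reusing the birthday scenario (the names `FakeUserRepository` and `users_with_birthday_in` are hypothetical): the fake is a small working implementation with real behavior, so no mocking framework is needed.

```python
from datetime import date

class FakeUserRepository:
    """An in-memory fake, not a mock: it has real, if simplified, behavior."""
    def __init__(self):
        self._users = []

    def add(self, name, birthday):
        self._users.append({"name": name, "birthday": birthday})

    def all_users(self):
        return list(self._users)

def users_with_birthday_in(repository, month):
    return [u for u in repository.all_users() if u["birthday"].month == month]

# The test talks to the fake exactly as it would talk to the real repository.
repo = FakeUserRepository()
repo.add("Ana", date(1990, 5, 10))
repo.add("Bob", date(1985, 9, 3))
assert [u["name"] for u in users_with_birthday_in(repo, 5)] == ["Ana"]
```

Because the fake honors the same interface as the real repository, renaming an internal method of the production class does not force this test to change.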
While developing using TDD, I found myself questioning the software design more often than usual. Simple questions like "how can I test it?" or "if we invert this dependency, will it be easier to test?" help a lot in creating a better design. Of course, these questions may also come up during development that is not focused on testing, but in my experience they surface much faster when you shine a light on them.
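As a small sketch of the "invert this dependency" question (the helper `is_birthday_month` is hypothetical): instead of reading the system clock inside the function, the date is injected, so the test controls "today" without any patching.

```python
from datetime import date

def is_birthday_month(birthday, today=None):
    # The current date is a dependency passed in, not read internally,
    # so tests do not depend on when they happen to run.
    today = today or date.today()
    return birthday.month == today.month

# The test pins "today" explicitly.
assert is_birthday_month(date(1990, 5, 10), today=date(2020, 5, 1)) is True
assert is_birthday_month(date(1990, 5, 10), today=date(2020, 6, 1)) is False
```

Production code simply omits the `today` argument and gets the real clock.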
In the feature development mentioned earlier, we decided to refactor a simple utility class that is extensively used in that scope. The new version of it, developed with testing in mind, created a more flexible, reusable and API-like class in our project.
Concepts such as Clean Architecture and the SOLID principles play really well with testing in mind. It is very difficult to test a code base that does not have a good design: it is hard to replace real implementations with test doubles, and you will probably have to rely more on UI tests than on unit tests.
Testing is hard and it is not a technique that you will master after your first attempt. Give it a try, learn, fail and retry.
My feeling is that software testing is very underestimated. It is very common to find how-to articles, new frameworks, and new techniques, but it is not easy to find articles focused on testing. In fact, I have worked on several projects where the Software Testing Ice-Cream Cone was the (anti-)pattern, and there was no attempt to change that scenario.
Adding meaningful tests to your code base will make your project more reliable and make it easier to refactor and introduce new features. It will also make you grow as a developer, giving you more tools for better coding.
I believe I neglected tests during my first years in software development because we don't talk enough about this topic. But I also believe that a meaningful test is better than no tests at all, even if you are not sure about the technique. It all starts with one little green bar in your project (or a red one, if you are using TDD).
Throughout this article, there are several external resources with more information about each topic mentioned. To make them easier to find, all the links are available below:
Besides all the external links, I strongly recommend the books Test-Driven Development: By Example by Kent Beck and Clean Architecture by Robert Martin.
Thanks a lot for reading my article.
Let’s create better, more reliable and testable code together! 😊