The red-green-refactor cycle is a well-known mantra in test-driven development, and in this blog post I want to discuss the importance of starting with a red test. To recap the cycle: Red – write a failing test first; Green – write just enough production code to make it pass; Refactor – improve the code while keeping the tests green.
I very rarely do post-code testing (perhaps there is a better-known term for writing tests after the production code). I do that only when I encounter a piece of code that is not covered by tests and that I need to modify, and I'm happy when the test is green, as it reassures me that the code works as expected. Test-driven development, however, is not only about writing tests to verify that the production code works; it is, as the name suggests, a process of development. A couple of things pop into my head when I think about Red in TDD. First, demand: I write production code only when it is needed, that is, when a red test demands it. Second, YAGNI (You Aren't Gonna Need It): I do not write code that is not needed, because it might never be.
I'm only human, and I make mistakes. I make mistakes in production code and test code alike. The idea for this blog post came last week, when I encountered situations where the test was not red, and it was very tempting to leave it as is because it was GREEN. This was especially true when there were already several tests, leading me to think that I might have already covered the case. It may sound like a contradiction, but a GREEN test in the first step of TDD is a RED flag. In the following sections, I will walk through cases from last week where a mistake in the test made it green even though the test was incorrect.
I'm a backend developer and use TDD a lot. However, sometimes I have to work with front-end and do JavaScript development. I do my best to practice TDD while doing JavaScript development as well.
For JavaScript testing, I use karma as a test runner and jasmine as a test framework. In jasmine, the describe function defines a test suite, and the it function defines an individual test. I want to share how I mistyped it as if multiple times, leading to a false impression of all tests passing.
it('Seat map is selected when person of the seat map is selected',
    function () {
        let seatMaps = result.Seating.SeatMaps;
        expect(seatMaps[0].Selected).toBeTrue();
        seatMaps[2].Passengers[1].select();
        expect(seatMaps[2].Selected).toBeTrue();
    }
);
and
if('Seat map is selected when person of the seat map is selected',
    function () {
        let seatMaps = result.Seating.SeatMaps;
        expect(seatMaps[0].Selected).toBeTrue();
        seatMaps[2].Passengers[1].select();
        expect(seatMaps[2].Selected).toBeTrue();
    }
);
In all modern development environments and text editors, if is usually highlighted in a different color. I'm not colorblind, but it is still easy for me to miss. Additionally, my finger muscle memory is used to typing a two-letter word starting with i as if; it just happens automatically. As a result, there are no errors: the typo still produces a valid statement. All tests appear green when I run them. The mistake is easy to notice when it is the first test, because the runner reports that 0 tests have been run. It is much harder to spot when there are more tests. In this particular situation, my initial thought was, "aha, I already covered this functionality." Fortunately, my habit of treating a green first test as a red flag saved me.
This example is in C#. While it's not taken from production code, it captures the essence:
[DataRow(1, 2, 3)]
[DataRow(2, 3, 5)]
public void SomeFakeTest(int a, int b, int c)
{
    Assert.AreEqual(c, a + b);
}
In the MSTest framework, the DataRow attribute alone is insufficient to mark a method as a test. The TestMethod or DataTestMethod attribute is required for the method to be recognized as one. I have found myself forgetting this attribute a few times, which resulted in tests not being run and created a false impression of all tests passing.
The following example, adapted from my previous blog post, demonstrates the situation. I recall the instances when I write a test: I think of a test case and a test name, write the arrange part, then the act, and then get distracted. When I return to see where I left off, I usually run the tests, look for the red ones, and continue from there. But in this particular case, all are green. This case is tricky: it's easy to forget that I left the test unfinished, and it's green simply because it has no assertions.
[TestCaseSource(typeof(UnexpectedPaymentMessageIsSentWhenInvoiceIsOverpaidOrUnknownCases))]
public async Task UnexpectedPaymentMessageIsSentWhenInvoiceIsOverpaidOrUnknown(
    Invoice invoice,
    string message)
{
    _invoiceRepositoryResult = invoice;
    await CallCallback();
}
Recently, I needed to modify a DTO converter to return passengers ordered by sequenceNumber rather than in a somewhat random manner. I'm using FluentAssertions, which allows passing a configuration to the BeEquivalentTo method. I simply typed config.With, and Visual Studio's IntelliSense suggested a list of options. I chose the first one that started with With and ended with Ordering.
seatMapDto
    .Passengers
    .Select(x => x.FirstName)
    .Should()
    .BeEquivalentTo(
        new[]
        {
            "GUDMARIN",
            "GUDMARIANA",
            "TOM"
        },
        config => config.WithoutStrictOrdering(),
        "should be ordered by sequenceNumber"
    );
I ran the test, and it was green. Since I expected the test to be red, I first made sure the input data in the arrange part was unordered, to give the test a nonsequential order to fail on. Only later did I realize that I had chosen the WithoutStrictOrdering configuration instead of WithStrictOrdering.
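The difference between the two configurations can be sketched in plain JavaScript (the helper names below are hypothetical, for illustration only, not the FluentAssertions API): an order-insensitive comparison passes for any permutation of the expected values, which is exactly the false green I hit, while an order-sensitive one fails as intended.

```javascript
// Order-insensitive comparison (analogue of WithoutStrictOrdering):
// both arrays are sorted before comparing, so any permutation passes.
function equivalentIgnoringOrder(a, b) {
    return JSON.stringify([...a].sort()) === JSON.stringify([...b].sort());
}

// Order-sensitive comparison (analogue of WithStrictOrdering):
// elements must match position by position.
function equivalentInOrder(a, b) {
    return JSON.stringify(a) === JSON.stringify(b);
}

const actual = ['GUDMARIANA', 'GUDMARIN', 'TOM'];   // converter output, wrong order
const expected = ['GUDMARIN', 'GUDMARIANA', 'TOM']; // order required by sequenceNumber

console.log(equivalentIgnoringOrder(actual, expected)); // true  -- the false green
console.log(equivalentInOrder(actual, expected));       // false -- the red I wanted
```

With the ordering ignored, no amount of shuffling the arrange data can ever make the test red, which is why my first instinct of unordering the input changed nothing.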
Being human and making mistakes is not easy :). In this blog post, I wanted to show how silly mistakes can create the impression that tests are green, and to emphasize the importance of starting with a red test first.
Have you encountered similar situations where a mistake in a test led you to believe that production was working as expected? Please share your experiences in the comments.
If you enjoyed this post, please click "like" and "follow". Feel free to explore my other blog posts, where I write about test-driven development and my pet project EasyTdd, a Visual Studio extension that makes test-driven development simpler.