Founder: "obomprogramador.com". Full-stack dev / AI Engineer / Professional Writer / M.Sc. Rio de Janeiro
How do experts prove the effectiveness of a medication or a vaccine? Testing! It is the only way. And, if you have any judgment, you would never take a drug that has not been properly tested. So why are you not as careful about testing software?
"If you want more effective programmers, you will discover that they should not waste their time debugging; they should not introduce the bugs to start with." - The Humble Programmer, E. W. Dijkstra, ACM Turing Lecture, 1972.
Yes, what Dijkstra said is true; much has been invested in software quality: Structured Programming, Object Orientation, and Design Patterns, to name just a few.
But what about software testing?
Practices such as Segregation and Test Automation have been developed and employed over the past few decades. We can mention Test-Driven Development (TDD) and Behavior-Driven Development (BDD) as examples.
Even so, in 2021, we still see major software projects that use "ad hoc" testing practices.
(This is an anonymous meme, obtained from http://www.quickmeme.com/meme/359jzz; the original image is from Disney's Star Wars film.)
Why should I care?
How do you prove that the software is working? How do you know you are not introducing harmful side effects? How can you be sure that you delivered exactly what the customer ordered?
In Brazil, although we speak Portuguese, we have a saying for this:
La garantía soy yo (I am the warranty)
Should your word that the software passed tests you created and executed yourself be enough?
Let's assume you are a genius and that this is true. Now imagine that, a few years from now, you are no longer on the team and maintenance is required. How will we know it was performed correctly? How can we know it had no side effects? Can we be sure it did not break anything delivered before?
Without automated tests, it is impossible to know. We can never be sure of the software's test coverage, that is, what percentage of lines of code were actually exercised.
Automation and segregation can help you build better software
If you write automated tests and deliver them to the customer, they can verify for themselves that the software works properly. And, at the end of the day, they paid for it.
Ok. We can segregate, or separate, the tests according to some criteria. For example, "white box" tests examine the internal quality of the software in addition to checking expected results. They are very useful for measuring the percentage of lines of code executed, cyclomatic complexity, and several other software metrics. Unit tests are white box tests.
We have "black box" tests, in which we only want to know whether the expected results were obtained from the correct inputs. In this case, we don't care how the software did it. "Acceptance tests" are black box tests.
And we have the "gray box" tests, which combine the two modalities. We want to know if we have problems with data formats or interfaces. So-called "integration tests" can be considered as such.
Well, I have good and bad news ...
The good news is, yes! Black, white, and gray box tests can all be automated! The bad news is that you have to write the code for them!
And there's more! You must keep the tests at the same version as the software! If changes are made and it becomes necessary to modify or write new tests, you must keep separate versions. Do you know why?
It is not enough that the software passes the tests. It must pass all previous tests, which have not been affected by this change.
The lack of regression tests is one of the causes of "software brittleness", the growing fragility of software over the years, which makes maintenance very difficult and decreases its reliability.
Finally, if a customer has a problem with a previous version of your software, as long as you still support it, you should run the tests for the corresponding version to resolve the customer's problem. You cannot simply require the customer to update the version with each new release of your software.
This is only possible with tests stored according to the software version.
Before writing or changing the software source code, write the tests! Write them based on requirements (or user stories). Unit tests should test each unit independently of the rest of the software. A unit can be one or more classes, as long as they are never used separately. If a class or function can be used separately, then it is a unit of its own.
You must run the unit tests every time the unit changes, and you should exercise your unit only through the test code. In some programming languages, such as Java, there is a custom of creating classes with a "main" method just for testing. Do not do it! Use a unit test framework.
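A minimal sketch of this idea, using Python's built-in unittest framework (the article is language-agnostic, so Python is just one choice; apply_discount is a hypothetical unit invented for illustration):

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: applies a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    """The test code is the only 'main' this unit needs."""

    def test_regular_discount(self):
        # 10% off 200.0 should yield 180.0
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        # Out-of-range percentages must raise, not silently miscalculate
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

# Run with: python -m unittest <this_module>
```

Because the test lives in a test class rather than a throwaway "main" method, it can be checked into version control and re-run automatically at every build.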
Unit tests must be stored in your SCM - Software Configuration Management tool, along with the main software. And they must be executed at each "build".
But be aware! If you are testing a unit and it accesses something external, such as a database, then you have violated the "unit test" principle and created an "integration test". You must simulate (or "mock") the behavior of external components and databases, because unit tests must be executable without any dependencies!
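One way to apply this, sketched with Python's unittest.mock (the repository object and get_customer_name are hypothetical, invented to illustrate the pattern):

```python
import unittest
from unittest.mock import Mock

def get_customer_name(repository, customer_id):
    """Hypothetical unit under test: looks a customer up through a repository."""
    record = repository.find_by_id(customer_id)
    return record["name"] if record else None

class TestGetCustomerName(unittest.TestCase):
    def test_runs_without_touching_a_real_database(self):
        # The mock stands in for the external dependency (e.g. a database DAO)
        repo = Mock()
        repo.find_by_id.return_value = {"id": 42, "name": "Alice"}

        self.assertEqual(get_customer_name(repo, 42), "Alice")
        # We can also verify the unit talked to its dependency correctly
        repo.find_by_id.assert_called_once_with(42)
```

The unit is tested in complete isolation: no database, no network, no files, so the test stays fast and deterministic.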
The goal is to find out if the interface between the software components is working as intended.
Incorrectly formatted data, methods invoked at the wrong time, incorrect records... all of these can be considered "interface errors", and the purpose of this type of test is to catch them.
You can integrate components (separate units) in several ways. One of the most common is through a database or file system. A unit expects the presence of a certain record, with a certain format, in a given situation.
Another way is by direct or indirect invocation. For example, direct calls between routines, invocations via IPC (Interprocess communication) mechanisms, or via asynchronous messaging mechanisms. In this case, a unit also expects a certain message, with a certain format at a given time.
This is what you should test in this case. Integration tests do not replace other types of tests but complement them.
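A sketch of an integration check through a shared database, using Python's standard sqlite3 module with an in-memory database (save_order and load_orders are hypothetical units standing in for two separately developed components):

```python
import sqlite3

def save_order(conn, customer, total):
    """Hypothetical unit A: writes an order record for another unit to consume."""
    conn.execute("INSERT INTO orders (customer, total) VALUES (?, ?)",
                 (customer, total))
    conn.commit()

def load_orders(conn, customer):
    """Hypothetical unit B: expects records in the format unit A produced."""
    return conn.execute(
        "SELECT customer, total FROM orders WHERE customer = ?",
        (customer,)).fetchall()

# Integration test: unlike a unit test, both units share a real
# (here, in-memory) database, so the record format itself is exercised.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
save_order(conn, "Alice", 99.9)
assert load_orders(conn, "Alice") == [("Alice", 99.9)]
```

If unit A changed a column name or format, the unit tests of each piece could still pass, but this integration test would fail, which is exactly the class of error it exists to catch.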
These tests are designed based on requirements (or user stories) and are used to verify that, given a certain input, the expected result (and only that result) is produced.
There is a big difference between building the software right and building the right software. Unit and integration tests prove that each unit is doing what it should and that the units are communicating as expected. But only acceptance tests prove that the results match what the customer ordered.
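A black box acceptance check can be sketched as: take a user story, derive concrete input/output pairs, and assert them against the system's public entry point, without looking inside. Here checkout_total and the shipping rule are a hypothetical system and user story invented for illustration:

```python
def checkout_total(items):
    """Hypothetical system entry point: items are (unit_price, quantity) pairs.

    Hypothetical user story: "As a customer, I pay no shipping fee
    when my subtotal reaches 100."
    """
    subtotal = sum(price * qty for price, qty in items)
    shipping = 0.0 if subtotal >= 100 else 10.0
    return round(subtotal + shipping, 2)

# Acceptance checks derived directly from the story, treating the
# system as a black box: only inputs and expected outputs matter.
assert checkout_total([(50.0, 2)]) == 100.0  # subtotal 100 -> free shipping
assert checkout_total([(30.0, 2)]) == 70.0   # subtotal 60 -> +10.0 shipping
```

Note that nothing here inspects how the total is computed; if the customer's story changes, these assertions change with it, which is what ties the test suite to "the right software".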
But this is obvious!
Is it really? Then why do I still see so many people ignoring this? Why do I see so much complex software without a single line of unit tests?
Cleuton Sampaio, M.Sc.