Improving Testing Algorithms: Mathematical Approaches in Software Testing

Written by shad0wpuppet | Published 2024/01/24
Tech Story Tags: software-qa | qa | software-testing | software-quality-assurance | math | software-quality | algorithms | testing-algorithms

TL;DR: The article explores testing methodologies, emphasizing the role of mathematical models in optimizing code coverage. It discusses minimizing logical expressions, optimizing pairwise testing, and testing changing system states using algorithms. Key conclusions highlight the efficacy of these methods in achieving maximum test coverage with minimal effort. There are challenges and insights in adapting these algorithms to different systems. It's important to understand the theoretical foundations for effective testing.

New test design methodologies do not appear out of nowhere. A significant portion of modern testing practice has evolved through meticulous theoretical and experimental work on adapting mathematical models. Although it is not necessary to be a mathematician to be a good tester, understanding the theoretical foundation behind testing methods can be beneficial.

Maximizing Coverage and Minimizing the Number of Test Cases

Mathematical logic can be applied to optimize code coverage for a system. Let's consider a simple example with an "if" statement containing two branches and a lengthy logical formula in the condition:

if ( (a1 & a2) & (!(a2 || a4)) || a3 ) {
   # action 1
} else {
   # action 2
}

To cover both branches, it is necessary to understand the structure of the formula. What can be done? You can always test this piece of code (the logical formula) exhaustively, which for four boolean variables means 2^4 = 16 tests. However, this is quite a lot, and efforts should be made to reduce the number of tests. The number of tests can be reduced using the Modified Condition/Decision Coverage (MC/DC) method (yielding 11-12 tests). If branch coverage is sufficient for the testing risks, only two tests are needed, but it's unclear which ones.

To solve this problem, boolean algebra can be applied to the logical formula:

if ( (a1 & a2) & (!(a2 || a4)) || a3 ) =
= ( (a1 & a2) & (!a2 & !a4) || a3 ) =
= ( a1 & a2 & !a2 & !a4 || a3 ) =
= 0 || a3 = a3

After transforming the original formula (the second step applies De Morgan's law: !(a2 || a4) = !a2 & !a4), it becomes apparent that only one variable, a3, actually influences the truth value. As a result, obtaining test cases becomes simple (one with a3 == true and the other with a3 == false). Moreover, it becomes evident that the code is not optimized, as it is odd to have a complex logical expression depend on only one variable. Unfortunately, such situations are quite common in reality, and the example provided is relatively straightforward.

In conclusion, after minimizing the formula:

  • 2 tests if exhaustive testing is used

  • 2 tests with the MC/DC method

  • 2 tests if branch coverage is applied

In general, logical expressions can be simplified (minimized) using boolean algebra, mathematical methods, and algorithms. There are at least three such methods: direct transformations using boolean algebra, as shown above, always work, and Karnaugh maps and the Quine-McCluskey algorithm are the classic systematic alternatives. Beyond these, minimization methods that take a specific domain's characteristics into account, relying not only on math and logic but also on the domain's peculiarities, can be found and applied.
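
Such manual transformations are easy to get wrong, so it is worth sanity-checking them mechanically. Below is a minimal sketch using Python's sympy library (the variable names a1-a4 follow the example above):

from sympy import symbols
from sympy.logic.boolalg import simplify_logic

a1, a2, a3, a4 = symbols("a1 a2 a3 a4")

# The original condition: (a1 & a2) & !(a2 || a4) || a3
expr = ((a1 & a2) & ~(a2 | a4)) | a3

# simplify_logic minimizes the expression; here it collapses to a3
print(simplify_logic(expr))

If the printed result disagrees with the hand-derived formula, one of the transformation steps is wrong.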

Optimizing Pairwise Testing

The pairwise testing method generates test sets so that, instead of testing all possible combinations of input parameters exhaustively (which can be time-consuming and resource-intensive), each value of every parameter is combined with each value of every other tested parameter at least once. This significantly reduces the number of test cases.

It is a well-established and frequently used method. Unfortunately, it does not always work as systems become more complex. The question arises: is pairwise testing sufficient to thoroughly test a complex system with numerous input parameters? This question has intrigued many testing professionals and researchers, including those at the National Institute of Standards and Technology (NIST) in the United States. Their studies of real-world failures found:

  • Pairwise finds 65-97% of errors
  • 3-way finds 89-99% of errors
  • 4-way finds 96-100% of errors
  • 5-way finds 96-100% of errors
  • 6-way finds 100% of errors

According to these studies, pairwise testing finds 65-97% of errors. If we combine not pairs of parameters but triples or quadruples, i.e., use k-way testing, we get a greater number of tests but also catch more errors.

For example, suppose a system has two parameters with three values each and three parameters with two values each, giving 3 × 3 × 2 × 2 × 2 = 72 exhaustive combinations (the coverage below is the share of those 72 combinations):

  • Pairwise: 10 tests with 14% coverage
  • 3-way: 18 tests with 25% coverage
  • 4-way: 36 tests with 50% coverage
  • 5-way: 72 tests with 100% coverage

You can choose a satisfactory level of test coverage and an acceptable number of test cases.
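
As an illustration, here is a minimal greedy sketch in Python of a t-way test generator. The domains mirror the example above, the values are made up, and a greedy cover is not minimal, so the exact counts may differ slightly from the table; dedicated tools such as PICT or NIST's ACTS do this job far better:

from itertools import combinations, product

def greedy_cover(domains, t=2):
    # Collect every t-tuple of (columns, values) that must be covered
    col_sets = list(combinations(range(len(domains)), t))
    uncovered = {
        (cols, vals)
        for cols in col_sets
        for vals in product(*(domains[c] for c in cols))
    }
    candidates = list(product(*domains))  # feasible for small parameter spaces
    tests = []
    while uncovered:
        # Pick the candidate test covering the most still-uncovered t-tuples
        best = max(
            candidates,
            key=lambda row: sum(
                (cols, tuple(row[c] for c in cols)) in uncovered
                for cols in col_sets
            ),
        )
        tests.append(best)
        for cols in col_sets:
            uncovered.discard((cols, tuple(best[c] for c in cols)))
    return tests

# Two parameters with three values, three parameters with two values
domains = [[1, 2, 3], [1, 2, 3], [0, 1], [0, 1], [0, 1]]
for t in (2, 3, 4, 5):
    print(f"{t}-way: {len(greedy_cover(domains, t))} tests")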

The foundation of pairwise testing is orthogonal arrays, which contain every t-tuple (pair, triple, quadruple, ...) of values an equal number of times.

The usual notation for pairwise and k-way testing is OA(N, V^k, t), where:

  • N is the number of rows

  • k is the number of columns

  • V is the number of different values in a column

  • t is the strength (t=2 for pairwise)

In an OA, each set of t columns includes every possible t-tuple of values an equal number of times.

Instead of orthogonal arrays, it is better to use covering arrays. These differ from orthogonal arrays in that each set of values occurs at least once, not "an equal number of times." In this case, there are slightly fewer tests. Incorrect test cases may arise with covering arrays, but overall, the testing process is much faster and significantly simpler.

CA(N, V^k, t), where:

  • N is the number of rows
  • k is the number of columns
  • V is the number of different values in a column
  • t is the strength (t=2 for pairwise)

In a CA, each set of t columns includes every possible t-tuple of values at least once. Using covering arrays allows moving from pairwise to k-way testing without significantly increasing the number of tests.
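
The CA property is easy to verify mechanically. A minimal sketch in Python (the function and domains are illustrative; replacing "at least once" with "an equal number of times" would turn this into an OA check):

from itertools import combinations, product

def is_covering_array(rows, domains, t):
    # Every set of t columns must contain every possible t-tuple of values
    for cols in combinations(range(len(domains)), t):
        seen = {tuple(row[c] for c in cols) for row in rows}
        required = product(*(domains[c] for c in cols))
        if any(tup not in seen for tup in required):
            return False
    return True

domains = [[1, 2, 3], [1, 2, 3], [0, 1], [0, 1], [0, 1]]
full = list(product(*domains))  # the exhaustive suite is trivially a CA
print(is_covering_array(full, domains, 2))  # True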

System States and Testing of Changing System States

Usually (almost always), a system has more than two states: "working" and "not working." As an example, let's consider the states of a stock order. An order to buy or sell stocks must pass through a series of states for the transaction to be completed: first, the order is created, then it is confirmed by the exchange, followed by numerous small purchase transactions, and finally, the required amount of stocks is bought or sold. All states of a stock order are reflected in trading systems, and all transitions and states must, of course, be tested.

In practice, either all states or all transitions are tested; more often than not, both are verified. Full coverage is achievable, but it will be time-consuming, expensive, and resource-intensive.

Graphs and Finite Automata

Let's consider the traveling salesman problem and the de Bruijn algorithm. It is enough to understand that the algorithm yields an optimal, or sufficiently close to optimal, set of short paths that can be traversed in a graph to cover it completely (strictly speaking, any other algorithm that accomplishes something similar can be used, or one can invent a custom algorithm).

  • First, take the original graph of the system's states and build a new graph whose vertices correspond to transitions in the original graph.
  • Next, cover the vertices of the new graph, i.e., the transitions of the old one, with paths.
  • Some paths are obvious and turn out to be quite short (which is very convenient for testing the states and transitions of the system).
  • Continue building the remaining paths. As a result, they may become too long (and that's not good).

Let's consider the following example to analyze the situation:

There are three testers and three test paths: the first tester performs the first test, the second performs the second, and the third performs the third. The first two complete their tests quite quickly because their paths are short, but the third takes a very long time because the third path is very long.

If the de Bruijn algorithm is applied, the third sequence can be "cut" into several shorter sequences, and the execution of all tests can be parallelized efficiently.

We may end up with more tests, but with parallel execution, testing will finish much faster because the tests are shorter.
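
Here is a minimal sketch in Python of this idea. The order life cycle below is a made-up illustration, and for simplicity the greedy walk covers transitions directly rather than building the de Bruijn-style transition graph; in a real suite, segments that do not start in the initial state would need a setup prefix:

from collections import defaultdict

# Hypothetical stock-order life cycle (illustrative, not a real trading system)
transitions = [
    ("created", "confirmed"),
    ("confirmed", "partially_filled"),
    ("partially_filled", "partially_filled"),
    ("partially_filled", "filled"),
    ("confirmed", "cancelled"),
    ("created", "rejected"),
]

def cover_transitions(edges, start, max_len=3):
    # Walk the graph until every transition is visited at least once,
    # cutting each walk at max_len transitions so test cases stay short
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)
    uncovered = set(edges)
    paths = []
    while uncovered:
        path, state = [start], start
        while len(path) - 1 < max_len:
            # Prefer an outgoing transition that is still uncovered
            nxt = next((d for d in graph[state] if (state, d) in uncovered), None)
            if nxt is None:
                break
            uncovered.discard((state, nxt))
            path.append(nxt)
            state = nxt
        if len(path) > 1:
            paths.append(path)
        else:
            # No uncovered transitions from here; restart from one that remains
            start = next(iter(uncovered))[0]
    return paths

for p in cover_transitions(transitions, "created"):
    print(" -> ".join(p))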

Moreover, with more tests, there is more flexibility in their execution. All tests can be run simultaneously, or uninteresting and less important tests can be removed. Higher priorities can be assigned to tests that pass through the most important states of the system. There are many ways to leverage the results of the algorithm.

As a plus, the algorithm does not rely on anything domain-specific; it works with completely abstract states and transitions of the system.

In this technique, much depends on how the algorithm is used. In the extreme case, testers may know nothing about the logic behind transitions between states. In such a situation, the long chain of state transitions will be "cut" by the algorithm into several short ones, and some of these chains may turn out to be meaningless. Therefore, the obtained chains need to be evaluated for reasonableness, keeping those that are important and meaningful for testing. Meaningless and unimportant, but technically possible, paths through the system's states are also valuable: they show which part of the system needs modification, and exactly which part.


The key conclusions are as follows:

  • Algorithms for minimizing logical expressions provide maximum test coverage with minimal effort. That said, minimization is not always necessary; sometimes it's a waste of time, and universal approaches exist.
  • Complete test coverage can be achieved with a small set of short test cases that are easy to parallelize and automate, and that remain flexible and independent.
  • Analysis of statistics on application usage in real conditions allows for optimizing and adapting existing test design techniques and achieving the necessary test coverage, guaranteeing application quality.
  • Modification of pairwise testing allows for more profound testing with greater test coverage than standard algorithms and with almost the same resources.
  • Some algorithms might be effective in cases where standard test design techniques are less efficient.
  • Studying the cases where combinatorial test design techniques fail presents challenges but also yields useful insights.
  • The obtained algorithms can be easily adapted to different systems and do not require special knowledge of the domain.

As for minor drawbacks, it's worth noting:

  • Not all algorithms are effective and relevant in every case; in many situations, standard test design techniques are equally or sometimes even more effective.
  • Applying these algorithms requires some mathematical knowledge and sometimes extra time, which may become a decisive factor in many situations.

For a QA professional, understanding these nuances is important. While somewhat theoretical in places, understanding the complexity of combinatorial test design techniques allows QA professionals to effectively test the complicated business logic of apps and deliver high-quality software to their users.


Written by shad0wpuppet | I'm a Software QA Team Lead and Engineer/Analyst with 10+ years of experience working with all sorts of web apps
Published by HackerNoon on 2024/01/24