How To Perform Exhaustive Testing

Exhaustive testing is a software testing approach in which all the behaviors of a system are explored during the tests. While this approach is good in theory, it is not feasible in most testing contexts, where time and resources are limited. This article presents the distinguishing characteristics of exhaustive testing and explains how to perform it.

Author: Nataliia Syvynska, TestMatick, https://testmatick.com/

A large amount of something is not always a good indicator of quality. Any self-respecting software tester is a true perfectionist who always strives to ensure that the software is of the highest quality. It is a good trait, but sometimes it can bring undesired results. To prevent this, it is necessary to become familiar with the concept of “exhaustive testing”.

This material will help you set professional criteria and methods for drawing up a universal software testing plan.

A frequent piece of advice that you can find in articles for aspiring testers is: “You should always test everything you see, using all the methods you know”. From a software quality assurance point of view, this approach is not considered bad or wrong, especially when it comes to programs for a nuclear power station or a spacecraft.

Nevertheless, in the era of widespread commercial testing for corporate (and, less often, government) clients with a specific budget, this advice can become destructive. By expanding the boundaries of the testing field and the number of tests toward infinity, a quality assurance (QA) engineer pushes the time needed for these tests toward infinity as well. The software tester ends up with a checklist of a couple of hundred thousand small items, all of which seem very important, while the time allotted for the tests remains exactly as it was originally defined.


Number of Tests

The redundancy of software testing can be caused by the following reasoning: “Can I perform some specific type of tests on this project?” In 95% of cases, the answer will be: “Of course, you can”. But before you start expanding the field of testing and adding new tools and techniques to the environment under test, it is worth thinking about the meaning of the following statement.

Any test, even the most insignificant one, is an opportunity to obtain certain data about the project or the functionality being checked. By performing a new test, the QA engineer either learns something new about the software, or the effort leads to no result at all.

To make this clearer, let’s analyze an example that can be encountered in real life.

During an interview, a candidate is asked to propose approaches to testing a standard WordPress site (this is the kind of question that has no wrong answers).

The candidate, like most people in an interview, will try to show that he or she knows many techniques and methods of software verification, and will thus add unnecessary tests unintentionally.

For example, load testing. Why are these tests excessive? It’s simple: the results obtained will be completely useless, because all we learn is the traffic limit of the test server. This tells us nothing about the behavior of our site in production. And that is without mentioning the issues that can be caused by bringing down a server that also hosts other projects.

Probability and Uncertainty

Probability and uncertainty are common reasons for significant increases in total testing time. What is meant by these categories?

Uncertainty is something every tester faces on every project, new or old. Did we test everything we should have? Is there any functionality that still contains errors? If you do nothing about such uncertainty, the time needed to check the software will grow steadily, because there is always a reason to look at the project a little differently: in a different test environment, at a different resolution, or with other input data.

Probability is the possibility that an error occurs. Defects exist and always will, but they are detected only at the moment they are observed. This means that until a defect is found, one cannot be sure that it does not exist at all.

This suggests that every check a tester performs automatically increases the chance of finding a bug. In other words, testing is like fishing: the more nets and rods you cast, the higher the chance of catching something. But it is just as unreasonable to cast nets on the shore, where there is no chance of catching fish, as it is to perform unnecessary checks.

Spreading checks everywhere also violates one of the basic principles of testing, namely defect clustering. It is the popular 80/20 rule: 80% of defects are concentrated in 20% of the functional units. And this means that you need to look for errors exactly there.
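To illustrate defect clustering, here is a minimal Python sketch that finds the smallest group of modules accounting for roughly 80% of recorded defects. The bug records and module names are invented for the example; in practice the data would come from your bug tracker.

from collections import Counter

# Hypothetical bug records exported from a tracker: (bug_id, module).
bugs = [
    (1, "checkout"), (2, "checkout"), (3, "checkout"), (4, "checkout"),
    (5, "search"), (6, "search"), (7, "search"),
    (8, "profile"), (9, "admin"), (10, "checkout"),
]

def pareto_modules(bug_records, threshold=0.8):
    """Return the smallest set of modules that covers `threshold` of all defects."""
    counts = Counter(module for _, module in bug_records)
    total = sum(counts.values())
    selected, covered = [], 0
    for module, count in counts.most_common():
        selected.append(module)
        covered += count
        if covered / total >= threshold:
            break
    return selected

print(pareto_modules(bugs))  # ['checkout', 'search'] -> focus the testing effort here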

Time for Tests

At first, it may seem that all of the above is very far from the original interpretation of the essence of software testing. But testing is, among other things, an activity whose purpose is to reduce the loss of precious time and, as a result, of budget. Defects cost the client both time and money, since programmers need time to fix the errors that are found. Hence, as the verification time grows, its cost automatically grows as well. And in the most extreme situations, you pay the increased cost of testing plus the cost of the developer’s time spent fixing a bug that the client finds anyway.
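A hypothetical back-of-the-envelope calculation makes this trade-off concrete; all the numbers below are invented assumptions, not figures from any real project.

hourly_rate = 50          # assumed cost of one QA hour
testing_hours = 40        # hours planned for verification
fix_cost_escaped = 2000   # assumed cost of fixing a bug reported by the client

testing_cost = hourly_rate * testing_hours
print("Cost of testing:", testing_cost)                # 2000

# Worst case from the paragraph above: the inflated testing bill is paid
# AND the bug still escapes, so the fix cost is paid on top of it.
print("Worst case:", testing_cost + fix_cost_escaped)  # 4000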

In this situation, the testing process becomes completely meaningless. Quality assurance consultants are faced with an obvious contradiction: from the testing point of view, the number of checks must be increased; from the business point of view, it must be reduced; and at the same time, the overall quality of the software must not suffer.

The solution to this situation is exhaustive testing, which is always based on one of the basic principles of the process: checks depend on the context. If we have a huge budget and a lot of time, we test everything. If the budget is small, we check only the basic functionality.
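A minimal sketch of such context-driven scope selection follows; the tiers, activities, and thresholds are illustrative assumptions to be agreed with the project manager, not fixed rules.

# Hypothetical test tiers; the names and contents are assumptions for illustration.
TEST_TIERS = {
    "basic":    ["smoke tests", "critical user flows"],
    "standard": ["smoke tests", "critical user flows", "regression suite"],
    "full":     ["smoke tests", "critical user flows", "regression suite",
                 "cross-browser checks", "load tests"],
}

def select_scope(budget_days: int) -> list[str]:
    """Pick the test scope that fits the time and budget the project actually has."""
    if budget_days >= 20:
        return TEST_TIERS["full"]
    if budget_days >= 5:
        return TEST_TIERS["standard"]
    return TEST_TIERS["basic"]

print(select_scope(3))   # small budget -> only basic functionality is checked
print(select_scope(30))  # large budget -> everything, including load tests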

What Do We Have To Do?

Under such conditions, how do you determine what does not need to be tested and what should only be checked on a residual basis? It is great when you can discuss and agree on the testing approach with the project manager. But in real life, it often happens that the tester is simply asked to check everything so that no errors are found anywhere. This suggests that it is the manager’s responsibility to define the criteria by which the QA engineer will develop the list of test operations. When developing such criteria, you can rely on the requirements for the given project, the project’s logic, and the frequency with which specific functionality is accessed.

Let’s look at each item separately.

Requirements. You always need to compile your own list of requirements for the project’s functionality. Sometimes these are called “implied requirements”: requirements that are absent from the documents but follow from common practice and logic.

Project logic. Each project is individual: it has its own purpose, functions, and objectives. If you understand the goal, you can easily separate the basic functionality that requires close attention from the auxiliary functionality, and test the latter on a residual basis.

Frequency of user access to functionality. If users tend to favor a specific browser, OS, or platform, then it should be tested first. If it is an ongoing project, you will most likely have useful usage statistics. If it is a new one, you will have to rely on general data about the popularity of devices, systems, and browsers.
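These three criteria can be combined into a simple priority score, as in the sketch below. The feature names, weights, and scales are illustrative assumptions; the actual weighting is something to agree on with the project manager.

# Hypothetical feature list: (feature, covered by documented or implied requirements?,
# criticality from 1 to 5, share of user traffic).
features = [
    ("checkout",         True,  5, 0.40),
    ("search",           True,  4, 0.35),
    ("profile settings", True,  2, 0.15),
    ("admin export",     False, 3, 0.05),
]

def priority(requirement_covered: bool, criticality: int, usage_share: float) -> float:
    """Higher score = test earlier; the weights here are assumptions, not fixed rules."""
    return (2.0 if requirement_covered else 1.0) * criticality * usage_share

ranked = sorted(features, key=lambda f: priority(f[1], f[2], f[3]), reverse=True)
for name, *_ in ranked:
    print(name)  # checkout and search come first; admin export is tested on a residual basis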

Conclusions

If you use these principles, you can easily draw up a test plan for any project, including all the time slots for its verification. An exhaustive testing approach (like any other approach) does not guarantee that you will find all the bugs. But on the other hand, it ensures that all the requirements are met, and that the basic functionality and the most demanded parts of the project behave correctly.