Our perceptions impact how we test.

Testing is an art of judgment: deciding how to evaluate features, what types of scripts to write and how those cases will be executed. Although agile test management helps teams accomplish all of these tasks, gaping holes can still appear in test coverage. These gaps are often introduced by the testers themselves, frequently without their knowledge.
Cognitive biases affect everyone. They arise when people make judgments based on their own perception of inputs, creating a subjective reality of their own. Teams must understand which cognitive biases exist, and how to avoid them, to improve testing and consistently deliver quality applications.

1. Inattentional blindness

It’s very easy to miss the most obvious things when you aren’t looking for them. That is the essence of inattentional blindness. In an interview with StickyMinds, QA consultant Gerie Owen cited an experiment in which participants were instructed to watch a basketball clip and count the passes. The viewers were so intent on this task that half of them missed a person in a gorilla suit dancing across the court in the middle of the game.
This blindness carries over into testing: QA professionals can be so focused on one thing that they miss other significant things happening around it. It’s important to keep an open mind and collaborate effectively with the team to surface issues that might otherwise go unnoticed, providing more complete requirements coverage.

2. Fundamental attribution error

This bias means that people readily blame others for issues but rarely turn that perspective inward when they make the same mistakes. Industry expert Jonathan Klein noted that if someone else creates a bug, he or she is often seen as negligent, but if you do it, you reason that you were tired, rushed or working from poorly defined requirements. Cultures with this attitude can become contentious and can recreate the old divisions between developers and testers.
To combat this error, teams must actively find ways to reduce the odds of future failures. This can include committing fragile areas to automated regression tests so the same code doesn’t break the same way twice (see the sketch below). Testers and developers must also work collaboratively, taking ownership of their work and their faults alike. This reduces negativity and provides a path to quality improvement.
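For instance, here is a minimal pytest-style sketch of such a regression test. The parse_quantity function, the whitespace bug and the ticket number are all hypothetical stand-ins, not anything from the article; the point is simply that a fixed bug gets a permanent, automated guard.

```python
# Minimal regression-test sketch (pytest conventions). The function,
# the whitespace bug, and the ticket number are hypothetical examples.

def parse_quantity(raw: str) -> int:
    """Stand-in for production code that once crashed on padded input."""
    return int(raw.strip())  # fix: the original code called int(raw) directly


def test_parse_quantity_survives_padded_input():
    # Guards the (hypothetical) QA-1042 failure, where " 3 " raised
    # ValueError, so the same breakage can't be reintroduced unnoticed.
    assert parse_quantity(" 3 ") == 3
```

Once a test like this lives in the suite, nobody has to argue about who caused the bug: the failure mode itself is owned by the whole team.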

3. Congruence bias

Congruence bias occurs when testers plan and execute tests based solely on their own hypotheses, without considering alternatives. In practice, QA validates that the functionality works in the ways they expect but never tests for variations in behavior, according to Paragon. This leads to missed negative test cases and can cause major problems, especially once the program is released to users.
Testers need to commit to true exploratory testing, trying to break the system in the ways a real user might: entering alpha characters in a numeric field, for instance, and running other such experiments (a sketch follows below). This reduces congruence bias and exercises different situations to ensure the software behaves as expected. Quality testing tools will help keep test cases aligned with their associated features throughout this process.
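As a concrete illustration, the alpha-characters experiment could be captured as an automated negative test in Python with pytest. The validate_quantity_field function and its rules are hypothetical assumptions for the sketch, not from the article; any real form validator would stand in its place.

```python
import pytest


def validate_quantity_field(value: str) -> int:
    """Hypothetical validator behind a numeric form field."""
    if not value.isdigit():
        raise ValueError(f"quantity must be numeric, got {value!r}")
    return int(value)


# Congruence bias would stop after confirming that "123" passes; these
# parametrized cases probe the variations a real user might actually type.
@pytest.mark.parametrize("bad_input", ["abc", "12x", "", " ", "1.5"])
def test_quantity_field_rejects_non_numeric_input(bad_input):
    with pytest.raises(ValueError):
        validate_quantity_field(bad_input)
```

Keeping negative cases like these alongside the happy-path checks means the suite itself pushes back against the tester’s expectations.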

4. Confirmation bias

Like congruence bias, confirmation bias draws us toward test cases we already know will pass. It occurs mostly in manual testing, because it makes the process much shorter. It differs from congruence bias in that people actively go out of their way to declare that something works, leaving others to clean up the mess later. This puts pressure on project schedules and costs, and creates tension between team members.
“This is one of the harder biases to get over in my opinion, because it means acknowledging our own limitations, and really stressing the fragile parts of the code that we write,” Klein wrote. “We all want and expect our software to work, so we are inescapably drawn to evidence that confirms this desire. Keep fighting this urge, keep testing, and always question your assumptions.”