How to Avoid False Positives and False Negatives in Test Automation

False positives and false negatives are both very real possibilities when it comes to test automation. However, there are some ways to avoid these issues.

By Sofia Palamarchuk · Apr. 18, 2016 · Opinion


One of the most delicate subjects in test automation is results that lie, otherwise known as false positives and false negatives. Those who have already automated know this to be an issue, and for those who are about to begin, fair warning: you will encounter this problem. What can we do to avoid false positives and false negatives in test automation? What can we do so that the test case does what it is supposed to do? Doesn’t that sound like... testing?

These definitions come from the medical field:

  • False Positive: an examination indicates a disease when there is none.

  • False Negative: an examination indicates everything is normal when in fact the patient is sick.

If one were to translate this to our field, we could say the following:

  • False Positive: when a test is executed and, even though the application works correctly, the test reports an error (that there is a disease). This adds a lot of cost, as the tester will search for a nonexistent bug.

  • False Negative: when the execution of a test shows no faults even though there is a bug in the application. This, as much as the false positive, can be due to an incorrect initial state of the database or problems with the test environment setup.

If the false positive is a problem because of the extra cost, the false negative is even worse: the errors are there, but we are not aware of them, and we feel at ease! We trust that all functionalities are covered and being tested, and therefore assume they must not have any mistakes.

We obviously want to avoid results that lie to us! No one likes lies. Automated test case results are expected to be reliable, so that we don’t waste time checking whether the results themselves are correct.

The only choice is to carry out a proactive analysis, checking the quality of our tests and anticipating possible errors. We must actually think about the test and not simply do record and playback.

To lower the risk of environment or data problems, we should have a controlled environment that is only accessible through the automated tests. This alone avoids some major headaches, because if the data is constantly changing, we will not be able to reproduce the problems the tests detect, nor find out what is actually wrong.
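As a minimal sketch of such a controlled environment, assuming a pytest-style suite and a hypothetical `test_environment.reset_database` helper that restores a known data snapshot (neither is from the article), each test could start from the same state:

```python
import pytest

# Hypothetical helper (not from the article): restores the test database
# to a known snapshot, e.g., by running SQL scripts or loading fixtures.
from test_environment import reset_database


@pytest.fixture(autouse=True)
def controlled_environment():
    """Reset the data to a known state before each test.

    Because only the automated tests touch this environment, every run
    starts from the same data and any failure stays reproducible.
    """
    reset_database(snapshot="baseline")
    yield
    # Restore the snapshot again so data created by this test cannot
    # affect the tests that run after it.
    reset_database(snapshot="baseline")
```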

Moreover, we should check the actual test cases! Because who can assure us they are programmed correctly?

And who better than us testers to test them?

In Search of False Positives

If the software is “healthy” and we don’t want it to display any errors, we must make sure the test is testing what it intends to test. So, we must verify the starting conditions just as much as the final ones. That is, a test case executes a determined set of actions with certain input data in order to verify the output data and the final state, but it is highly important (especially when the system we are testing uses a database) to make sure the initial state is what we expected it to be.

Therefore, if, for example, the test creates an instance of a particular entity in the system, it should verify whether that data already exists before executing the actions under test. If it does, the test will fail (due to a duplicate key or similar), but the problem is not with the system, it is with the test data. We have two options: check whether the data already exists and, if so, use that existing data, or end the test reporting the result as “inconclusive” (or are pass and fail the only possible results for a test?).
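A minimal sketch of that precondition check, assuming pytest and hypothetical `customer_exists`/`create_customer` helpers for the system under test (all assumptions, not from the article), could use `pytest.skip` as the closest thing to an “inconclusive” verdict:

```python
import pytest

# Hypothetical helpers (not from the article) for the system under test.
from app_client import create_customer, customer_exists


def test_create_customer():
    customer_id = "ACME-001"

    # Verify the initial state: if the entity already exists, a failure here
    # would be caused by the test data, not by the system under test.
    if customer_exists(customer_id):
        pytest.skip("Customer already exists: test data problem, result inconclusive")

    # Alternative to skipping: generate unique data on every run so the
    # precondition always holds, e.g. customer_id = f"ACME-{uuid.uuid4()}".
    result = create_customer(customer_id, name="ACME Inc.")
    assert result.status == "created"
```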

If we make sure all the things that could affect our result are in place, just as expected, then we will reduce the percentage of errors that aren’t errors.

In Search of False Negatives

If the software is “sick,” the test must fail! One way of detecting false negatives is to insert errors into the software and verify that the test case finds the mistake. This is in line with mutation testing. When not working directly with the developer, it is very difficult to inject the mistakes into the system, and it is also quite expensive to prepare every error, compile it, deploy it, and so on, just to verify that the test finds that fault. In many cases, the same effect can be achieved by varying the test data or playing around with different things. For example, if I have a plain-text file as input, I can change something in the content of the file in order to force the test to fail and verify that the automated test case finds that error. In a parameterizable application, it could also be achieved by modifying some parameter.
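As a rough sketch of this data-variation idea, assuming pytest and a hypothetical `parse_report` function that reads a plain-text report file (an assumption, not from the article), we can deliberately alter the input and confirm that our own check actually notices it:

```python
from pathlib import Path

import pytest

# Hypothetical function under test (not from the article): parses a
# plain-text report file and returns an object with the parsed fields.
from report_parser import parse_report


def check_report(path: Path):
    """The verification our automated test normally performs."""
    result = parse_report(path)
    assert result.total == 100
    assert result.items == 5


def test_check_notices_injected_error(tmp_path: Path):
    report = tmp_path / "report.txt"

    # Healthy input: the check must pass.
    report.write_text("total=100\nitems=5\n")
    check_report(report)

    # Inject an error by altering the input so the expected result no longer
    # holds. If the check still passes here, our test is a false negative.
    report.write_text("total=999\nitems=5\n")
    with pytest.raises(AssertionError):
        check_report(report)
```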

The idea is to verify that the test case notices the mistake, and that is why we try to make it fail with these alterations. In any case, the least we can do is ask: if the software fails at this point, will this test case notice it, or should we add some other validation?

Both strategies will give us more robust test cases, but keep in mind: will they be more difficult to maintain later? Of course, this will not be done for every test case we automate, only for the most critical ones, the ones that are really worthwhile, or perhaps the ones we know will stir up trouble for us every now and again.

Do you have other methods to prevent and detect false positives and false negatives in test automation?


Published at DZone with permission of Sofia Palamarchuk, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
