Manual Testing: A Complement or Prerequisite to Automated Testing?



Just as car manufacturers wouldn't start selling to customers without first having conducted a bevy of road and safety tests, delivering a solution to market without thoroughly vetting it can result in some serious backlash. Bad software does not necessarily put users' lives at risk, but it can cause serious workflow clogs in the enterprise that eat into the bottom line, and it may even introduce security vulnerabilities that result in lost or stolen assets.

Given the significance of software testing, it's unsurprising that different developers have varying opinions when it comes to how to go about it. Granted, there is not necessarily a single best way to execute software testing, and the methodology will ultimately vary according to the nature of the product being tested.

This is especially true when it comes to the topic of manual testing, and its relationship to regression testing. More specifically, should everything be tested manually prior to automated regression testing? Let's take a closer look. 

A Holistic View of the Testing Process

As software development moves to agile, and as DevOps culture continues to take hold, agile testing methodologies are becoming the norm. This means many developers and testers must dispel old black-and-white notions of manual testing versus automation testing. Both have their place in modern software development, and it's more a matter of knowing when and where to use each. One may be favorable over the other, depending on the type of test.

Thus, it makes the most sense to start by clearly defining what must be achieved with each layer or type of testing. Part of this process is determining which tests cover truly essential functionality, and which tests probe known issues - perhaps because they were reported by users in past versions or in similar software. Once this is done, determining exactly how these tests will be automated - i.e., how the scripts will interact with and exercise the application - becomes a hands-on process. According to test management system specialist Zephyr, this is where automation integration for regression testing must be "stepped through manually."
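The triage step described above can be sketched in code. This is only an illustration: the test names and the `essential`/`known_issue` fields are hypothetical, standing in for whatever criteria a real team would record in its test management system.

```python
# A minimal sketch of regression-test triage: before automating, classify
# each test by whether it guards essential functionality or probes a
# previously reported issue. All names and fields here are illustrative.

test_plan = [
    {"name": "login_flow",        "essential": True,  "known_issue": False},
    {"name": "report_export_pdf", "essential": False, "known_issue": True},
    {"name": "settings_tooltip",  "essential": False, "known_issue": False},
]

def needs_manual_walkthrough(test):
    # Important or issue-probing tests get stepped through manually first,
    # so their automation is integrated thoughtfully; the rest can be
    # automated directly.
    return test["essential"] or test["known_issue"]

manual_first = [t["name"] for t in test_plan if needs_manual_walkthrough(t)]
automate_directly = [t["name"] for t in test_plan
                     if not needs_manual_walkthrough(t)]

print(manual_first)       # login_flow and report_export_pdf
print(automate_directly)  # settings_tooltip
```

The point of the sketch is simply that the expensive manual walkthrough is reserved for the tests where getting the automation right actually matters.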

In Other Words, It Depends on the Test

Based on this logic, not all regression tests will need to be run through manually the first time. Rather, this is only the case for especially important tests that have a more direct bearing on the user experience. These tests must be well-timed to work with the application as it progresses. Some tests, however, are black and white - either the thing is working, or it is not. These tests are not necessarily unimportant, but they are fairly straightforward, and do not have to be as thoughtfully integrated into the testing process.

Given this, the statement that all tests must be run manually at least once prior to automating regression testing is somewhat myopic. In many cases, automated regression testing must be carefully integrated, and this requires a greater degree of expertise and manual sorting. The idea is that if you get it right the first time, the suite will continue to perform the desired task at each stage of development. This helps ensure a faster time to market and a polished product that is thoughtfully vetted.

development, integration, quality assurance, testing

Published at DZone with permission of Kyle Nordeen. See the original article here.

Opinions expressed by DZone contributors are their own.
