Prioritizing Tests and Tasks in Software Testing
QA testers are constantly under pressure to deliver quality software and track bugs, so learn how to effectively prioritize to make the most of your time.
There's a lot of ground for software testers to cover on any given project, so it's understandable when even some of the most experienced quality assurance veterans feel paralysis at the outset of development. Add in the tight release deadlines that have become a hallmark of the industry, and you have a recipe for testers becoming bogged down in too much work with too little time. That's why it's never been more crucial for QA teams to properly prioritize tests and tasks.
When prioritizing software tests and QA tasks, it's important that team leaders look at which aspects of an application pose the greatest threat to its performance. Methods & Tools contributor Hans Schaefer suggested that software testers evaluate risk by looking at both the likelihood of a given component failing and the damage that such a failure would cause. Areas of the software that are critical to core functionality are good places to prioritize, especially if their failure would have very real, tangible ramifications for users.
"Such failures may deal with large financial losses or even damage to human life," Schaefer explained. "An example would be the gross uncoupling of all subscribers to the telephone network on a special date. Failures leading to loosing the license, i.e. authorities closing down the business, are part of this class. Serious legal consequences may also belong here."
Prioritizing with little time
When QA teams have little or no time to carry out testing processes, they may not be able to conduct the thorough analysis needed to comprehensively determine priority levels. Under these circumstances, TechTarget contributor Scott Barber suggested that testers first quickly draw up a short list of the different ways the software will be used. That list should also cover the parts of the software that are most critical to performance, subject to regulatory requirements, or most likely to harbor catastrophic defects.
Software testers should then prioritize this list by need and by the frequency with which each scenario appears. At that point, teams can look over their test cases and shelve the ones that are not absolutely essential to meeting the needs of the prioritized list. This process shouldn't take very long, and it will leave QA teams with a short to-do list that protects the core functionality and performance of the software under development.
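To make that quick triage concrete, the sketch below ranks a short list of usage scenarios by need and frequency, then shelves any test case that doesn't serve the must-run part of the list. The scenario names, frequency figures, and test-case mapping are hypothetical and not drawn from Barber's piece.

```python
# Hypothetical scenario records: how often users hit each one, and whether it is
# critical for performance, regulation, or prone to catastrophic failure.
scenarios = [
    {"name": "checkout", "frequency": 9, "critical": True},
    {"name": "login", "frequency": 10, "critical": True},
    {"name": "export PDF report", "frequency": 2, "critical": False},
    {"name": "tax calculation", "frequency": 5, "critical": True},  # regulatory
]

# Rank by need first (critical before non-critical), then by frequency of use.
priority_list = sorted(
    scenarios, key=lambda s: (s["critical"], s["frequency"]), reverse=True
)

# Shelve any existing test case that does not serve a must-run scenario.
test_cases = {
    "test_checkout_happy_path": "checkout",
    "test_login_lockout": "login",
    "test_pdf_margins": "export PDF report",
}
must_run = {s["name"] for s in priority_list if s["critical"]}
to_run = [t for t, scenario in test_cases.items() if scenario in must_run]
shelved = [t for t in test_cases if t not in to_run]
print("run now:", to_run)
print("shelve for later:", shelved)
```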
Some test management methods are better than others
Under circumstances where time is of the essence, QA leaders need to be cognizant of how they approach test management as well. Not every testing method is well suited to working under the gun, where the goal is to surface the most serious defects rather than the largest number of them. Schaefer argued that teams should be especially mindful of the testing strategies they leverage when attempting to maximize their output.
"The problem with most systematic test methods, like white box testing, or black box methods like equivalence partitioning, boundary value analysis or cause-effect graphing, is that they generate too many test cases, some of which are less important," Schaefer stated. "A way to lessen the test load is finding the most important functional areas and product properties."
At the end of the day, one of the most important things to keep in mind when prioritizing tests is to stay calm and approach things pragmatically. Speed is crucial, but so is accurately separating essential needs and tools from those that can be temporarily sidelined.
Published at DZone with permission of Sanjay Zalavadia, DZone MVB. See the original article here.