When Should You Automate?

A look at the various kinds of software testing and which of them are appropriate to automate.

By Kyle Nordeen · Dec. 17, 15 · Opinion

When it comes to software testing, automation offers its fair share of benefits. Automated tests not only streamline workloads and cut down on time-to-market, but they can also help teams uncover insights they might have missed during manual testing.

However, there are times when automated testing is not the ideal strategy. How can a team discern when to deploy test automation and when to carry out manual testing? Let's take a look at the top considerations and situations that come up in connection with automated software testing:

Repeating Tests

Having to test the same feature or element more than once is a common occurrence for QA teams, particularly after updates or changes to the original version have been made. ThoughtWorks Quality Analyst Priti Biyani noted that this process can create pain points for testing teams, especially when they have to set up a test environment and track data again and again.

This is an opportune time to utilize automated testing. If the same test is going to be carried out multiple times, automated testing not only cuts down on the time and effort required to set up and perform the test, but also helps the team easily compare results.

"Perhaps the simplest opportunity is when a manual testing activity has become tedious and repetitive," independent consultant and trainer Kevlin Henney told TechTarget. "When testing requires a methodical and repeated execution, that is better offered by machine than human. If a testing activity appears to have become deskilled. use a machine to its best capabilities rather than making a monkey of human testers."

Load Testing

Mike Kelly, a software development manager at a Fortune 100 company, told TechTarget that in certain situations, such as load testing, automation is the only efficient and effective way to test.

"Sometimes, the only way a test can be executed is via some sort of automation," Kelly said. "Examples include load testing and traversing large amounts of paths through an application. Some things just can't practically be done manually."

Performance Testing

Kelly's point of view can also be applied to performance testing, particularly tests that involve simulated user groups. When teams are tracking and observing the simulated activities of a large number of concurrent users, automation is the only way to carry out this activity effectively. Manual performance tests would require an almost insurmountable amount of work on the part of testers; automation simply makes more sense in cases like these.

However, there are also times when manual testing is the preferred method. Let's take a look at a few examples:

Exploratory Testing

These types of tests simply cannot be carried out by a machine because they require certain skills on the part of the tester, including experience, logical thinking, and creativity, Apica noted. During these tests, the tester may be dealing with a whole host of different issues, such as documentation that lacks detail, or they may be working under a tight deadline. Whatever the case, exploratory testing requires a skilled, human hand.

Usability Testing

Usability tests often encompass not only the actual functionality of the product but also how it contributes to the user experience. Automated usability tests will not be able to gauge the UX the way a human tester can.

"Here, human observation is the most important factor, so a manual approach is preferable," Apica noted.

Lack of Expertise

If a team is not well-versed in creating automated tests, it may be best for them to perform tests manually. This is particularly true because, if automated tests are not configured correctly, the team could miss important insights. In addition, the team could waste considerable time and resources while dealing with the learning curve.

"Automated testing doesn't make sense when a test team does not have the expertise to automate correctly," said Microsoft senior SDET lead John Overbaugh. "Spending days and weeks learning how to automate, making mistakes, writing brittle automation that will only work once, and coming out of the project without any usable tests is a huge waste of time."

If automated testing is required in this type of environment, the team should seek outside consulting and assistance to ensure they are not wasting time and that tests are set up correctly, supported by a robust test management strategy.

Published at DZone with permission of Kyle Nordeen. See the original article here.

Opinions expressed by DZone contributors are their own.
