Q: When Do We Prefer Manual Testing Over Automated Testing? A: Hmm....

To answer this, we need to question our situation and build a deeper model of our actual aims, options, and capabilities.

By Alan Richardson · Oct. 28, 16 · Opinion

I have a simple rule: “I might not automate if I have to create.” However, I have done the opposite: I have automated when others would not, and I have waded in manually when others would have automated. To answer this question, we need to question our situation and build a deeper model of our actual aims, options, and capabilities.

Q: When do we prefer manual testing over automated testing?

A: I have a problem providing a simple answer to this question…

And not just because “it depends,” but also because that’s not the model I use, so I don’t frame the question like that. I don’t think in terms of “manual testing” and “automated testing.”

I think in terms of “testing,” “interacting directly with the system,” “automating predefined tasks, actions, and paths,” “using tools,” etc.

Testing

If I’m testing, then I ask myself the following questions:
  • Do tools already exist to help?
  • Do I have abstractions that cover those areas?
  • Do I have to create tools?
  • Do libraries already exist?
I might automate heavily if I already have abstractions. My testing might involve lots of automating if tools already exist. I might not automate if I have to create. When and if I create, the testing has to wait.
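
To make the “abstractions” point concrete, here is a minimal sketch, assuming a hypothetical application with an HTTP login endpoint at /login on a local test instance; the class, endpoint, and credentials are invented for illustration, not taken from a real system. Once a layer like this exists, automating a new check costs a line or two of intent; without it, the testing has to wait while the layer is built.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical abstraction layer over the application under test.
public class AppApi {

    private final HttpClient client = HttpClient.newHttpClient();
    private final String baseUrl;

    public AppApi(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    // One reusable building block: post credentials, return the status code.
    public int login(String user, String pass) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/login"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"user\":\"" + user + "\",\"pass\":\"" + pass + "\"}"))
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString())
                     .statusCode();
    }

    public static void main(String[] args) throws Exception {
        // Assumed local test instance; adjust to your environment.
        AppApi api = new AppApi("http://localhost:8080");

        // With the abstraction in place, a new check is one line of intent.
        int status = api.login("test-user", "wrong-password");
        if (status != 401) {
            throw new AssertionError("bad credentials should be rejected, got " + status);
        }
    }
}
```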

Don’t Automate Under These Circumstances

So, what about scenarios where you might expect “don’t automate” as my default answer? The following are situations in which I wouldn't suggest automating:
  • It's something that we rarely need to do.
  • It's something that has to do with usability in terms of look and feel.
  • It has to do with experimental features.
  • It's something that's still under development and that changes frequently.
Many people conduct stress and performance testing rarely, yet they generally automate it. For usability and “look and feel,” tools exist to detect non-compliance with standards, and I might very well use those. So, I might automate portions of this type of testing early.
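
As one concrete example of a checkable “look and feel” standard: WCAG defines an objective contrast-ratio formula, so at least that slice of usability can be automated. The sketch below implements the published WCAG 2.x relative-luminance and contrast calculation; the sample colors are made-up examples, not from any real application.

```java
// WCAG 2.x contrast check: one slice of "look and feel" that is objective
// enough to automate. Formulas follow the WCAG specification.
public class ContrastCheck {

    // Relative luminance of an sRGB color, per WCAG 2.x.
    static double luminance(int rgb) {
        double[] channels = {(rgb >> 16 & 0xFF) / 255.0,
                             (rgb >> 8 & 0xFF) / 255.0,
                             (rgb & 0xFF) / 255.0};
        for (int i = 0; i < 3; i++) {
            double c = channels[i];
            channels[i] = c <= 0.03928 ? c / 12.92
                                       : Math.pow((c + 0.055) / 1.055, 2.4);
        }
        return 0.2126 * channels[0] + 0.7152 * channels[1] + 0.0722 * channels[2];
    }

    // Contrast ratio between two colors: (lighter + 0.05) / (darker + 0.05).
    static double contrastRatio(int foreground, int background) {
        double l1 = luminance(foreground);
        double l2 = luminance(background);
        return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
    }

    public static void main(String[] args) {
        // Dark grey text on white; passes the WCAG AA 4.5:1 threshold at ~4.54:1.
        double ratio = contrastRatio(0x767676, 0xFFFFFF);
        System.out.printf("contrast ratio: %.2f%n", ratio);
        if (ratio < 4.5) {
            throw new AssertionError("text fails WCAG AA contrast for normal-size text");
        }
    }
}
```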

Experimental features might fit into an existing process, so we might already have a lot of the scaffolding that we need to support minimal and speedy automating, in which case I might automate those.

I have automated features that were changing frequently because we were testing the same path through the application many, many times (every time it changed) and we often found problems (every single release). Testing it by interacting with it was wasted effort, so we didn’t interact with it until the automated execution could pass reliably. That said, the decision also depends on what changes in the implementation and how that change impacts the approach chosen to automate it.
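
A minimal sketch of that “same path, every release” idea, assuming JUnit 5 on the classpath; CheckoutJourney and its methods are hypothetical stand-ins for a driver of the application under test, not a real API. The journey is codified once and replayed on every change, and we only go back to interacting by hand once it passes reliably.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Runs on every commit, so the frequently changing path is re-checked on
// each release without anyone having to walk through it by hand.
class FrequentlyChangingPathTest {

    @Test
    void criticalJourneyStillWorks() {
        CheckoutJourney journey = new CheckoutJourney();
        journey.addItemToBasket("sku-123");
        journey.payWithCard("4111111111111111");
        assertEquals("CONFIRMED", journey.orderStatus());
    }

    // Hypothetical stand-in; a real suite would drive the application itself.
    static class CheckoutJourney {
        private String status = "EMPTY";
        void addItemToBasket(String sku) { status = "BASKET:" + sku; }
        void payWithCard(String cardNumber) { status = "CONFIRMED"; }
        String orderStatus() { return status; }
    }
}
```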

What’s the Real Question?

If I have to create stuff to help me automate as part of my process, then I’m likely to weigh the cost of automating against just getting on and interacting with the system.

Are the assumptions behind the scenarios true?
  • Scenario: It's something that we rarely need to do.
  • Assumption: We'd spend a lot of time automating something that we would rarely use.
Perhaps we rarely execute it because of the time and energy involved in setting up the environment and data rather than the tooling. Perhaps we need to automate more to allow us to test it more frequently.
  • Scenario: It's something that has to do with usability in terms of look and feel.
  • Assumption: This is subjective.
Perhaps standards exist to guide us, and if we ignore those and all the existing tooling, then we create something so subjective that no one else can use it.
  • Scenario: It has to do with experimental features.
  • Assumption: We write code to automate it that we throw away.
Perhaps the feature is so experimental that the easiest way to interact with it is by automating it. Otherwise, we have to spend a lot of time building the scaffolding required (a GUI) for a user to interact with it.
  • Scenario: It's something that's still under development and that changes frequently.
  • Assumption: Automating a scenario always takes longer than interacting with it, and maintenance in the face of change takes too long.
Perhaps the GUI changes infrequently and the change takes place in the business rules behind the GUI. So, really, we are dealing with a single path, but lots of data. Additionally, perhaps we can randomize the data and implement some generic model-based rules to check results.
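
To illustrate that “single path, lots of data” case, here is a sketch of randomizing the inputs and checking results against generic, model-based rules rather than hand-picked expected values. The pricing rule and its numbers are invented for the example.

```java
import java.util.Random;

// One code path, many data points: random inputs checked against invariants.
public class RandomizedDataCheck {

    // Hypothetical business rule under test: 10% off totals of 100.00 or more.
    static double totalPrice(double unitPrice, int quantity) {
        double total = unitPrice * quantity;
        return total >= 100.0 ? total * 0.9 : total;
    }

    public static void main(String[] args) {
        Random random = new Random(42); // fixed seed so failures are reproducible
        for (int i = 0; i < 10_000; i++) {
            double unitPrice = random.nextDouble() * 500.0;
            int quantity = 1 + random.nextInt(20);
            double total = totalPrice(unitPrice, quantity);

            // Generic model-based rules that must hold for *any* input:
            if (total < 0) {
                throw new AssertionError("total must never be negative");
            }
            if (total > unitPrice * quantity + 1e-9) {
                throw new AssertionError("a discount must never increase the price");
            }
        }
        System.out.println("10,000 randomized cases passed the model-based rules");
    }
}
```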

Scenarios Have More Nuances in the Real World

Very often, we discuss these questions as hypothetical scenarios. If we do, we need to drill into the scenario in more detail in order to come up with our answer. One of the main benefits of the “hypothetically…” exercise is the asking of questions and the fleshing out of a model to better explore the scenario.

Conclusion

I have automated when other people said I should not, and it saved time. I have interacted manually, using tools, when others said we should automate, and we found problems we would never have found had we taken the automation path so early. I have also engaged manually in stress and performance testing.

Sometimes, conducting a contrary experiment will provide you with examples that make it harder to answer these scenarios without extensive model building.

Published at DZone with permission of Alan Richardson, DZone MVB.
