Move Fast, Break Things: How to Test the Limits of Your Web App
These tips will clear up misconceptions about software testing and offer exploratory testing techniques for finding bugs in your website.
When going through an application, it makes sense to perform some of the actions you would think your users would perform. But it may also make sense to perform some of the actions you wouldn't expect them to perform.
Your users are unpredictable, and the actions that cause complications with your application aren't always what you would expect or even do yourself. Sometimes, you have to test the limits of your application in order to properly break it.
Before we get started, we should make one thing clear: software testers don't actually break things. Although the reference may be common and it's a simple way to understand that a certain action resulted in a functional failure, it's not an accurate way to think about testers in real life.
As James Bach puts it, "We don't break the software. We break illusions about the software."
The job of a software tester is to find where a product is broken. While something may work fine for the programmer who built it and uses it as intended, an application has countless places for bugs to hide and for users to find them.
What makes a good tester comes down to being able to report these bugs, help stakeholders identify and fix issues, and contribute to the release of a product that's of a higher quality than when it started out. Users don't always go down the "happy path," so it's important that testers consider these different scenarios; at the end of the day, finding a problem is a good thing.
So although we might refer to it as "breaking software," it's important to understand that it's not the testers who are breaking the software, it's the application that's broken.
Often in testing, we want to follow a user journey by performing the actions that get us to an end result. For example, if you fill out the fields in a "Contact Us" box, that information will be sent to the site owner so they can follow up. If the application is working correctly, this will give us a passing result.
This passing result is a good thing because it means that in an ideal scenario, the user fills in the information as they're supposed to. But it doesn't necessarily tell us much about the edge cases: what happens when a user doesn't use the application as intended or planned. What happens if a user leaves a field blank or enters an invalid email address?
When we think about trying to see where an application fails, we want to look at negative testing. Negative testing ensures that your application can handle invalid inputs or unexpected user behavior. This means that you're thinking about where a user can venture off path, rather than just confirming the application works with normal behavior.
Through negative testing, you can gain more insight into where bugs are hiding. And by knowing what's broken, you can build out your risk analysis to come up with a strategy for more comprehensive testing. How many of these edge cases are users likely to hit? How detrimental would it be to the application if they did?
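To make this concrete, here's a minimal sketch of negative testing against a "Contact Us" email field. The `validate_email` function and its rules are hypothetical stand-ins for whatever validation your application performs; the point is the shape of the test: a battery of off-path inputs, every one of which should be rejected.

```python
import re

# Hypothetical validator for a "Contact Us" email field -- the function name
# and rules are illustrative, not from any particular application.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_email(value):
    """Return True only for a plausibly well-formed email address."""
    if not isinstance(value, str):
        return False
    value = value.strip()
    if not value or len(value) > 254:
        return False
    return EMAIL_PATTERN.match(value) is not None

# Negative tests: every one of these off-path inputs should be rejected.
invalid_inputs = [
    "",                      # blank field
    "   ",                   # whitespace only
    "not-an-email",          # missing @ and domain
    "user@",                 # missing domain
    "@example.com",          # missing local part
    "a" * 300 + "@x.com",    # oversized input
    None,                    # wrong type entirely
]

for bad in invalid_inputs:
    assert not validate_email(bad), f"should have rejected: {bad!r}"

assert validate_email("user@example.com")  # the happy path still passes
print("all negative cases rejected")
```

Each rejected input here is a test that passes; an input that slips through is exactly the kind of hidden bug negative testing is meant to surface.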
A good place to start with negative testing may be thinking about different personas. Everyone uses an application differently, and by paying attention to the details that give different end users unique experiences, you can curate a more intelligent approach to testing for them.
However, even when thinking about personas, it can be difficult to truly put yourself in someone else's shoes and come up with those edge cases. Brainstorming a list of scenarios might be the best way to come up with new ideas for negative testing.
How to Test the Limits of Your Web App
Inspired by this post on Reddit that contains reliable strategies for breaking stuff (plus a few of our own favorite methods), we've rounded up a few good ways to test the limits of your application.
- Start performing an action on the page. Leave it open for a long time, then try to come back and finish out the action.
- Mess around with form fields. See what happens if you enter a bunch of random numbers, hold down a key continuously, or enter emojis and gifs.
- Put a fake address where it asks for email.
- Try any illegal input you can think of.
- Enter the least likely inputs or large data strings.
- Enter invalid data into promotional or payment fields.
- Leave fields completely blank.
- Cancel and remove items from your cart.
- Add as many of one item as you can to your cart.
- Reload the page.
- Close out of the page. Reopen the page.
- Shut down your computer. Reopen the page.
- Automate inputs from the Big List of Naughty Strings.
- Hit a submit button as many times as you can before the page changes.
- Try and get through a test case as fast as possible.
- Try and get through after navigating away from the page multiple times or opening up new tabs.
- Test after a few beers.
- Don't follow directions; do the opposite.
- Take it to mobile and see what happens when you have multiple apps open or are low on battery.
- Turn it horizontal.
- Have someone call or text you while using the application.
- Simulate 10x the number of users that regularly go to your site.
- Go backward. Then forwards. Then back again. Resume.
- See what your website looks like in Internet Explorer.
- Try using your application with different plugins, integrations, and third-party apps.
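The "naughty strings" idea above can be automated with a short script. This is a sketch under stated assumptions: the handful of strings below stand in for the full big-list-of-naughty-strings file, and `submit_contact_form` is a hypothetical stub for whatever actually drives your form (in a real suite, you'd use a browser-automation tool such as Selenium or Playwright).

```python
# A sketch of feeding "naughty" inputs through a form handler. These few
# strings stand in for the full big-list-of-naughty-strings file.
NAUGHTY_STRINGS = [
    "' OR '1'='1",                          # SQL-injection shape
    "<script>alert(1)</script>",            # HTML/JS injection shape
    "undefined",                            # strings apps mistake for values
    "NaN",
    "\u202eright-to-left override",         # Unicode direction control
    "\U0001F4A9" * 50,                      # emoji, repeated
    "0" * 10000,                            # very long input
]

def submit_contact_form(name):
    """Hypothetical form handler: returns (status, message).
    Replace this with code that drives the real page."""
    if len(name) > 500:
        return (400, "name too long")
    # A robust handler should never crash or echo raw input back.
    return (200, "ok")

failures = []
for s in NAUGHTY_STRINGS:
    try:
        status, _ = submit_contact_form(s)
        if status not in (200, 400, 422):   # unexpected status = a finding
            failures.append((s, status))
    except Exception as exc:                # a crash is always a bug
        failures.append((s, repr(exc)))

print(f"{len(failures)} suspicious responses out of {len(NAUGHTY_STRINGS)}")
```

The design choice here is that any exception or unexpected status code counts as a finding: the handler is allowed to reject bad input, but it should never crash on it.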
While the goal of developing and testing is to release high-quality, functional software, sometimes you have to find the places it's broken first.
If you're looking to try out negative testing, this is a good place to start. But the true value (and the fun) of negative testing comes from using your curious mindset and creativity to test the limits, especially during exploratory testing.
Try coming up with a few of your own ways to test the limits of your application, and leave your favorite method in the comments.
This article was first published on the CrossBrowserTesting blog.
Published at DZone with permission of Alex McPeak, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.