
Leveraging The Power Of Testers


If you’re a developer, you probably don’t appreciate the power of testers. In fact, you probably think negatively about some aspect of your testing team. They think of bugs that you haven’t considered. They try doing things you never designed your product to handle. And, they file bugs when they encounter unexpected behavior. What a pain!

If you’re a tester, you may not get respect from your development teammates, but you don’t quite know why. After all, you’re good at your job. You put the product through its paces and do your best to expose issues in the product’s behavior. Your job, in fact, is to do this before someone outside the company tries the same thing.

At its core, the difference between developers and testers comes down to two factors: starting point and mindset.

Developer Power: Developers Make

A developer starts at nothing and creates something. Whether the developer creates an entirely new product from scratch, or simply adds a new feature, the end result didn’t exist previously. The development mindset involves filling the void where nothing existed previously.

A good developer thinks of all the ways a customer can use a product. In some cases, developers work to the product spec created by a product manager or product architect. In other cases, developers consider behaviors that a user might try and determine how to handle those behaviors.

Because developers venture into the unknown, they look to product experts to guide their development.  Product managers often must specify the behavior of the product explicitly so developers can code effectively. When developers ask “What if…” questions of the product experts, (e.g. “What if the user does…”), they delegate code behavior to others who know what they intend to make.

Tester Power: Testers Break

In contrast to developers, who start with nothing, testers start with “completed” code. In their mindset, testers see code in an unfinished state until proven ready. Testers work to determine how a future user might get that code to misbehave. In other words, the developer begins with blank space and creates a valuable behavior, but a tester begins with supposedly valuable behavior and finds all the ways a user or another actor could diminish that value.

Do you know those commercials where Mercedes demonstrates the safety of its cars by crash-testing them? Testers love those commercials.

Good testers assume that users might be bad actors and attempt to do something that the developer might not expect. Much of the time, the product team had not considered that behavior, so they wrote no specification for it.

As a recovering product manager, I tried to draw the distinction between defining the behavior and requirements for the customer benefit versus the product design. I wasn’t the product designer, I thought; however, while I worked with a bunch of competent developers, none of them were designers, either.

Alas, I got the questions about usability, security, and other non-functional specifications. In that situation, I did the only thing useful I could think of – rely on the power of testers on the QA team. And, fortunately, they had prior test experience so they could identify potential failure modes and help describe trade-offs in design.

School Of Hard Knocks

Nobody really thinks about what it takes to become a test engineer. Generally, it isn’t a skill you can acquire through study alone. When do you learn to spot failure? How can you see quicksand where other engineers see a flat path? You have to know how many possible failure modes actually exist, and how many of those your design team has considered.

In one of my favorite movies from the 1980s, Body Heat (Rated “R” for a reason), the protagonist, a public defender, gets asked by his girlfriend to help kill her husband. The public defender had previously defended an arsonist, who is now out of jail. The attorney figures a fire would help cover up the murder, so he meets with the former arsonist to ask what it would take to start a fire. To which the arsonist says:

“Are you thinking about committing a crime, counselor? If so, ‘you have to realize there are probably 50 ways you could [make a mistake and get caught], and you’re a genius if you can think of 25. And you ain’t no [bleeping] genius.’

“Do you know who told me that?” the arsonist continues. “You did, counselor.”

It’s the same point for product designers. You often must experience design failures before you can spot potential future design failures. Often, however, the failure experience accumulates with test engineers.

If you want to hear two engineers discussing the power of testers in some detail, listen to Angie Jones of Applitools talk with Dr. Nicole Forsgren of Google in their webinar: Test Automation as a Key Enabler for High Performing Teams. Both of them explain how testers bring unique perspectives to their teams.

The Heartbleed OpenSSL Example

Do you remember the “Heartbleed” bug in OpenSSL? One of my favorite xkcd cartoons gives a great overview. To check that a connection was working properly, the initiator sent a heartbeat request containing a payload of arbitrary content, along with a number that was supposed to state the payload’s length.

The receiver would echo back that many bytes. To exploit the bug, an attacker sent a number that greatly exceeded the actual payload length – and the receiver obligingly sent back adjacent memory from its SSL stack to the exploiter.
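The flaw can be sketched in a few lines. This is a simplified, hypothetical model – the function names and buffer layout are illustrative, not OpenSSL’s actual code – but it captures the core mistake: trusting the sender’s claimed length.

```python
def vulnerable_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # Imagine the received payload sits in memory next to sensitive data.
    memory = payload + b"secret-session-key"
    # BUG: the reply trusts the sender's claimed length instead of the
    # actual payload size, so adjacent memory can leak to the sender.
    return memory[:claimed_len]

def fixed_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # FIX: never echo back more bytes than were actually received.
    if claimed_len > len(payload):
        raise ValueError("claimed length exceeds actual payload")
    return payload[:claimed_len]

# An honest request echoes the payload back:
vulnerable_heartbeat(b"bird", 4)   # b'bird'
# A malicious request leaks whatever sits beside it in memory:
vulnerable_heartbeat(b"bird", 40)  # b'bird' plus the adjacent secret
```

A developer testing only honest requests would never see the bug; a tester who asks “what if the length field lies?” finds it immediately.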

How would you know to look for that kind of bug?  It existed in the wild for years before the exploit finally got exposed.

The power of testers comes from imagining scenarios like this in a design and considering the risks to future designs.

Failure Modes

What failure modes do you consider for your testing? Designers design for the failure modes they can think of or have experienced. Testers test for failure modes they have experienced or understand. Generally, you defer to the most experienced person on the team.

Why experience? Because tools alone cannot help you. For example, you can measure execution coverage in your code. But, coverage tools won’t always tell you when you have potential bugs in your code. For example, you can execute 100% of your code and exercise all the expected input cases, but an unexpected input causes an unanticipated result.
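As a hypothetical illustration, here is a one-line Python function that reaches 100% line coverage under its only test, yet still harbors a bug for an input nobody anticipated:

```python
def average(values):
    # One line of logic; any single passing test yields 100% line coverage.
    return sum(values) / len(values)

# This test executes every line, so a coverage tool reports 100%:
assert average([2, 4, 6]) == 4.0

# ...yet an unanticipated input still fails at runtime:
# average([])  ->  ZeroDivisionError
```

The coverage report says the code is fully exercised; only someone imagining the empty-list case finds the failure.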

You can also run functional tests that become out of date but still pass. A number of Applitools customers have experienced what happens when they don’t validate the visual behavior of their functional application tests after an app upgrade. New code comes out that can change the on-screen app rendering, but the underlying HTML and identifiers don’t reflect any change in the code. Tests pass, but the screen gets filled with visual errors. And woe to you if you release the app with those errors in the wild.

So, as failure modes continue to evolve, you need the combination of experience and tools to help you stay ahead.

Web Security Exploits

Web security exploits form their own class of design bugs. The Open Web Application Security Project (OWASP) keeps track of the top exploits. The most recently published list came out in 2017. These include:

  • A1:2017-Injection
  • A2:2017-Broken Authentication
  • A3:2017-Sensitive Data Exposure
  • A4:2017-XML External Entities (XXE)
  • A5:2017-Broken Access Control
  • A6:2017-Security Misconfiguration
  • A7:2017-Cross-Site Scripting (XSS)
  • A8:2017-Insecure Deserialization
  • A9:2017-Using Components with Known Vulnerabilities
  • A10:2017-Insufficient Logging & Monitoring
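To make the first item on that list concrete, here is a minimal, hypothetical sketch of an injection bug and its parameterized-query fix, using Python’s built-in sqlite3 module (the schema and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")
conn.execute("INSERT INTO users VALUES ('bob', 1)")

def find_user_unsafe(name: str):
    # BUG (A1:2017-Injection): user input is concatenated into the SQL,
    # so the input can rewrite the query itself.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # FIX: a parameterized query treats input as data, never as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

# A tester's classic malicious input:
find_user_unsafe("' OR '1'='1")  # returns every row in the table
find_user_safe("' OR '1'='1")    # returns nothing
```

A developer who only tries names like “alice” sees both functions behave identically; the tester who tries `' OR '1'='1` exposes the difference.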

OWASP even offers a testing guide for exploits:

https://www.owasp.org/index.php/OWASP_Testing_Project

Again, these are the kinds of issues one learns from experience.

Continuous Deployment Teams

One place where you can find developers and testers collaborating successfully is in strong agile development teams. On agile teams, a “shift-left” approach places quality engineers in the midst of development teams or pods. 

The quality engineer, as a coach, distributes the power of testers to help the developers design usable, testable code. In this type of organization, the whole team takes responsibility for testing code, as they know each build can, in fact, be released.

Elisabeth Hocke, an agile testing expert and principal agile tester at FlixBus, teaches a great course about The Whole Team Approach to Continuous Testing on Test Automation University. In this course, she explains how continuous delivery demands a team mindset for testability.

Priyanka Halder, head of quality at GoodRx, gives a great webinar about High-Performance Testing – Acing Automation in Hyper-Growth Environments. She talks about how her team embeds into the development team as a practical application of continuous delivery.

This may not be your organization. Just know that there are organizations where developers value the contributions of the test team, and quality engineers help coach the developers to build testable code.

Seek Wisdom In Experience

If you are thinking to yourself, “I have no idea how to find these kinds of issues in my designs,” you need the power of a tester with experience.

Realistically, quality engineers face a myriad of challenges that developers cannot always help solve. The OpenSSL Heartbleed exploit existed for years before someone understood the problem. Because developers can blind themselves to failure modes, quality engineers have to spend time imagining a horrible future.

New approaches help bridge the gap between development and test. New frameworks help developers build testability hooks into each of their applications. Standardized approaches help ensure that all changes can be anticipated.

At Applitools, we address a specific blind spot. Web apps include built-in complexity that you might ignore because standards exist: HTML, CSS, and JavaScript are standardized, so developers expect that they only need to code once for all platforms.

Realistically, though, today’s testers know that rendering engines behave differently across platforms, viewport sizes, browsers, and operating systems on both mobile devices and computers.  In the end, pixel and DOM comparisons lead to so much extra work that testers limit their automation to validating expected behaviors – blinding themselves to unexpected behaviors.

Conclusion

Whether you primarily develop or primarily test, you will always feel a tension between your mindset and the mindset of the person on the other side. After all, they seem to be slowing you down or, alternatively, giving you more work to finish.  And, most people have difficulty seeing how more work today can eliminate rework tomorrow.

But, as the continuous delivery examples show, successful teams can emerge when these opposite views collaborate. Wherever you find yourself on this continuum, know that collaboration is likely in your future.

Topics:
collaboration, continuous deployment, developers, performance, test automation, test code, testers, testing

Opinions expressed by DZone contributors are their own.
