
How Visual Testing Is Transforming the Way Modern Teams Test Software


Visual testing presents an alternative to both manual UI and regression testing to alert developers to changes in a UI.



Visual testing is the automated process of detecting and reviewing visual UI changes. Sometimes called visual regression testing or UI testing, it’s all about what your users actually see and interact with.

The visuals and UI of an application are critical to how users experience software, yet many teams still rely on slow, error-prone manual testing to verify them. The confidence those manual processes provide is minimal at best, and they rarely cover the application comprehensively.

Teams are using visual testing to fill the software confidence gap left by functional testing and to automate manual QA. While the goals are the same — achieving full launch confidence — visual testing approaches the problem in a completely different way, giving teams confidence that their products look exactly as they should, and empowering them to move quickly.

Visual Testing vs. Functional Testing

Over the past two decades, the automation testing space has completely exploded and is expected to be worth over $19 billion by 2023. There are tons of testing tools and development methodologies designed to ensure the quality of software at scale: unit tests, integration tests, end-to-end tests, functional tests, and so on.

Functional testing provides a level of confidence that has been crucial to how most developers ensure quality in their code, but it certainly isn’t a silver bullet. None of our current functional testing practices address what users actually see and interact with.

How can we test something like this with an automated test script?


In this example, the button text disappeared, or maybe it somehow became the same color as the button. There are several possible CSS failures and other explanations for this kind of bug.

Sometimes, developers will try to shoehorn visual regression testing into functional tests:


The problem here is that your test suite is probably still going to pass — the button still might technically work — it’s just visually wrong.
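A minimal sketch of what that shoehorning looks like in practice. The button state below is a hypothetical stand-in for what a browser driver would return, not real driver output:

```python
# Hypothetical element state, as a functional test might read it from a
# browser driver. The "regression" here: text color equals background color.
button = {
    "enabled": True,
    "text": "Sign Up",
    "color": "#ffffff",             # text color
    "background_color": "#ffffff",  # bug: identical to the text color
}

def test_signup_button():
    assert button["enabled"]            # the button still works...
    assert button["text"] == "Sign Up"  # ...and still has its label...
    # ...so the suite stays green, even though the label is invisible.

test_signup_button()
```

Every functional assertion passes, yet users see a blank white button.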

Test automation is all about comparing actual outcomes with predicted outcomes, but in the example above, it’s hard to assert the purely visual predicted outcome with code. What are we supposed to assert? That a certain CSS class is applied? Or maybe a computed style exists on the button, or that the text is a particular color?


Even if we add that assertion, we’re still not actually testing anything visually, and there are so many things that could make this test pass while resulting in a visual regression. The class attributes could change, another overriding class could also be applied, etc.
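To make the fragility concrete, here is a toy model of the cascade (hypothetical, not a real CSS engine) showing how a class-based assertion can pass while the rendered color silently changes:

```python
# Toy cascade: later rules win, as in CSS. Both selectors and colors
# here are made up for illustration.
stylesheets = [
    {".btn-primary": {"color": "#ffffff"}},  # the rule our test assumes
    {".btn-primary": {"color": "#fefefe"}},  # a later, overriding rule
]

def computed_color(selector):
    color = None
    for sheet in stylesheets:        # later stylesheets override earlier ones
        if selector in sheet:
            color = sheet[selector]["color"]
    return color

applied_classes = ["btn", "btn-primary"]

# The fragile assertion passes...
assert "btn-primary" in applied_classes
# ...but what users actually see comes from the overriding rule.
assert computed_color(".btn-primary") == "#fefefe"
```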

Writing regression tests to account for all those potential changes is a slippery slope. We’ve all seen it before — teams become so afraid of regressions that they build an overly defensive testing culture, where every flow is considered critical and fragile assertions are piled on top of other fragile assertions.

You know this has happened to you when you’ve written assertions to make sure something is not on the page — a common pattern in real-world apps trying to unit test visual features.
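A hedged sketch of that negative-assertion pattern. The markup below is a hypothetical stand-in string, not output from a real app:

```python
# Hypothetical page markup a test might inspect.
page_html = '<div class="profile"><h1>Welcome back, Ada</h1></div>'

# Brittle negative assertions: they pass today, but they pin the test to
# markup details and say nothing about what the page actually looks like.
assert "error-banner" not in page_html
assert "display: none" not in page_html
```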


Soon enough, these brittle and unmaintainable tests get copied and pasted throughout your codebase, becoming detrimental to the development of your product. Imagine how hard it becomes to refactor code or redesign a layout.

Visual testing tackles this problem at its core. Visual testing doesn’t test the abstractions underneath, but the pixels themselves.

Visual Diffs 101

Visual testing works by generating, analyzing, and comparing browser snapshots for pixels that have changed. At the core of those comparisons is something we call visual diffs.

Visual diffs (sometimes called perceptual diffs, pdiffs, CSS diffs, or UI diffs) are how you find out exactly which pixels have changed.

At the most basic level, visual diffs are the computational difference between two images. They look like this:

The red pixels in the last image highlight what’s changed between the previous two.
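The comparison itself can be sketched in a few lines of Python. To keep the example self-contained, images are modeled as rows of (R, G, B) tuples rather than real screenshots:

```python
RED = (255, 0, 0)
WHITE = (255, 255, 255)

def visual_diff(baseline, candidate):
    """Return a diff image (red where pixels changed, white elsewhere)
    plus the count of changed pixels."""
    diff, changed = [], 0
    for row_a, row_b in zip(baseline, candidate):
        diff_row = []
        for px_a, px_b in zip(row_a, row_b):
            if px_a != px_b:
                diff_row.append(RED)
                changed += 1
            else:
                diff_row.append(WHITE)
        diff.append(diff_row)
    return diff, changed

# Two tiny 2x2 "screenshots" differing in a single pixel.
baseline  = [[(0, 0, 0), (0, 0, 0)], [(0, 0, 0), (0, 0, 0)]]
candidate = [[(0, 0, 0), (9, 9, 9)], [(0, 0, 0), (0, 0, 0)]]
diff, changed = visual_diff(baseline, candidate)
```

Production tools add perceptual thresholds, anti-aliasing tolerance, and snapshot management on top, but pixel-by-pixel comparison is the core idea.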

Without visual testing, the only way teams catch visual changes that functional tests can’t is through manual QA.

Visual Testing vs. Manual QA

Whether you’re doing developer-level QA or have a dedicated QA team, you know the process is slow and expensive. It goes something like this: manually load up the web pages or app, run through checklists of what each page or variation should look like, and report back any bugs. Usually, testers are hunting for functional regressions, but they can also be tasked with noting visual bugs (sometimes without any baseline for comparison besides their own memory!).

As applications and websites grow more complex, and the variety of end-user devices multiplies, manual QA slows teams down. To offset that time loss, organizations build out QA teams that become incredibly costly.

At the end of the day, relying on human memory to remember exactly what each page should look like is an extremely leaky bucket for bugs.

Can you spot the difference between these?


The human eye has a hard time catching all the differences, but computers have no problem.

Did you see that the color of the arrow had changed in addition to the logo and icons?

Through image comparisons and automatic visual diff detection, visual testing does what functional testing can’t, and catches what humans won’t.

So, Why Should You Care?

By automating the process of visual review at scale, and by letting developers see exactly what they’re changing application-wide, visual testing is transforming the way teams quickly build and test products.

Both functional and manual testing have their place in software development, but using them for visual testing is fraught with fragile code and broken workflows. Neither is tailored to give confidence in how the visual side of applications and design systems changes over time.

Visual testing is a new wave in software development that specifically tackles this problem. The goal of visual testing is to help product teams have true confidence in what users see and interact with.

At Percy, we’re excited to be at the forefront of this movement, helping teams deliver products faster and with a level of confidence not otherwise possible with functional or manual testing.

To see how we approach this solution, check out our Getting Started documentation, and sign up for a Percy account — it’s free up to 5,000 UI screenshots each month!


Opinions expressed by DZone contributors are their own.
