End-to-End Codeless UI Testing of AjaxSwing With Screenster

It's not always easy to find a tool that fits your needs. Find out how one team of web devs built their own test automation tool when they ran into this problem.

Have you ever dealt with regression testing of a UI-centric project? If yes, you know what a terrible pain in the... ahem, neck it can be. All the more so when it comes to CSS testing for a single-page application or an interaction-heavy website built with React or Angular.

So how exactly do you regression-test a complex UI project? Focus on happy paths and hope for the best? Maybe run a screenshot comparison module like PhantomCSS? In some cases you need a dedicated solution, not just an add-on to a general-purpose tool. In this post, we’re going to talk about one such case.
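
To make the idea concrete, here's roughly what screenshot comparison boils down to. This is not PhantomCSS itself, just a minimal sketch built on plain Selenium and Pillow, with a placeholder URL and file names:

    import os
    from PIL import Image, ImageChops
    from selenium import webdriver

    # Capture the current state of the page (the URL is a placeholder).
    driver = webdriver.Chrome()
    driver.get("https://example.com")
    driver.save_screenshot("home_current.png")
    driver.quit()

    if not os.path.exists("home_baseline.png"):
        # First run: the current screenshot becomes the baseline.
        os.rename("home_current.png", "home_baseline.png")
    else:
        # Any non-zero pixel in the diff means the page has changed visually.
        diff = ImageChops.difference(
            Image.open("home_baseline.png").convert("RGB"),
            Image.open("home_current.png").convert("RGB"),
        )
        if diff.getbbox():
            diff.save("home_diff.png")
            raise AssertionError("Visual regression detected on the home page")

Run it once to record a baseline, then again after every change, and you have the core of a screenshot-based regression check.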

The Case of AjaxSwing

UI regression testing has been a major challenge for the team working on our product AjaxSwing. In fact, visual regressions had all but become our nemesis, and we had to go out of our way to overcome them. But let's start from the very beginning…

The thing is, AjaxSwing is all about web interfaces. This tool is, in essence, a web deployment platform that generates web UIs for desktop applications. The UI is different for every new app, and because we're talking about desktop applications, it's always a complex one.

Now imagine what it takes to ensure sufficient test coverage for a product of this sort. Layout issues, overlapping texts, dynamic UI regions — if I had a day, I couldn’t list all the little problems that needed attention.

Whenever someone changed something in the CSS, we worried that something had broken, but we rarely knew where, or whether it had broken at all. Our QA team felt like engineering troops in a minefield, and using Selenium was like disarming mines with a stick. Obviously, we had to find a better solution.

Our Quest for the Holy Grail of UI Testing Automation

Our first bet was finding a decent alternative to Selenium. We played around with Phantom, higher-level frameworks, and screenshot comparison tools, but we quickly realized these tools didn't quite work for us. None of them could give us solid CSS verification. Besides, writing and maintaining hundreds of hand-coded tests didn't seem like a productive process.
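
To give a sense of what hand-coded CSS verification looks like, here's the kind of assertion you end up writing with plain Selenium. The URL, selector, and expected values are placeholders for a real app under test:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com")  # placeholder URL

    # Hand-written checks against computed CSS properties.
    button = driver.find_element(By.CSS_SELECTOR, ".primary-button")  # placeholder selector
    assert button.value_of_css_property("background-color") == "rgba(0, 123, 255, 1)"
    assert button.value_of_css_property("font-size") == "16px"
    assert button.value_of_css_property("display") == "inline-block"

    driver.quit()

Multiply this by every element on every generated screen, and you can see why hundreds of such tests quickly become unmanageable.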

Long story short, we started shopping for an IDE.

Much like WebdriverCSS or PhantomCSS, record-playback IDEs (e.g. the ones in LeanFT and Ranorex) do a decent job at catching functional bugs. They record sequences of user actions, and they replay these sequences during regression testing. Unlike WebdriverCSS or PhantomCSS, creating UI tests with record-playback IDEs doesn’t require coding.
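
In essence, a record-playback tool captures user actions as data and replays that data later. A bare-bones sketch of the idea, using a hypothetical step format rather than any particular IDE's, might look like this:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Hypothetical recorded steps; a real record-playback IDE captures these for you.
    recorded_steps = [
        {"action": "open", "target": "https://example.com/login"},  # placeholder URL
        {"action": "type", "selector": "#username", "value": "demo"},
        {"action": "type", "selector": "#password", "value": "secret"},
        {"action": "click", "selector": "button[type=submit]"},
    ]

    def replay(driver, steps):
        """Replay recorded user actions against a live browser."""
        for step in steps:
            if step["action"] == "open":
                driver.get(step["target"])
            elif step["action"] == "type":
                driver.find_element(By.CSS_SELECTOR, step["selector"]).send_keys(step["value"])
            elif step["action"] == "click":
                driver.find_element(By.CSS_SELECTOR, step["selector"]).click()

    driver = webdriver.Chrome()
    replay(driver, recorded_steps)
    driver.quit()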

Okay, I know what you're probably asking yourself at this point: what's so bad about coding? As a matter of fact, nothing. Coding is fun, and most QAs are comfortable with hand-coded tests. On the flip side, not having to code saves you weeks. Besides, everyone would win if the more experienced programmers coded features, not UI tests. So while hand-coding your tests is great in general, it's not that great in terms of ROI.

Going back to our quest for the Holy Grail, IDEs did look promising, but only at first glance.

A major dealbreaker for us was that with IDEs, we would still end up maintaining test code generated by the tool. Needless to say, dealing with auto-generated code meant even more maintenance pain. Aside from this, we never felt that simplistic screenshot comparison was enough for end-to-end UI testing.

Besides, none of these solutions did a great job at visual verification because they didn't support dynamic content, moving content, and the like. Expecting the UI to remain unchanged down to the last pixel is a recipe for constantly breaking tests.
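
The usual workaround is to tolerate a small amount of difference instead of demanding pixel-perfect equality, something along these lines (the 0.1% threshold is an arbitrary example). It softens the noise problem, but it still does nothing for content that legitimately moves or changes:

    from PIL import Image, ImageChops

    def diff_ratio(baseline_path, current_path):
        """Share of pixels that differ between two same-sized screenshots."""
        baseline = Image.open(baseline_path).convert("RGB")
        current = Image.open(current_path).convert("RGB")
        diff = ImageChops.difference(baseline, current)
        changed = sum(1 for pixel in diff.getdata() if pixel != (0, 0, 0))
        return changed / (diff.width * diff.height)

    # Tolerate minor rendering noise, fail on anything bigger.
    assert diff_ratio("home_baseline.png", "home_current.png") < 0.001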

The Unexpected Virtue of Finding Yourself at a Dead End

So, basically, we needed a reliable IDE without the major drawbacks of IDEs. We were looking for a fast, UI-driven solution that wouldn't depend on programming for test creation and maintenance. Finally, this solution would need to provide intelligent screenshot comparison that doesn't produce false positives every time a pixel changes.

Sadly, a solution of this sort simply didn't exist at the time. We found ourselves at a dead end, yet we felt optimistic about it. After all, nothing prevented us from building the solution ourselves.

Meet Screenster, Our Very Own Visual Regression Testing Tool

Fast forward three years — and a hundred gallons of coffee — and the project created for internal needs seemed good enough for a public release. At that point, we had about 50 visual tests running in our CI, and we were adding new ones regularly.

The platform proved itself capable of streamlining the UI testing process for AjaxSwing, which meant it could easily handle Ajax-based web apps with automatically generated IDs. Obviously, UI testing of a regular website was a piece of cake with Screenster.

As far as functionality is concerned, Screenster enables QAs to run visual regression testing and CSS testing. And it does so in a simpler way than any other tool:

  • Unlike PhantomCSS and WebDriverCSS, it doesn’t depend on your QA’s coding skills, and it doesn’t require fine-tuning or tinkering with separate modules during setup.

  • Unlike old-school IDEs, it's fully web-based, and it works with a shared server running in the cloud or on premises. Again, zero installation and setup pain as compared to enterprise IDEs.

  • Unlike other modern web-based IDEs, it doesn’t need browser plugins or extensions of any sort. Unlike some cloud-based IDEs, it doesn’t make you do verifications manually.

But that’s not where the differences end. Here are the major features that help Screenster stand out:

  • Codeless Visual Testing. The platform caters to manual QA specialists and non-technical users. Being a record-playback-verification solution, Screenster records real-life interactions with the UI and stores the screenshots of UI states. These screenshots become the Visual Baseline used during further regression test runs.

  • Smart Comparison. When comparing UI screenshots, the platform detects layout shifts, even if they come down to a single pixel. At the same time, it ignores insignificant rendering differences such as anti-aliasing artifacts.

  • DOM Comparison. In addition to capturing screenshots of web pages and UI states, the platform analyzes their DOM structure. It builds a complete tree of DOM parents for every element, effectively eliminating the need for locators (a simplified sketch of this idea follows this list).
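
To illustrate the last point, here's a simplified version of the DOM-parents idea. This is my own sketch, not Screenster's actual implementation: instead of a hand-written locator, an element is identified by the chain of its ancestors.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def ancestor_path(driver, element):
        """Return the chain of DOM ancestors for an element, e.g. ['html', 'body', 'div', 'p', 'a']."""
        return driver.execute_script(
            """
            const path = [];
            let node = arguments[0];
            while (node && node.nodeType === Node.ELEMENT_NODE) {
                // A real implementation would also record attributes, sibling
                // positions, and so on; this sketch keeps only the tag names.
                path.unshift(node.tagName.toLowerCase());
                node = node.parentElement;
            }
            return path;
            """,
            element,
        )

    driver = webdriver.Chrome()
    driver.get("https://example.com")  # placeholder URL
    link = driver.find_element(By.TAG_NAME, "a")
    print(ancestor_path(driver, link))  # e.g. ['html', 'body', 'div', 'p', 'a']
    driver.quit()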
