
A/B Testing: Validating Multiple Variations


Visual testing makes it easy to validate multiple A/B test variations at once.


[Image: Variation]

Can you spot the difference? Is there one?

When you have multiple variations of your app, how do you automate the process to validate each variation?

A/B testing is a technique used to compare multiple experimental variations of the same application to determine which one is more effective with users. You typically run A/B tests to get statistically valid measures of effectiveness. But do you know why one version is better than the other? It could be that one contains a defect.

You may also enjoy: A/B Testing and Web Performance

Let's say we have two variations, Variation A and Variation B, and Variation B did much better than Variation A. We'd assume that's because our users really liked Variation B.

[Image: A/B split testing]

But what if Variation A had a serious bug that prevented many users from converting?

The problem is that many teams don't automate tests to validate multiple variations because it's "throwaway" code. You're not entirely sure which variation you'll get each time the test runs.

And if you did write test automation, you may need a bunch of conditional logic in your test code to handle both variations.
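
To see why that gets painful, here's a rough sketch of what that conditional test code might look like with plain Selenium and TestNG. The marker element and the assertions are hypothetical, invented purely for illustration:

package variations;

import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class ConditionalVariationTests {

    private WebDriver driver;

    @BeforeClass
    public void launch() {
        WebDriverManager.chromedriver().setup();
        driver = new ChromeDriver();
        driver.get("https://abtestautomation.com/");
    }

    @Test
    public void testWhicheverVariationAppears() {
        // Detect which variation was served; ".variation-b" is a made-up marker element
        boolean isVariationB =
                !driver.findElements(By.cssSelector(".variation-b")).isEmpty();

        if (isVariationB) {
            // Assertions specific to Variation B's layout (selector invented)
            Assert.assertTrue(
                    driver.findElement(By.cssSelector(".signup-banner")).isDisplayed());
        } else {
            // A separate set of assertions for Variation A (selector invented)
            Assert.assertTrue(
                    driver.findElement(By.cssSelector(".signup-sidebar")).isDisplayed());
        }
        // Every visible difference between the variations needs its own branch,
        // and the branches must be kept in sync as the designs evolve.
    }

    @AfterClass
    public void close() {
        driver.quit();
    }
}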

What if instead of writing and maintaining all of this code, you used visual testing instead? Would that make things easier?

Yes, it certainly would! You could write a single test, and instead of coding all of the differences between the two variations, you could simply do the visual check and save baseline images of both variations. That way, if either variation comes up and there are no bugs, the test will pass. Visual testing simplifies the task of validating multiple variations of your application.

[Image: visual testing]

Let's try this on a real site.

Here's a website that has two variations.

[Image: Two variations]

There are differences in color as well as structure. If we wanted to automate this using visual testing, we could do so and cover both variations. Let's look at the code.

package variations;

import com.applitools.eyes.RectangleSize;
import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.selenium.StitchMode;
import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class ABVariationTests {

    private WebDriver driver;
    // Eyes reads the API key from the APPLITOOLS_API_KEY environment
    // variable; alternatively, call eyes.setApiKey(...) before open()
    private Eyes eyes = new Eyes();

    @BeforeClass
    public void launch(){
        WebDriverManager.chromedriver().setup();
        driver = new ChromeDriver();
        driver.get("https://abtestautomation.com/");
    }

    @Test
    public void testABVariation(){
        // Start the visual test with an app name, test name, and viewport size
        eyes.open(driver, "A/B Company", "a/b test", new RectangleSize(1200, 890));
        // Capture the entire page, not just the visible viewport
        eyes.setForceFullPageScreenshot(true);
        // CSS stitching keeps the sticky header from repeating in the screenshot
        eyes.setStitchMode(StitchMode.CSS);
        // Take the screenshot and compare it against the baseline
        eyes.checkWindow();
        eyes.close();
    }

    @AfterClass
    public void close(){
        // Abort the visual test if close() was never reached (e.g., on failure)
        eyes.abortIfNotClosed();
        driver.quit();
    }
}


I have one test here that uses Applitools Eyes to handle the A/B test variations.

  • First, I open Eyes with eyes.open, just as I normally would
  • Because the page is pretty long, I call setForceFullPageScreenshot(true) to capture the full page
  • There's also a sticky header that remains visible while scrolling, so to keep it from being stitched repeatedly into our baseline image, I set the stitch mode to CSS
  • Then the magic happens with the checkWindow call, which takes the screenshot
  • Finally, I close Eyes with eyes.close
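
As a side note, newer versions of the Applitools Java SDK also offer a fluent check API. Assuming a version that ships com.applitools.eyes.selenium.fluent.Target, the body of testABVariation could be written equivalently as:

import com.applitools.eyes.selenium.fluent.Target;

// ...inside testABVariation(), replacing the checkWindow-based body:
eyes.open(driver, "A/B Company", "a/b test", new RectangleSize(1200, 890));
eyes.setStitchMode(StitchMode.CSS);
// fully() requests a full-page capture for this one check,
// instead of turning on setForceFullPageScreenshot globally
eyes.check("home page", Target.window().fully());
eyes.close();

Either style saves the same kind of baseline; the fluent call just scopes the full-page option to a single check.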

After running this test, the baseline image (which is Variation B) is saved in the Applitools dashboard. However, if I run this again, chances are that Variation A will be displayed, and in that event my visual check will fail because the site looks different.

Setting Up the Variation

In the dashboard, we see the failure, which compares Variation A with Variation B. We want to tell Applitools that both of these are valid options.

[Image: Applitools analysis]

To do so, I click the A/B button, which opens the Variations Gallery. From here, I click the Create New button.

[Image: Variations Gallery]

After clicking the Create New button, I'm prompted to name this new variation, and then it is automatically saved in the Variations Gallery. Also, the test is now marked as passed. Now, in future regression runs, if either Variation A or Variation B appears (without bugs), the test will still pass.

[Image: Regression run]

Another thing we can do is rename the variations. Notice that the original variation is named Default. By hovering over the variation, we see an option to rename it to Variation B, for example.

[Image: Baseline variation]

I can also delete a variation. So, if my team decides to remove one of the variations from the product, I can simply delete it from the Variations Gallery as well.

See It In Action!

Check out the video below to see A/B baseline variations in action.


Also published on Medium.

Further Reading

What Is Visual Testing? A Definitive Answer [and Approach] 

8 Thoughts on A/B Testing

A/B Testing: You're Doing It Wrong
