
AI-Driven Self-Healing Tests With Playwright, Cucumber, and JS

Self-healing for automation testing significantly reduces maintenance by automatically adapting to changes in the application's user interface.

By Priya Yesare · Mar. 13, 25 · Tutorial

Automated testing is essential to modern software development, ensuring stability and reducing manual effort. However, test scripts frequently break due to UI changes, such as modifications in element attributes, structure, or identifiers. Traditional test automation frameworks rely on static locators, making them vulnerable to these changes. AI-powered self-healing automation addresses this challenge by dynamically selecting and adapting locators based on real-time evaluation.

Self-healing is crucial for automation testing because it significantly reduces the maintenance overhead associated with test scripts by automatically adapting to changes in the application's user interface. This allows tests to remain reliable and functional even when the underlying code or design is updated, thus saving time and effort for testers while improving overall test stability and efficiency.

Key Reasons Why Self-Healing Is Needed in Automation Testing

Reduces Test Maintenance

When UI elements change (like button IDs or class names), self-healing mechanisms can automatically update the test script to locate the new element, eliminating the need for manual updates and preventing test failures due to outdated locators.

Improves Test Reliability

By dynamically adjusting to changes, self-healing tests are less prone to "flaky" failures caused by minor UI modifications, leading to more reliable test results. 

Faster Development Cycles

With less time spent on test maintenance, developers can focus on building new features and delivering software updates faster. 

Handles Dynamic Applications

Modern applications often have dynamic interfaces where elements change frequently, making self-healing capabilities vital for maintaining test accuracy.

How Self-Healing Works

  • Heuristic algorithms. These algorithms analyze the application's structure and behavior to identify the most likely candidate element to interact with when a previous locator fails. 
  • Intelligent element identification. Using techniques like machine learning, the test framework can identify similar elements even if their attributes change slightly, allowing it to adapt to updates.  
  • Multiple locator strategies. Test scripts can use a variety of locators (like ID, XPath, CSS selector) to find elements, increasing the chances of successfully identifying them even if one locator becomes invalid. 
  • Heuristic-based fallback mechanism. Let’s understand self-healing using a heuristic-based fallback mechanism by implementing it with an example. 

Step 1

Initialize a Playwright project by executing the command:

Plain Text
 
npm init playwright


Adding Cucumber for BDD Testing

Cucumber allows for writing tests in Gherkin syntax, making them readable and easier to maintain for non-technical stakeholders.  

Plain Text
 
npm install --save-dev @cucumber/cucumber



Step 2

Create the folder structure below and add the required files (add_to_cart.feature, add_to_cart.steps.js, cucumber.js, browserSetup.js, and helpers.js).

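The exact layout is flexible; one arrangement consistent with the require paths and the cucumber.js configuration used later in this tutorial would be:

Plain Text
 
project-root/
├── cucumber.js
├── package.json
└── tests/
    ├── features/
    │   └── add_to_cart.feature
    ├── steps/
    │   └── add_to_cart.steps.js
    └── utils/
        ├── browserSetup.js
        └── helpers.js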

Step 3

Add code to browserSetup.js.

JavaScript
 
const { chromium } = require('playwright');

async function launchBrowser(headless = false) {
  const browser = await chromium.launch({ headless });
  const context = await browser.newContext();
  const page = await context.newPage();
  return { browser, context, page };
}

module.exports = { launchBrowser };


Step 4

Add the self-healing helper function to the helpers.js file.

This function is designed to "self-heal" by trying multiple alternative selectors when attempting to click an element. If one selector fails (for example, due to a change in the page's structure), it automatically tries the next one until one succeeds or all have been tried.

JavaScript
 
// Self-healing helper with a short wait timeout per selector
async function clickWithHealing(page, selectors) {
  for (const selector of selectors) {
    try {
      console.log(`Trying selector: ${selector}`);
      await page.waitForSelector(selector, { timeout: 2000 }); // 2s per selector keeps the fallback fast
      await page.click(selector);
      console.log(`Clicked using selector: ${selector}`);
      return;
    } catch (err) {
      console.log(`Selector "${selector}" not found. Trying next alternative...`);
    }
  }
  throw new Error(`None of the selectors matched: ${selectors.join(", ")}`);
}

module.exports = { clickWithHealing };
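
The same fallback loop generalizes to other interactions. As a minimal sketch, here is a hypothetical companion helper (not used in the steps that follow) that applies the identical pattern to text inputs:

JavaScript
 
// Hypothetical fill variant of the same heuristic fallback pattern
async function fillWithHealing(page, selectors, value) {
  for (const selector of selectors) {
    try {
      await page.waitForSelector(selector, { timeout: 2000 });
      await page.fill(selector, value);
      console.log(`Filled using selector: ${selector}`);
      return;
    } catch (err) {
      console.log(`Selector "${selector}" not found. Trying next alternative...`);
    }
  }
  throw new Error(`None of the selectors matched: ${selectors.join(", ")}`);
}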


Step 5

Write a test scenario in the add_to_cart.feature file.

Gherkin
 
Feature: Add Item to Cart

  Scenario Outline: User adds an item to the cart successfully
    Given I navigate to the homepage
    When I add the "<itemtype>" item to the cart
    Then I should see the item in the cart
    Examples:
      | itemtype |
      | Pliers   |


Step 6

Implement the corresponding step definition.

JavaScript
 
const { Given, When, Then, Before, After, setDefaultTimeout } = require('@cucumber/cucumber');
const { launchBrowser } = require('../utils/browserSetup');
const { clickWithHealing } = require('../utils/helpers');

// Increase default timeout for all steps to 60 seconds
setDefaultTimeout(60000);

let browser;
let page;

// Launch the browser before each scenario
Before(async function () {
  const launch = await launchBrowser(false); // set headless true/false as needed
  browser = launch.browser;
  page = launch.page;
});

// Close the browser after each scenario
After(async function () {
  await browser.close();
});

Given('I navigate to the homepage', async function () {
  await page.goto('https://practicesoftwaretesting.com/');
});

When('I add the {string} item to the cart', async function (itemName) {
  this.itemName = itemName;

  // Self-healing selectors for the product item
  const productSelectors = [
    `//img[@alt='${itemName}']`,
    `text=${itemName}`,
    `.product-card:has-text("${itemName}")`
  ];
  
  await clickWithHealing(page, productSelectors);
  // Hard wait for the product page to load; a targeted waitForSelector would be faster
  await page.waitForTimeout(10000);

  // Self-healing selectors for the "Add to Cart" button
  const addToCartSelectors = [
    'button:has-text("Add to Cart")',
    '#add-to-cart',
    '.btn-add-cart'
  ];
  await clickWithHealing(page, addToCartSelectors);
});

Then('I should see the item in the cart', async function () {
  const cartIconSelectors = [
    'a[href="/cart"]',
    '//a[@data-test="nav-cart"]',
    'button[aria-label="cart"]',
    '.cart-icon'
  ];
  await clickWithHealing(page, cartIconSelectors);
  const itemInCartSelector = `text=${this.itemName}`;
  await page.waitForSelector(itemInCartSelector, { timeout: 10000 });
});


Step 7

Add the cucumber.js file.

The cucumber.js file is the configuration file for Cucumber.js, which allows you to customize how your tests are executed.

We will use the file to define:

  • Feature file paths
  • Step definition locations
JavaScript
 
module.exports = {
    default: `--require tests/steps/**/*.js tests/features/**/*.feature --format summary`
};
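
For reference, @cucumber/cucumber v8 and later also accept an object-style profile, which can be easier to extend; a sketch of an equivalent configuration:

JavaScript
 
// cucumber.js, object-style profile (supported in @cucumber/cucumber v8+)
module.exports = {
  default: {
    paths: ['tests/features/**/*.feature'],
    require: ['tests/steps/**/*.js'],
    format: ['summary'],
  },
};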


Step 8

Update package.json to add scripts.

JSON
 
"scripts": {
    "test": "cucumber-js"
  },


Step 9

Execute the test script.

Plain Text
 
npm run test


Test execution result: as the log output shows, the code first tried the selector a[href="/cart"]; when that selector could not be found, it moved on to the next alternative, //a[@data-test="nav-cart"], which succeeded, and the element was clicked using that selector.

Intelligent Element Identification + Multiple Locator Strategies

Let's explore, with an example, how to incorporate multiple locator strategies into AI-powered self-healing tests with an ML-based fallback. The idea is to try each known locator in a predefined order, resorting to the ML-based fallback only when all known locators fail.

High-Level Overview

  1. Multiple locator strategies. Maintain a list of potential locators (e.g., CSS, XPath, text-based, etc.). Your test tries each in turn.
  2. AI/ML fallback. If all known locators fail, capture a screenshot and invoke your ML model to detect the element visually.

Below is an example of the AI-powered self-healing approach, showing how to integrate TensorFlow.js (specifically @tensorflow/tfjs-node) to perform a real machine-learning-based fallback. We’ll extend the findElementUsingML function to load an ML model, run inference on a screenshot, and parse the results to find the target UI element.

Note: In a real-world scenario, you’d have a trained object detection or image classification model that knows how to detect specific UI elements (e.g., “Add to Cart” button). For illustration, we’ll show pseudo-code for loading a model and parsing bounding box predictions. The actual model and label mapping will depend on your training data and approach.

Step 1

Let’s begin by setting up the Playwright project and installing the dependencies (Cucumber and TensorFlow).

Plain Text
 
npm init playwright
npm install --save-dev @cucumber/cucumber
npm install @tensorflow/tfjs-node


Step 2

Create the folder structure below and add the required files:

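A layout consistent with the require paths in the code that follows (names are illustrative) would be:

Plain Text
 
project-root/
├── features/
│   └── add_to_cart.feature
├── step_definitions/
│   └── steps.js
└── util/
    ├── aiLocator.js
    ├── locatorHelper.js
    └── model/
        └── model.json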

  • The model folder contains the trained TF.js model files (e.g., model.json and associated weight files).
  • aiLocator.js loads the model and runs inference when needed.
  • locatorHelper.js tries multiple standard locators, then calls the AI fallback if all fail.

Step 3

Let's implement changes in the locatorHelper.js file.

This file contains a helper function to find an element using multiple locator strategies. If all of them fail, it delegates to the AI fallback; the implementation appears in the next step.

Multiple Locators

  • The function takes an array of locators (locators) and attempts each one in turn.
  • If a locator succeeds, we return immediately.

AI Fallback

  • If all standard locators fail, we capture a screenshot and call the findElementUsingML function to get bounding box coordinates for the target element.
  • Return the coordinates if found, or null if the AI also fails.

Step 4

Below is the findElement helper for util/locatorHelper.js. When every standard locator fails, it captures a screenshot and delegates to findElementUsingML in util/aiLocator.js, which simulates an ML-based locator. In a production implementation, you’d load your trained ML model (for example, using TensorFlow.js) to process the screenshot and return the location (bounding box) of the “Add to Cart” button; that file is implemented in Step 5.

JavaScript
 
const { findElementUsingML } = require('./aiLocator');

async function findElement(page, screenshotPath, locators, elementLabel) {
  for (const locator of locators) {
    try {
      const element = await page.$(locator);
      if (element) {
        console.log(`Element found using locator: "${locator}"`);
        return { element, usedAI: false };
      }
    } catch (error) {
      console.log(`Locator failed: "${locator}" -> ${error}`);
    }
  }

  // If all locators fail, attempt AI-based fallback
  console.log(`All standard locators failed for "${elementLabel}". Attempting AI-based locator...`);
  await page.screenshot({ path: screenshotPath });
  const coords = await findElementUsingML(screenshotPath, elementLabel);
  if (coords) {
    console.log(`ML located element at x=${coords.x}, y=${coords.y}`);
    return { element: coords, usedAI: true };
  }

  return null;
}

module.exports = { findElement };


Step 5

Let’s implement changes in the aiLocator.js file.

Below is a mock example of how you might load and run inference with TensorFlow.js (using @tensorflow/tfjs-node), parse bounding boxes, and pick the coordinates for the “Add to Cart” button.

Disclaimer: The code below shows the overall structure. You’ll need a trained model that can detect or classify UI elements (e.g., a custom object detection model). The actual code for parsing predictions will depend on how your model outputs bounding boxes, classes, and scores.

JavaScript
 
// util/aiLocator.js
const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');
const path = require('path');

// For demonstration, we store a global reference to the loaded model
let model = null;

/**
 * Loads the TF.js model from file system, if not already loaded
 */
async function loadModel() {
  if (!model) {
    const modelPath = path.join(__dirname, 'model', 'model.json');
    console.log(`Loading TF model from: ${modelPath}`);
    model = await tf.loadGraphModel(`file://${modelPath}`);
  }
  return model;
}

/**
 * findElementUsingML
 * @param {string} screenshotPath - Path to the screenshot image.
 * @param {string} elementLabel - The label or text of the element to find.
 * @returns {Promise<{x: number, y: number}>} - Coordinates of the element center.
 */
async function findElementUsingML(screenshotPath, elementLabel) {
  console.log(`Running ML inference to find element: "${elementLabel}"`);

  try {
    // 1. Read the screenshot file into a buffer
    const imageBuffer = fs.readFileSync(screenshotPath);

    // 2. Decode the image into a tensor [height, width, channels]
    const imageTensor = tf.node.decodeImage(imageBuffer, 3);

    // 3. Expand dims to match model's input shape: [batch, height, width, channels]
    const inputTensor = imageTensor.expandDims(0).toFloat().div(tf.scalar(255));

    // 4. Load (or retrieve cached) model
    const loadedModel = await loadModel();

    // 5. Run inference
    //    The output structure depends on your model (e.g., bounding boxes, scores, classes)
    //    For instance, an object detection model might return:
    //    {
    //      boxes: [ [y1, x1, y2, x2], ... ],
    //      scores: [ ... ],
    //      classes: [ ... ]
    //    }
    const prediction = await loadedModel.executeAsync(inputTensor);

    // Example: Suppose your model returns an array of Tensors: [boxes, scores, classes]
    //   boxes: shape [batch, maxDetections, 4]
    //   scores: shape [batch, maxDetections]
    //   classes: shape [batch, maxDetections]
    //
    // NOTE: The exact shape/names of the outputs differ by model architecture.
    const [boxesTensor, scoresTensor, classesTensor] = prediction;

    const boxes = await boxesTensor.array();    // shape: [ [ [y1, x1, y2, x2], ... ] ]
    const scores = await scoresTensor.array();  // shape: [ [score1, score2, ... ] ]
    const classes = await classesTensor.array(); // shape: [ [class1, class2, ... ] ]

    // We'll assume only 1 batch => use boxes[0], scores[0], classes[0]
    const b = boxes[0];
    const sc = scores[0];
    const cl = classes[0];

    // 6. Find the bounding box for "Add to Cart" or the best match for the given label
    //    In a real scenario, you might have a class index for "Add to Cart"
    //    or a text detection pipeline. We'll do a pseudo-search for a known class ID.
    let bestIndex = -1;
    let bestScore = 0;

    for (let i = 0; i < sc.length; i++) {
      const classId = cl[i];
      // Suppose "Add to Cart" is class ID 5 in your model (completely hypothetical).
      // Or if you have a text-based detection approach, you’d match on the text.
      if (classId === 5 && sc[i] > bestScore) {
        bestScore = sc[i];
        bestIndex = i;
      }
    }

    // If we found a bounding box with decent confidence
    if (bestIndex >= 0 && bestScore > 0.5) {
      const [y1, x1, y2, x2] = b[bestIndex];
      console.log(`Detected bounding box for "${elementLabel}" -> [${y1}, ${x1}, ${y2}, ${x2}] with score ${bestScore}`);

      // Convert normalized coords to actual pixel coords
      const [height, width] = imageTensor.shape; // shape is [height, width, 3]
      const top = y1 * height;
      const left = x1 * width;
      const bottom = y2 * height;
      const right = x2 * width;

      // Calculate the center of the bounding box
      const centerX = left + (right - left) / 2;
      const centerY = top + (bottom - top) / 2;

      // Clean up tensors to free memory
      tf.dispose([imageTensor, inputTensor, boxesTensor, scoresTensor, classesTensor]);

      return { x: Math.round(centerX), y: Math.round(centerY) };
    }

    // If no bounding box matched the criteria, return null
    console.warn(`No bounding box found for label "${elementLabel}" with sufficient confidence.`);
    tf.dispose([imageTensor, inputTensor, boxesTensor, scoresTensor, classesTensor]);
    return null;

  } catch (error) {
    console.error('Error running AI locator:', error);
    return null;
  }
}

module.exports = { findElementUsingML };
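
To sanity-check the model wiring in isolation, you could invoke the function directly against a saved screenshot (the path and label here are hypothetical):

JavaScript
 
// Quick standalone check of the ML locator (hypothetical screenshot path)
const { findElementUsingML } = require('./util/aiLocator');

(async () => {
  const coords = await findElementUsingML('page.png', 'Add to Cart');
  console.log(coords); // e.g., { x: 640, y: 480 }, or null if nothing matched
})();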


Let's understand the machine learning flow.

1. Loading the Model

  • We start by loading a pre-trained TensorFlow.js model from the file system. To improve performance, we cache the model in memory so it isn’t reloaded on every call.

2. Preparing the Image

  • Decode the image. Convert it into a format the model understands.
  • Add a batch dimension. Reshape it to match the model's input format.
  • Normalize pixel values. Scale pixel values between 0 and 1 to improve accuracy.

3. Running Inference (Making a Prediction)

  • We pass the processed image into the model for analysis.
  • For object detection, the model outputs:
    • Bounding box coordinates (where the object is in the image).
    • Confidence scores (how certain the model is about its prediction).
    • Object labels (e.g., "cat," "car," "dog").

4. Processing the Predictions

  • Identify the most confident prediction.
  • Convert the model’s output coordinates into actual pixel positions on the image.

5. Returning the Result

  • If an object is detected, return its center coordinates (x, y).
  • If no object is found or confidence is too low, return null.

6. Memory Cleanup

  • TensorFlow.js manages tensor memory outside the JavaScript garbage collector, so we must free it explicitly by disposing of temporary tensors after use (see the sketch below).
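
If you would rather not track every intermediate tensor by hand, TensorFlow.js offers tf.tidy, which disposes everything created inside its callback except the returned value. Note that tf.tidy cannot wrap async code, so only the synchronous preprocessing fits inside it; a minimal sketch:

JavaScript
 
const tf = require('@tensorflow/tfjs-node');

// tf.tidy frees every intermediate tensor created in the callback,
// keeping only the tensor it returns.
function preprocessScreenshot(imageBuffer) {
  return tf.tidy(() => {
    const image = tf.node.decodeImage(imageBuffer, 3);        // [height, width, 3]
    return image.expandDims(0).toFloat().div(tf.scalar(255)); // [1, height, width, 3]
  });
}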

Step 6

Add the feature file (the same scenario as before).

Gherkin
 
Feature: Add Item to Cart

  Scenario Outline: User adds an item to the cart successfully
    Given I navigate to the homepage
    When I add the "<itemtype>" item to the cart
    Then I should see the item in the cart
    Examples:
      | itemtype |
      | Pliers   |


Step 7

Implement the corresponding step definition.

1. addToCartLocators

  • We store multiple locators (CSS, text, XPath) in an array.
  • The test tries them in the order listed.

2. findElement

  • If none of the locators work, it uses the ML-based fallback to find coordinates.
  • The return value tells us whether we used AI fallback (usedAI: true) or a standard DOM element (usedAI: false).

3. Clicking the Element

  • If we get a real DOM handle, we call element.click().
  • If we get coordinates from the AI fallback, we call page.mouse.click(x, y).
JavaScript
 
// step_definitions/steps.js
const { Given, When, Then } = require('@cucumber/cucumber');
const { chromium } = require('playwright');
const path = require('path');
const { findElement } = require('../util/locatorHelper');

let browser, page;

Given('I navigate to the homepage', async function () {
  browser = await chromium.launch({ headless: true });
  page = await browser.newPage();
  await page.goto('https://practicesoftwaretesting.com/');
});

When('I add the {string} item to the cart', async function (itemName) {
  // Define multiple possible locators for the product tile
  const productSelectors = [
    `//img[@alt='${itemName}']`,
    `text=${itemName}`,
    `.product-card:has-text("${itemName}")`
  ];

  // Hard wait for the page to settle; a targeted waitForSelector would be faster
  await page.waitForTimeout(10000);

  // Attempt to find the element using multiple locators, then AI fallback
  const screenshotPath = path.join(__dirname, 'page.png');
  const found = await findElement(page, screenshotPath, productSelectors, 'Select Product');

  if (!found) {
    throw new Error('Failed to locate the product using all strategies and AI fallback.');
  }

  if (!found.usedAI) {
    // We have a DOM element handle
    await found.element.click();
  } else {
    // We have x/y coordinates from AI
    await page.mouse.click(found.element.x, found.element.y);
  }

  // Define multiple possible locators for the Add to Cart button
  const addToCartLocators = [
    'button.add-to-cart',               // CSS locator
    'text="Add to Cart"',               // Playwright text-based locator
    '//button[contains(text(),"Add")]', // XPath
  ];

  // Attempt to find the element using multiple locators, then AI fallback
  const screenshotPath1 = path.join(__dirname, 'page1.png');
  const found1 = await findElement(page, screenshotPath1, addToCartLocators, 'Add to Cart');

  if (!found1) {
    throw new Error('Failed to locate the Add to Cart button using all strategies and AI fallback.');
  }

  if (!found1.usedAI) {
    // We have a DOM element handle
    await found1.element.click();
  } else {
    // We have x/y coordinates from AI
    await page.mouse.click(found1.element.x, found1.element.y);
  }
});

Then('I should see the item in the cart', async function () {
  // Wait for cart item count to appear or update
  await page.waitForSelector('.cart-items-count', { timeout: 5000 });
  const countText = await page.$eval('.cart-items-count', el => el.textContent.trim());
  if (parseInt(countText, 10) <= 0) {
    throw new Error('Item was not added to the cart.');
  }
  console.log('Item successfully added to the cart.');
  await browser.close();
});


Using TensorFlow.js for self-healing tests involves:

  1. Multiple locators. Attempt standard locators (CSS, XPath, text-based).
  2. Screenshot + ML inference. If standard locators fail, take a screenshot, load it into your TF.js model, and run object detection (or a custom approach) to find the desired UI element.
  3. Click by coordinates. Convert the predicted bounding box into pixel coordinates and instruct Playwright to click at that location.

Conclusion

This approach provides a robust fallback that can adapt to UI changes if your ML model is trained to recognize the visual cues of your target elements. As your UI evolves, you can retrain the model or add new examples to improve detection accuracy, thereby continuously “healing” your tests without needing to hardcode new selectors.

