The Power of Docker and Cucumber in Automation Testing
Learn how Docker and Cucumber streamline automation testing by enabling parallel execution, reducing costs, and improving reliability in a simple setup.
Automation testing is a must for almost every software development team. But as an automation suite grows to many scenarios, its running time tends to increase significantly, and instead of reducing the team's testing turnaround time, the suite stops helping in the way that was expected. This creates a need to parallelize the automation suite. Parallelization, however, brings its own challenge: running the suite in parallel is not cheap, because it requires a bigger infrastructure.
Even so, one solution can reduce both the cost and the running time of the automation suite: Docker. Containers act as that additional infrastructure while costing little to nothing. So, in today's article, we will discuss how to reduce the testing team's total turnaround time through automation testing built on technologies like Docker and Cucumber.
Why Cucumber?
There are many automation frameworks to choose from today, but among them, Cucumber stands out as one of the best. It allows automation tests to be written from a business-oriented perspective, concealing complex coding logic and presenting the tests as human-readable sentences. This approach ensures that even individuals without automation development expertise can easily understand the tests and grasp their expected outcomes.
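As an illustration, here is a minimal, hypothetical feature file; the login scenario and step wording are invented for this example, but the Given/When/Then structure is exactly what Cucumber presents to non-technical readers:
Feature: User login
  Scenario: Successful login with valid credentials
    Given the user is on the login page
    When the user enters a valid username and password
    Then the dashboard page is displayed
Each of these sentences maps to a step definition in the automation code, so the business-facing wording stays separate from the implementation details.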
Combining Docker and Cucumber
Docker is a containerization technology that offers a closed environment for applications to operate. Among its many benefits, reliability stands out as a key advantage. By functioning as an isolated environment, Docker ensures that the application can access all the necessary artifacts within the closed environment, making the application more efficient and reliable.
Automation suites very often suffer from flaky script execution. With Docker, the automation scripts run in a closed environment, which brings consistency and reliability to every run and gives the testing team repeatable results.
The combination of Docker and Cucumber therefore makes for a strong automation suite, both in terms of reliability and ease of understanding. On top of that, Docker allows us to run the automation suite in parallel, reducing its total running time.
Here are some of the benefits of using Docker and Cucumber together:
- Docker offers parallel execution of its containers, allowing the automation suite to run concurrently, thereby reducing its total execution time.
- Cucumber facilitates the use of a human-readable language, thereby enhancing the clarity and comprehension of test scenario workflows.
- Docker offers scalability and flexibility, enabling the developers to efficiently manage extensive automation suites.
- Docker and Cucumber streamline the long-term maintenance of automation suites, allowing for seamless updates to dependencies and scenarios.
The Architecture of the Cucumber and Docker-Based Automation Framework
In this article, we will use the WDIO (WebdriverIO) framework as the base automation framework, with the entire suite's architecture built around it. To facilitate parallel execution in our automation suite, a Selenium Grid-like architecture can easily be built with Docker.
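As a rough sketch of how the suite plugs into that Grid, the WDIO configuration only needs to point at the hub. The hostname, ports, spec paths, and tag below are placeholder values, and the Cucumber integration assumes the @wdio/cucumber-framework package:
// wdio.conf.js (excerpt): point WebdriverIO at the Selenium Grid hub
exports.config = {
  hostname: 'localhost',   // hub host; inside Docker this comes from the HOST env variable
  port: 4444,              // hub port
  path: '/wd/hub',
  specs: ['./features/**/*.feature'],
  maxInstances: 2,         // how many feature files run in parallel per suite instance
  capabilities: [{ browserName: 'chrome' }],
  framework: 'cucumber',
  cucumberOpts: {
    require: ['./steps/**/*.js'],  // step definition files
    tags: '@Regression',
    timeout: 60000
  }
};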
Here are some of the key aspects of the automation suite architecture:
1. Docker Selenium Grid Architecture
The Docker Selenium Grid architecture can easily be formed using the Docker images provided by the Selenium project. We can use the official Selenium Grid hub and browser node images to assemble the Grid.
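As a minimal sketch, a hub and a single Chrome node can be started by hand with those images; the seleniarm variants below match the docker-compose file later in this article (on x86 hosts the selenium/hub and selenium/node-chrome images are used the same way):
# Create a shared network, then start the hub and one Chrome node attached to it
docker network create grid
docker run -d --net grid --name hub1 -p 4442:4442 -p 4443:4443 -p 4444:4444 seleniarm/hub:latest
docker run -d --net grid --shm-size=1g \
  -e SE_EVENT_BUS_HOST=hub1 \
  -e SE_EVENT_BUS_PUBLISH_PORT=4442 \
  -e SE_EVENT_BUS_SUBSCRIBE_PORT=4443 \
  seleniarm/node-chromium:latest
The docker-compose file shown later does exactly this, but declaratively and with scaling built in.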
2. Docker Image of Automation Suite
To bring the efficiency and reliability of Docker technology into our automation suite, an image of our automation suite code can be created. This Docker image is built from a Dockerfile that describes the code and names a base Node.js image, on top of which the automation suite image will be built.
3. Docker Compose File
The Docker Compose file is one of the best tools available for handling a containerized architecture. It is responsible for all the internal networking as well as the required ports, environment variables, and volume mounts.
This file can be used to scale the containers of any specific service up or down. In other words, it orchestrates the actual Selenium Grid architecture and allows us to run multiple instances of our automation code against the different browser nodes available, just like a real Selenium Grid setup.
How to Create a Docker Image of the Automation Suite
A Docker image is built from a Dockerfile, a plain-text template that gives the machine a set of instructions. Those instructions bundle the whole code base into an image that fulfills a specific task, so creating a Docker image of an automation suite is not a tedious job. Every Docker image requires a base image on top of which the rest of the image is built. In our use case, we will use a Node.js Alpine image as the base image. Alpine images are lightweight variants of the full images; they reduce the image size and make the build and run process fast and memory efficient.
Here’s a sample Dockerfile for the automation code image:
FROM node:20-alpine
# Install Python 3 and update PATH
RUN apk add --no-cache python3
# Set the Python 3 binary as the default python binary
RUN ln -s /usr/bin/python3 /usr/local/bin/python
# Add Python 3 binary location to the PATH environment variable
ENV PATH="/usr/local/bin:${PATH}"
# Install build tools
RUN apk add --no-cache make g++
WORKDIR /cucmber-salad
ADD . /cucmber-salad
# Install all the required libraries using npm install
RUN apk add openjdk8 curl jq && npm install
# Use the feature name as an environment variable (default: run every feature file)
ENV FEATURE=**/*.feature
# Default test environment
ENV ENVIRONMENT=staging
# Default Cucumber tag to run
ENV TAG=@Regression
# Chrome version expected on the browser nodes
ENV CHROME_VERSION=109.0.5414.74
# Selenium Grid hub host (redacted)
ENV HOST=***.***.**.***
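Assuming this Dockerfile sits at the root of the suite, the image can be built and run with standard Docker commands. The image name and the npm test entry point below are illustrative, since the actual script that launches WDIO may be named differently:
# Build the automation suite image from the Dockerfile above
docker build -t cucmber-salad .
# Run one suite instance, overriding the baked-in defaults at run time
docker run --rm -e TAG=@Smoke -e ENVIRONMENT=staging -e HOST=hub1 cucmber-salad npm test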
Docker-Compose File Setup
The docker-compose file is the main configuration file. It represents the overall architecture of the setup and helps organize the whole suite. A traditional docker-compose file consists of different services that connect and work together to form a network.
In our case, the Selenium hub, the browser nodes, and the automation code image all reside in the docker-compose file as services. Together, they construct the Selenium Grid architecture and run the automation suite in parallel.
A sample of the docker-compose file:
version: "3"
services:
  hub1:
    image: seleniarm/hub:latest
    ports:
      - "4442:4442"
      - "4443:4443"
      - "5554:4444"
  chrome1:
    image: seleniarm/node-chromium:latest
    shm_size: '1gb'
    depends_on:
      - hub1
    environment:
      - SE_EVENT_BUS_HOST=hub1
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
      - HUB_HOST=hub1
      - SE_NODE_MAX_SESSIONS=2
      - VNC_NO_PASSWORD=1
With this docker-compose file, we can bring up the Selenium Grid architecture with one simple command:
docker-compose up -d --scale chrome1=2 --scale chrome2=2 hub1 chrome1 hub2 chrome2
This command starts the two hubs and their Chrome nodes, and each Chrome node can accommodate one instance of the automation image. The --scale flag starts multiple instances of a docker-compose service.
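The automation code image mentioned above can be added to the same file as just another service. Here is a minimal sketch, assuming the image built earlier is tagged cucmber-salad and that the suite reads the hub address from the HOST variable defined in its Dockerfile:
  test-suite1:
    image: cucmber-salad:latest
    depends_on:
      - chrome1
    environment:
      - HOST=hub1
      - TAG=@Regression
      - ENVIRONMENT=staging
Scaling this service the same way (--scale test-suite1=2) starts one suite instance per available browser node, which is what actually runs the scenarios in parallel.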
Conclusion
By leveraging Docker alongside Cucumber, we can significantly reduce the execution time of the automation suite, thereby reducing the overall turnaround time of the testing phase in the software development cycle.
With this setup, multiple instances of the automation suite run in parallel against the Selenium Grid architecture.