A Beginner’s Guide to Automated Cross-Browser Compatibility Testing
This article demonstrates why any new application or feature release needs to be tested for cross-browser compatibility.
How many digital devices have you used until now? Probably a lot, right?
How many of the below-listed browsers have you used to date?
- Chrome
- Safari
- Firefox
- Samsung Internet
- Edge
- Opera
- UC Browser
Maybe all of them.
What would your answer be if I asked you to recall the most frequently used web apps or services on these devices and browsers? In my case, it would be LinkedIn. I remember this one app because it has worked flawlessly on every device and browser I’ve ever used.
If you wonder why I started with these questions, let me tell you that each of them is integral to understanding the importance of cross-browser compatibility testing. As promised, I will demonstrate why any new application or feature release needs to be tested for cross-browser compatibility in the sections below.
The Need for Automated Cross-Browser Compatibility Testing
As per StatCounter, in Asia, 82% of mobile users are on Android, 17% on iOS, and less than 1% on other operating systems (OSs). In the USA, by contrast, 61% are on iOS and just 38% on Android. It may unsettle you to know that in America, 10.5% of Windows users are still on Windows 7, more than 3% use Windows 8.1, and some are still on Vista. The screen-resolution stats are equally diverse: 360x640 px, 1366x768 px, and 1920x1080 px are the default/preferred resolutions on almost 30% of the devices around the world.
Worldwide, Android dominates the OS market, followed by Windows, iOS, and OSX. Similarly, the browser market is dominated by Chrome followed by Safari, Firefox, Samsung Internet, Edge, Opera, and UC Browser.
The Most Common Cross-Browser Compatibility Issues
Modern web applications are built using sophisticated CSS, HTML5, JS, and several other frameworks. Unfortunately, not every browser upgrades itself to support all the latest features deployed by developers, and even when browsers do, it can take a lot of time. This gap often leads to issues. Here are some of the most common cross-browser compatibility issues that necessitate testing:
- All browsers or particular versions do not support some HTML5 tags.
- Complex CSS, AJAX, JS, Flex features are either not supported or behave differently.
- Not all image formats are supported by all browsers (PNG issues on older IE versions are well known).
- Not all media formats are supported (Flash issues on iPhone and BlackBerry).
- Device constraints: for example, the screen readers that people with disabilities rely on may fail to read website content.
Should a business ignore a significant chunk of customers and serve them a broken UI/UX? Absolutely not. With such a diverse distribution of digital devices and user preferences, businesses need to figure out their priorities and strategize with the following questions in mind:
- How can they design, develop, and deploy robust applications for all their users in a cost-effective way?
- How do they ensure that an application works well for all users?
- How do they deliver a seamless user experience across devices, OSs, browsers, and screen-resolutions that their customers use?
The answer to all these questions is automated cross-browser compatibility testing!
Automation Testing Enabling Agile Software Development
Traditionally, software applications weren’t that complicated, and manual cross-browser testing was sufficient to test new builds and feature releases. But now that globalization is at its peak and new markets are opening up left and right, businesses need to operate in an agile way.
We're in a customer-centric economy now. Customers are digitally savvy and ever-connected, and your competitors are vying for ways to lure, convert, and retain your customers. As per McKinsey, with every single poor customer experience, businesses risk 25% customer attrition. In this cut-throat world, you can be anything but slow at releasing new features, solving customer problems, and delivering a frictionless user experience. Cross-browser compatibility testing is the key here, and Selenium is no less than a magic wand for automation test developers!
Automation Testing Over Manual Cross-Browser Testing
Manual testing has its benefits, but it's tortoise-slow, cost-inefficient, and resource-intensive. Automated cross-browser testing is one of the critical steps to help you achieve the required business agility. After all, agile digital businesses need agile development practices complemented with an agile testing culture. This article aims to elucidate every necessary high-level concept for beginners venturing into the world of automated cross-browser compatibility testing. Let's start with the basics.
What Is Cross-Browser Testing?
In a nutshell, cross-browser compatibility testing is an approach to check, verify, and iteratively enhance your web applications so that they work equally well for all your users, irrespective of device, operating system, browser, or version.
If your customers are spread across a Mac Air, Mac Pro, Dell Inspiron, or Lenovo Yoga, and across Edge, Safari, Chrome, and their different versions, you need to make sure your application works on all of them. For big applications like Facebook, with billions of users, it should ideally work on every possible permutation and combination of browser, version, operating system, and screen resolution.
Cross-Browser Testing Goals
For small-scale applications, the following question-framework might help in figuring out the right cross-browser testing goals:
- What browsers do your users and prospective customers use?
- Does your cross-browser testing plan incorporate older versions and pre-release versions of those browsers?
- Have you targeted the latest released devices (laptops, tablets, smartTVs, mobile phones) with modern capabilities?
- Is your application designed for people with disabilities too? Does your cross-browser testing handle it to ensure a hiccup-free experience for people who utilize assistive technologies?
- What are the best cross-browser testing hacks for a faster release?
Before moving ahead, it’s appropriate to brush up on some cross-browser lingo here:
Working Cross-Browser - The application works fine, with acceptable UX, core functionalities, and features, across browsers, OSs, and devices.
Across An Acceptable Number Of Web Browsers - It's practically impossible for a complex application to render on every browser in the world with all functionalities intact. 100% coverage is challenging, but it's possible to estimate which browsers your users are on and code defensively to get the application working on that acceptable set of browsers.
Manual vs Automated Cross-Browser Testing
Cross-browser testing mainly falls under three categories: exploratory, visual, and functional testing. How a website functions can be tested in an automated fashion, but how it feels visually can only be judged manually. It's not a good idea, either, to manually test entire applications: manual testing is not just time- and resource-intensive, but it also makes organizations slow to roll out new app features. So, what's the way out?
What About A Hybrid Testing Strategy?
Modern cross-browser compatibility testing engineers are well equipped to write scripts that perform almost all manual testing tasks in an automated fashion, using Selenium and any language of their preference: Java, Python, C#, JS, PHP, etc. A hybrid approach has a few obvious benefits:
- Websites can be audited using automated cross-browser compatibility testing to see whether all the HTML, JS, and CSS render without errors or warnings across browsers.
- For future reference and cross-verification, automation test engineers may also automate the process of taking screenshots of web elements and storing them either locally or in the cloud.
- To ensure inclusivity and aesthetic appearance, testing executives may revisit these screenshots and cross-check if the website renders well for people with disabilities. Some calculated manual tests could be part of the strategy.
Benefits of Automated Cross-Browser Compatibility Testing
There are countless benefits to implementing a proper strategy for automated cross-browser compatibility testing of all your web applications or websites. Let’s dive deeper into the most crucial advantages:
Seamless User Experience, Build Compatibility, and Quality Assurance
The direct benefit is delivering a flawless user experience to end customers. This translates into sustained business, lower customer attrition, and brainpower freed up for scaling the business. Automated cross-browser testing enables organizations to stay lean and innovate: brainpower can be put toward developing exceptional solutions rather than debugging broken user interfaces across browsers.
Mitigate Risks Early at Speed
QA teams have reported that automation saves them 70% of their time. With test automation, usability and functionality bugs can be identified much faster than with manual testing. This empowers testers to evaluate each risk's impact on or exposure to the market, prioritize risks, and fix them accordingly.
Shorten Release Cycle and Ship Code Faster
With the invention of new development approaches, the testing environment is also improving. Continuous testing has become the de facto standard in the application testing community. Automation tools like Selenium integrate well into a DevOps automation strategy, and testing is a crucial stage of any CI/CD pipeline implementation. With continuous automated testing, development teams can ship code faster and shorten the release cycle for new features.
Cost Efficiency, Time Efficiency, Test coverage, and Accuracy
With managed automation testing solutions, testing teams
- need not worry about the gruesomely tedious process of setting up the infrastructure with the right set of version dependencies and instead,
- can focus on writing robust scripts in their preferred language(s) - Ruby, Python, Java, PHP, etc.
In fact, cloud testing automation services like LambdaTest successfully enable companies to avoid the high costs associated with establishing testing infrastructure. Plus, you can also do the following:
- scale to 2000+ browsers or devices.
- shrink your in-house testing infrastructure and use a cloud-based solution on a need basis.
Such a varied combination of browsers, devices, OSs, and screen resolutions provides better test coverage for your application. Automated cross-browser compatibility testing naturally helps avoid human errors, thus delivering better accuracy too.
How Do You Perform Effective Cross-Browser Compatibility Testing at Scale?
Ideally, cross-browser testing is carried out in four phases.
Ideation and Planning
This phase involves thoroughly investigating your application's target market and its required usability features, and devising a testing roadmap to ensure the application's successful development and deployment.
For instance, suppose you're building a B2B product for the Indian market, where many people are still using IE, and your product has a WebGL-powered 3D animation feature (generally found in modern eCommerce sites or digital healthcare solutions). You then need to factor in that WebGL won't be supported on IE versions older than IE 11. In that scenario, you need to make your application render without this feature on browsers released before IE 11, while the rest of the application does its work without breaking the UI/UX.
Development
In this phase, a granular breakdown of your application into separate components, and further into distinct functionalities, can help you devise independent code solutions for features that might not render well on different browsers or devices. This can't be overlooked, as it will be exposed during testing, so it's better to embrace this strategy at the development stage. Developers also need to accept that some features won't render at all on some devices because of hardware constraints and will require an alternative.
Testing
This is the karma stage for testing engineers: the phase where automation test engineers test the new functionalities of every new build or feature implementation. The right approach is to:
- First, test the application on stable, common browsers like Safari, Chrome, and Firefox, and verify that no bug hinders the application's rendering.
- Next, try lo-fi accessibility testing to cross-verify that the application works with minimal assistive tech (say, only with a keyboard or a screen reader).
- Once the application passes the above stages, the cross-browser testing team can sniff cross-browser issues for older or redundant browser versions and weed them out.
- Test on all possible combinations of target user devices and browsers you identified in the ideation stage. If you can afford physical devices, then great. Else, try on emulators or virtual machines (VMs).
- Finally, you can use automated testing tools like Selenium Grid, or commercial services like LambdaTest, to further improve the accuracy, speed, and browser-environment coverage of your cross-browser compatibility testing.
Cross-Browser Compatibility Testing Checklist
- HTML, XHTML, jQuery, CSS, JS, AJAX validation
- SSL certificate validation
- HTML Character Encoding and Date Formats
- Responsive Application - Rendering In Different Screen Resolutions
- Layout consistency across browsers - font styles, color rendering, navigation, client-side form validation, zooming features
- Functional - Animations and effects, links, plugins, scripts
Continuous Iterative Testing and Development
Identify the root cause behind any bugs discovered in the cross-browser testing phase, then narrow down the exact browser, machine, resolution, and version where the bug persists. Test on similar browsers, devices, or configurations to evaluate the scale of the bug, and report it to the concerned development team.
Note: The bug need not always be on your side. Sometimes it is on the browser or device vendor's side. So, analyze properly and raise requests with the concerned teams.
Challenges Associated With Cross-Browser Testing
It is time to look into the common challenges associated with cross-browser testing.
Strategy and Planning Challenges
As mentioned earlier, ideation and planning are crucial for cross-browser testing, and poor planning can pose serious testing challenges for the organization. Some features are just not going to work on all browsers, and some browsers and devices might not provide the desired level of performance. Under such circumstances, it can be challenging for testing teams to perform cross-browser testing independently, as it requires constant communication and collaboration with the development and business teams. A quick solution is to use a real-time, cloud-powered testing environment where the application's different stakeholders can collaborate in real time. This helps move things faster.
Infrastructure and Scalability Challenges
A large and complex application with a diverse and distributed user base must be tested on several devices, platforms, and browsers, and the numbers can immediately shoot up to multiple hundreds. This can put a hefty dent in company finances in terms of devices, environments, and human resources. Again, leveraging cloud automation testing solutions can drastically cut costs and optimize resource utilization.
Browser Version and Device Update Challenge
Intending to dominate the browser and device market, companies and vendors keep releasing new versions with security patches and enhanced features. All major browsers use different rendering engines: Chrome uses Blink, Firefox uses Gecko, Safari uses WebKit, and Edge uses EdgeHTML. Each of these engines processes CSS and HTML in its own way; some adhere to the W3C guidelines, while others overlook them. All of this increases the overhead for testers in ensuring cross-browser compliance.
Training Challenges for Ninja Automation Test Engineers
Automation sounds easy at first, but automating complex features can be daunting and demands an advanced level of skill. As a tester, you need to continually upgrade yourself with the latest automation testing skills (especially in Selenium), and developers need to stay abreast of developments in the browser world. Enterprises with testing requirements need to invest heavily in upskilling their ninja automation testers.
Recommended Tools and Cloud-Based Services For Cross-Browser Compatibility Testing
Most popular open-source frameworks for web and mobile testing:
- Selenium (web)
- Appium (mobile)
Commercial Cloud-Based Cross-Browser Testing Apps
Modern cloud-based cross-browser testing solution enablers like LambdaTest come equipped with advanced features like AI-powered testing, team collaboration, enhanced visibility, parallel testing, automated testing, on-demand scaling, layout screenshot capturing, and testing-session recordings.
For reporting browser bugs, use the browser vendors' own issue trackers, such as Mozilla's Bugzilla or the Chromium issue tracker.
To test pre-releases of upcoming browser versions, try the vendors' preview channels, such as Chrome Canary and Beta, Firefox Nightly and Beta, and Safari Technology Preview.
Cross-Browser Testing Demo
In this drag-and-drop testing demo, we shall use the free version of LambdaTest for automation testing in the cloud. We will use Python to drag and drop an HTML element on jqueryui.com. We will test this feature in both Chrome and Firefox, but this can be scaled to any number of browsers, browser versions, OSs, and screen resolutions.
The test code is saved in lambdatest_crossbrowser.py within a virtual environment.
Here’s the script:
The following lines import the webdriver and ActionChains classes from the Selenium library.
This imports the time module, an inbuilt Python package we use to pause program execution for a specified number of seconds.
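Together, those imports might look like this (a sketch; it assumes the `selenium` package is installed in the virtual environment):

```python
# Selenium's WebDriver entry point and the ActionChains class for
# mouse/keyboard action sequences, plus the standard-library time module.
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
import time
```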
The username, access token, and gridUrl can be found on LambdaTest's automation dashboard; these are your LambdaTest user-authentication configurations.
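A sketch of those settings; the credential values below are placeholders, and the grid hostname is the commonly documented LambdaTest hub endpoint:

```python
# Placeholder credentials -- substitute the real values from your
# LambdaTest automation dashboard before running.
username = "YOUR_LAMBDATEST_USERNAME"
accessToken = "YOUR_LAMBDATEST_ACCESS_TOKEN"
gridUrl = "hub.lambdatest.com/wd/hub"
```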
Here we make a list of the different browsers we want to cross-test our application on. We’ve kept things simple, but you can have as many combos of browsers and versions, operating systems, and resolutions as you wish. You may even populate these dynamically.
The key-value pairs "browser" and "version" specify the target browser and the specific version in which you want to test your app. We’ve kept Chrome v71.0 and Firefox v71.0 as the two browsers. Add as many dict items as you need.
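A minimal sketch of such a structure, using the "browser" and "version" key names described above:

```python
# A list of dicts, one per target browser/version combination.
browsers = [
    {"browser": "Chrome", "version": "71.0"},
    {"browser": "Firefox", "version": "71.0"},
]
```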
Next, we loop over the browsers list to cross-test features on Chrome and Firefox.
for cap in browsers:
The desired_cap is a dict where we specify the test-environment configuration for the LambdaTest automation-testing Selenium grid. "platform" specifies the target operating system; "browserName" and "version" declare the target browser and its version; "resolution" describes the target machine's resolution; and "name" and "build" are for cloud-test nomenclature, i.e., for your reference.
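A sketch of one such capabilities dict, built from a single `cap` entry of the loop; the platform, resolution, name, and build values here are illustrative assumptions:

```python
cap = {"browser": "Chrome", "version": "71.0"}  # one item from the browsers list

desired_cap = {
    "platform": "Windows 10",          # target operating system (assumed)
    "browserName": cap["browser"],     # target browser
    "version": cap["version"],         # target browser version
    "resolution": "1024x768",          # target machine resolution (assumed)
    "name": "jQueryUI drag-and-drop",  # test name, shown on the dashboard
    "build": "cross-browser-demo",     # build label, shown on the dashboard
}
```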
The url will be our command_executor, which specifies the location of our remote server.
url = "https://"+username+":"+accessToken+"@"+gridUrl
To create a browser driver instance, we write:
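A sketch of that call, assuming the `url` and `desired_cap` values built in the steps above (Selenium 3 style `desired_capabilities` keyword):

```python
# Connect to the remote LambdaTest Selenium grid with the chosen capabilities.
driver = webdriver.Remote(command_executor=url, desired_capabilities=desired_cap)
```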
To maximize the browser window, we execute:
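That call is a one-liner on the driver instance:

```python
# Maximize the remote browser window before interacting with the page.
driver.maximize_window()
```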
To fetch the URL in the remote browser for testing the drag feature, we call the get('URL') API of the WebDriver.
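For example (the exact demo-page path is an assumption; jqueryui.com hosts a draggable demo):

```python
# Load the jQuery UI draggable demo page in the remote browser.
driver.get("https://jqueryui.com/draggable/")
```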
We observed that the website implements its drag-and-drop functionality within iframes, so we need to explicitly switch to frame 0 to interact with the HTML elements inside that iframe.
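The switch itself looks like:

```python
# The demo widget lives inside an iframe; switch into the first (index 0) frame.
driver.switch_to.frame(0)
```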
The following code locates the draggable HTML element:
source1 = driver.find_element_by_id('draggable')
The below line of code creates an alias for the ActionChains class. The ActionChains class in Selenium enables mouse- and keyboard-style interactivity with browser elements.
action = ActionChains(driver)
The below lines create a chain of actions: click on the draggable element, hold it for a while, move it to a different location, and finally release it. Then it prints a message, just for reference. In an actual implementation, you could capture a screenshot or call an API to trigger an email or some other notification informing the concerned authority of the successful or failed operation.
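A sketch of that chain (the move offset is an arbitrary illustrative value):

```python
# Click-and-hold the draggable element, move it by an offset, release it,
# then perform() executes the whole queued sequence in one go.
action.click_and_hold(source1).move_by_offset(100, 50).release().perform()
print("Dragged the element successfully")
```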
To pause program execution for a while, call the sleep method of the time package. This helps you visually observe things if you test on a local system.
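For example (the exact duration is arbitrary; it only needs to be long enough to watch the drag happen on a local run):

```python
import time

start = time.time()
time.sleep(1)  # pause for one second so the action is visually observable
elapsed = time.time() - start
```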
Next, we fetch another URL in the browser to test the drag as well as the drop feature:
As earlier, this code switches to frame 0 of the iframe where we need to interact with the draggable and droppable elements:
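Both steps together might look like this (the droppable demo URL is an assumption):

```python
# Load the jQuery UI droppable demo and switch into its iframe.
driver.get("https://jqueryui.com/droppable/")
driver.switch_to.frame(0)
```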
The following lines get the draggable HTML element and the droppable HTML element. We will drag the source1 (draggable) element onto the source2 (droppable) element.
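Locating both elements, using the same Selenium 3 era `find_element_by_id` API as the earlier snippet:

```python
source1 = driver.find_element_by_id("draggable")  # element to be dragged
source2 = driver.find_element_by_id("droppable")  # drop target
```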
We create a chain of actions like before to drag, and this time we drop onto a particular element.
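ActionChains offers a convenience method for exactly this source-to-target case; a sketch:

```python
# Drag source1 onto source2 and execute the queued action.
action = ActionChains(driver)
action.drag_and_drop(source1, source2).perform()
```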
This time, when we drop the source element onto the target location, its text changes to "Dropped!". We cross-verify this and print a message naming the browser under test.
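A sketch of that verification; the browser-name lookup assumes the `desired_cap` dict from earlier:

```python
# The droppable widget's text changes to "Dropped!" on success.
if source2.text == "Dropped!":
    print("Drag and drop succeeded on " + desired_cap["browserName"])
else:
    print("Drag and drop failed on " + desired_cap["browserName"])
```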
To quit the launched browser instance:
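The corresponding call ends the remote session:

```python
# End the session and release the remote browser.
driver.quit()
```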
To execute this, run the following command in the terminal:
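Assuming the script is saved as lambdatest_crossbrowser.py, per the earlier note:

```shell
python lambdatest_crossbrowser.py
```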
On successful execution, you would see the following screen on the LambdaTest dashboard:
You would see the following screen on the terminal:
As you can see, the test ran first on Chrome and then on Firefox. You may have noticed that we kept the browser versions the same, but you can use any supported combo of browser, version, OS, and resolution. LambdaTest provides a capabilities generator to help you specify the desired browser and OS capabilities with ease.
As testers, you shoulder the organizational responsibility of ensuring your applications work smoothly across devices and browsers. Cross-browser testing skill is one of the most effective weapons in a tester's arsenal. Overcome the strategic and infrastructural challenges associated with cross-browser testing, use automation to strengthen your testing muscles, and be the supporting pillar behind your application's success.
Leave a comment below if you are curious about certain aspects of cross-browser testing.
Opinions expressed by DZone contributors are their own.