Skyrocket Your Cross-Browser Testing With Minimal Effort
In order to get the best out of cross-browser testing, it is essential that you follow these rules.
One thing that is evident among developers is their preference for particular IDEs, operating systems, browsers, and so on. Web developers, in particular, tend to have an affinity for certain browsers, and because of that preference they often cross-browser test their source code only on 'browsers of their choice.' After such testing, the functionality programmed by the web developer may work fine on those specific browsers, but the situation in the real world is completely different. The users of your web app or website may come from different parts of the world and may prefer different browsers (or browser versions); some customers may even use completely outdated browsers that hold only a minuscule share of the browser market.
How can you and your team deal with such a situation? It is not feasible to test the functionality on every existing browser running on every OS, but it is also not advisable to verify the code on an arbitrarily chosen handful of browsers.
Based on the target market, your development and marketing teams will have a breakdown of the browsers being used in that market, along with details about device types and operating systems. Hence, the ideal approach is to test on the popular browsers available in the market, along with the browsers widely used by the audience in your target market. The first scenario, where you verify the functionality on all available browsers (regardless of their usage), would require 'infinite resources,' whereas the second approach is more calculated, since the cross-browser compatibility check is performed on the browsers used by the majority of your user base. Cross-browser testing should be an iterative process, i.e. testing on the shortlisted browsers, operating systems, devices, etc. should be performed before changes are pushed to the production server, and repeated before every new release.
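The shortlisting described above can be sketched as a small helper that keeps only the browser/OS combinations meeting a usage threshold. The market-share figures and the 2% cutoff below are illustrative assumptions, not data from this article:

```python
# Hypothetical market-share figures (percent of traffic); in practice
# these come from your own analytics or a service such as StatCounter.
MARKET_SHARE = {
    ("Chrome", "Windows"): 42.0,
    ("Safari", "iOS"): 18.5,
    ("Samsung Internet", "Android"): 9.0,
    ("Firefox", "Windows"): 6.5,
    ("IE 11", "Windows"): 2.0,
    ("Opera", "Android"): 1.2,
}

def build_test_matrix(share, threshold=2.0):
    """Shortlist browser/OS combinations whose share meets the threshold,
    most popular first, so each release is re-tested on the same set."""
    shortlisted = [combo for combo, pct in share.items() if pct >= threshold]
    return sorted(shortlisted, key=lambda combo: -share[combo])

print(build_test_matrix(MARKET_SHARE))
```

Re-running this against fresh analytics before each release keeps the shortlist aligned with the iterative process described above.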
Cross-Browser Compatibility Testing — No Replacement for Manual Testing
It is a known fact that before any developer pushes code (either to the development environment or the staging environment, before migrating to production), they perform unit testing on the changes they have made. For unit testing, developers have a variety of frameworks to choose from; JUnit and Jasmine are among the most popular. Other types of tests performed at the module/package level are functional tests and visual regression tests. Cucumber is a popular choice for Behavior-Driven Development (BDD) or functional testing, and a visual screenshot comparison tool named Wraith is often preferred for visual regression testing.
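As a minimal illustration of that unit-testing step, here is the same idea the article attributes to JUnit and Jasmine, sketched with Python's built-in unittest; the price-formatting helper under test is hypothetical:

```python
import unittest

def format_price(cents):
    """Hypothetical helper under test: render a cent amount as dollars."""
    return "${:,.2f}".format(cents / 100)

class FormatPriceTest(unittest.TestCase):
    def test_rounds_to_two_decimals(self):
        self.assertEqual(format_price(199), "$1.99")

    def test_groups_thousands(self):
        self.assertEqual(format_price(123456789), "$1,234,567.89")

# Run the suite programmatically so the snippet works outside a test runner.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(FormatPriceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Tests like these catch browser-agnostic logic bugs long before any cross-browser pass begins.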
Manual cross-browser testing cannot cover all scenarios, and even where it can, you should not tie up human resources on tests that require little intelligence, since those tests are mostly repetitive in nature. Many organizations following continuous integration and continuous delivery now implement automation testing to validate cross-browser compatibility scenarios that are mostly linear in approach, with the test results logged in a report. However, many user scenarios are non-linear or unpredictable as far as web apps and websites are concerned; an automation test might not be able to figure out whether an image is displayed properly on a particular category of Android device. Hence, automation testing (or testing via AI bots) would be adopted to a far larger extent if it could replicate such user scenarios (or ad-hoc scenarios).
Though there are different types of testing, such as accessibility testing, usability testing, automation testing, and regression testing, none of them can replace the others. The same applies to cross-browser testing, which is ideally used for 'polishing' the product so that it is usable on different types of browsers running on operating systems installed across various devices.
Primary Objectives of Cross-Browser Testing
When you think about creating a cross-browser testing matrix, the main intent of performing different types of tests is to unearth bugs, get them fixed, and make the product better across as many devices, browsers, and browser versions as possible. Though everyone on your team might vouch for this intent, the purpose of cross-browser testing looks different through the lens of a developer than through that of a tester. Developers test their module to unearth and fix bugs, but their verification is limited to that module, whereas testers verify the product from an end user's point of view. Testers are normally elated when they come across a bug, since it validates their efforts, but it is important to understand whether the test was performed with the 'target audience' in mind or only to increase the 'overall bug count.' It therefore becomes important to outline the primary objective of cross-browser testing and to highlight both the corner scenarios and the scenarios that can safely be ignored.
Based on the target audience of your web app, you can devise a cross-browser testing strategy that covers the major scenarios. The important question to ask is: should your team work on making the product better by finding bugs through test cases performed on the popular/latest browsers? When a bug is found, you have three options: ignore it; fix it and submit the code changes after re-testing on the most preferred browsers; or fix it and submit the changes after re-testing on all available browsers (including browsers that are unpopular or rarely used in your target market). There are pros and cons to each approach. You cannot simply ignore a bug, but you do need to prioritize fixing it. If the development team fixes and tests bugs on all possible browsers, there may be a delay in shipping the product; if they fix bugs only on certain browsers, there is a possibility of shipping a defective product. This is a complex trade-off, and it needs to be resolved before cross-browser testing is performed.
How do you plan your cross-browser testing activity, and how can you expedite it? Below are some pointers that can be used to plan and execute cross-browser testing.
Identification of Browser, OS, and Device Combinations
The first and foremost step of development is identifying the requirements of the customer. Along with the requirements, you should also have a deep understanding of the customer and of market segmentation. Market segmentation includes understanding consumer traits, the browsers preferred in that particular market, the category of devices (phones/tablets) being used by consumers, etc.
Once you have identified these requirements, it becomes important to test your web app/website thoroughly against those browser, device, and operating system configurations. The companies that develop browsers (Chrome, Firefox, Opera, etc.) regularly push fixes to them; hence, your development and test teams need to take a pragmatic approach and prioritize the browser versions on which testing should be performed after major changes land. Once the website goes live, you can get more details about user preferences using Google Analytics or another analytics engine integrated into your website. You can also run through a cross-browser testing checklist for your website before it goes live.
Cross-browser testing your product across browser and device variations is analogous to planning and executing an attack on a battlefield. Just as on a battlefield, where you have to plan the attack and use your arms and ammunition wisely, it is imperative to keep your priorities in mind with respect to the resources spent on cross-browser compatible web development while testing the product for cross-browser compatibility. During the initial phase, you should spend a minimal amount of time testing the product broadly. The next level of attack, the 'raid,' is where the assault becomes fiercer, and it is also the next level of your testing activity: testing becomes more intense, and the test team has to spend much more time unearthing bugs that are relatively hard to find. After these two levels of testing, the test team takes the final march to defeat the remaining enemies, i.e. the bugs in the product. These bugs are mostly the result of corner cases not being handled by developers, or of tests being performed on browsers used by a much smaller user base.
This grueling cross-browser testing process ensures that the product is relatively free of bugs and more robust. Now that you have tested your product across the maximum number of browsers, the next important step is to 'narrow the context' of your tests and verify the functionality of your product on the browsers preferred by your target audience/market. This is a crucial step in the 'sanity testing' of your product, after which you can confidently claim that your product works fine for X percent of the customer base.
Analytics and Insights Into the Customer’s Preferences
In the section titled 'Identification of Browser, OS, and Device Combinations,' we looked into the importance of understanding customer insights so that the development and test teams can spend their effort solving issues actually faced by your customers. There is no sense in solving a product issue for a browser that is never (or rarely) used by your target audience.
Google Analytics, Kissmetrics, MixPanel, etc. are some of the popular web analytics tools that can be used to learn the nitty-gritty about your customers. Your team can gather valuable insights such as the browser used to visit your website, the operating system being used, the location of the customer, and so on. Though you will get a lot of information from analytics, you should make use of the data that matters most to your team. Your testing team can also enlist web developers to help prioritize that information. In many cases, the operating system on which the browser runs may not matter much, so it is recommended that their knowledge be used to focus on the OS + browser + device combinations that are important for your product.
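As a rough sketch of how such an analytics export can be boiled down to the OS + browser + device combinations that matter, consider the following; the visit log is invented for illustration:

```python
from collections import Counter

# Hypothetical visit log; in practice this would come from an export of
# Google Analytics, Kissmetrics, MixPanel, or a similar tool.
visits = [
    {"browser": "Chrome", "os": "Android", "device": "mobile"},
    {"browser": "Chrome", "os": "Windows", "device": "desktop"},
    {"browser": "Samsung Internet", "os": "Android", "device": "mobile"},
    {"browser": "Chrome", "os": "Android", "device": "mobile"},
]

def combo_share(rows):
    """Percentage of traffic per (browser, OS, device) combination."""
    counts = Counter((r["browser"], r["os"], r["device"]) for r in rows)
    total = sum(counts.values())
    return {combo: 100.0 * n / total for combo, n in counts.items()}

print(combo_share(visits))
```

Sorting the result by share gives the test team a ready-made priority list of combinations to cover first.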
Test on Browsers That Matter the Most
Though the popular browsers (Google Chrome, Mozilla Firefox, Safari, Opera, Yandex, Edge, and IE10+) are available for different operating systems and devices, it is rare to come across 'browser issues' that depend largely on the operating system. Unless the browser's code/design goes through a major overhaul, there are only minor differences between adjacent browser versions, e.g. Firefox 45 and Firefox 46. While gathering browser statistics to decide on your cross-browser testing needs, it makes sense to combine all the desktop versions of each browser, except Internet Explorer (IE), since old versions of IE lack support for the latest web technologies, like HTML5.
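Combining desktop versions while keeping IE versions separate, as suggested above, might look like the following sketch; the trailing-number version-parsing rule is a simplifying assumption:

```python
from collections import Counter

def normalize(browser_stat):
    """Collapse minor versions ('Firefox 46' -> 'Firefox'), but keep IE
    versions separate, since old IE releases differ sharply in feature
    support for modern web technologies like HTML5."""
    name, _, version = browser_stat.rpartition(" ")
    if not name or not version.replace(".", "").isdigit():
        return browser_stat           # no version suffix to strip
    if name in ("IE", "Internet Explorer"):
        return browser_stat           # keep e.g. 'IE 9' distinct from 'IE 11'
    return name

raw = ["Firefox 45", "Firefox 46", "Chrome 74", "IE 9", "IE 11", "Chrome 75"]
print(Counter(normalize(b) for b in raw))
```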
The argument discussed above also holds true for mobile devices and tablets. Mobile devices have been on the rise and can no longer be ignored; in fact, many organizations now build web products with a mobile-first agenda. Along with the popular browsers, many devices ship with in-app browsers, like Samsung Internet on Samsung devices or the Mi Browser on Mi devices. To gain market share, device manufacturers make their in-house browsers the default, and these in-app browsers can become more important from a cross-browser testing perspective than other popular browsers, since they reach the entire customer base of that device manufacturer.
Hence, based on the market being targeted and the priority of the medium (mobile, tablet, desktop, etc.), you should conduct organized, prioritized testing on the browsers that are popular in your target market. StatCounter GlobalStats can give you detailed information about browser market share from a region and device perspective. For example, below is a snapshot of the mobile browser market share in Asia.
As seen in the figure above, the market share of the Samsung browser is higher than that of other browsers on the Android platform; hence, it becomes critical that web developers and test teams spend the right amount of development and testing effort on the browsers that matter most for the product.
Along with browser usage statistics, you should also have a detailed view of the test priority of each browser. This helps in prioritizing cross-browser testing and development effort. For example, if only a small percentage of your website visitors use Opera, testing on Opera should be a lower priority than testing on the browsers used by the majority of your users. However, it can still be worth including those low-priority browsers in your testing checklist, as they may bring you high-value leads.
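The priority split described above can be expressed as a simple tiering rule; the thresholds below are illustrative assumptions to be tuned to your own traffic:

```python
def priority(share_pct):
    """Map a browser's traffic share to a test priority tier.
    Thresholds are illustrative; tune them to your own analytics."""
    if share_pct >= 10.0:
        return "P1"   # test on every commit
    if share_pct >= 2.0:
        return "P2"   # test before each release
    return "P3"       # periodic sanity testing only

shares = {"Chrome": 48.0, "Samsung Internet": 11.0, "Firefox": 4.0, "Opera": 1.1}
print({browser: priority(pct) for browser, pct in shares.items()})
```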
Now that we have touched upon some of the basic elements to make your cross-browser testing efficient, let’s look at the points in more detail.
1. Focus on Bugs That Are Not Browser-Dependent
Before the source code is pushed to the development/QA/production environment, you should perform a regression test on the browser of your choice, or the browser you use most frequently. The primary advantage of this approach is that the testing is performed on the latest browser version (since 'auto-update' is enabled by default in all major browsers), and the focus stays on unearthing and fixing bugs that are browser-agnostic. There may be cases where your website does not render correctly on certain browsers and devices, yet the issue is not related to the viewport or screen size of the device; it could be a browser-agnostic bug, and spending too much time testing the functionality on a particular device/browser category could hamper the speed of product development.
Some pointers to figure out browser-agnostic bugs are:
- Use the browser's developer tools to view the website at different viewports
- Try testing interactive use cases on the website
- Check whether the website works fine when Internet speed is throttled. You could even use the admin login of your ISP (Internet Service Provider) router to limit the Internet speed, or use online tools (like WebPageTest) to test this scenario.
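For the throttling check in the last pointer, a back-of-the-envelope estimate can tell you whether a page even has a chance under a slow connection before you reach for a real tool; the model and profile numbers below are simplifying assumptions:

```python
def load_time_seconds(page_bytes, bandwidth_kbps, latency_ms, round_trips=10):
    """Rough first-load estimate under a throttled connection: transfer
    time at the given bandwidth plus latency per round trip. This is a
    back-of-the-envelope model, not a substitute for a real tool such as
    WebPageTest or browser devtools throttling."""
    transfer = (page_bytes * 8) / (bandwidth_kbps * 1000)   # bits / bits-per-second
    waiting = round_trips * latency_ms / 1000
    return transfer + waiting

# A 2 MB page over a 'Slow 3G'-like profile (400 kbps, 400 ms latency)
print(round(load_time_seconds(2_000_000, 400, 400), 1))
```

If the estimate is already tens of seconds, the page weight needs attention before any browser-by-browser throttling tests are worthwhile.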
2. Achieve Maximum Testing Throughput by Focusing on the RIGHT Browsers
The approach mentioned in step 1 is useful for unearthing browser-agnostic bugs, but the cross-browser testing would still be performed on one of the popular browsers installed on the developer's machine. There is always a possibility of a regression defect, i.e. a fix made for a particular browser might break functionality on some other browser. In fact, fixing an issue that occurred in Chrome might have a side effect in Internet Explorer or another browser. Equally, a fix on one browser could fix the issue on all the other browsers.
When the test team reports a bug, they should also mention the browser on which the issue occurred, along with the scenario. Based on the bug pattern, the development team can come up with a table that ranks the browsers in a Risk vs. Returns matrix, where Risk indicates the impact a bug fix verified on one browser may have on the other browsers being tested, and Returns indicates the benefit of a bug fix across different browsers. Browsers can be categorized into levels (risk-1, risk-2, risk-3, and so on), with risk severity rising at each level.
You have the option to test the browsers in ascending order (risk-1 -> risk-2 -> risk-3 -> ... -> risk-n) or descending order (risk-n -> ... -> risk-3 -> risk-2 -> risk-1). In the first approach, testing is performed on popular browsers first; in the second, on less popular browsers first. The second approach has better throughput, both in terms of development and testing, since bugs that are encountered (and fixed) on less popular browsers often result in fixing similar issues on popular browsers as well. CanIUse is one popular website that can help you identify the browsers on which your website is likely to encounter the maximum number of issues with respect to the features used by your web elements. This finding should be used in conjunction with the data gathered in the section titled 'Identification of Browser, OS, and Device Combinations.'
As far as cross-browser testing is concerned, verifying the functionality on different browsers and different screen sizes (and viewports) is the ideal option. By doing so, you maximize your testing effort, since your website goes through a grueling phase of cross-browser testing on combinations of different browsers, devices, and operating systems.
3. Last-Mile Testing
By following the steps mentioned in (1) and (2), the majority of bugs will already have been unearthed and fixed. It remains mandatory that a sanity test be performed on the shortlisted set of browsers after each development and test cycle. Many mobile devices have native in-app browsers; some browsers follow a freemium business model (i.e. some browser features are free, some are premium); and some browsers are not updated regularly. Such browsers are low priority, since only a minuscule percentage of users might be using them; hence, you should test on them only after you are done testing on all the other browsers.
Since the gains from testing on these low-priority browsers are marginal, sanity testing on them should be done only if your test resources have time.
4. Test Before You Go Live
It is always recommended that extensive cross-browser testing be performed before your web app goes live in the production environment. With the trend of shift-left testing, it becomes imperative to test early and test often. You can perform cross-browser testing of your locally hosted website or web pages using an SSH (Secure Shell) tunnel connection hosted through LambdaTest cloud servers. This is essential for providing a quality user experience, since the first experience makes a lot of difference.
5. Take Care of Accessibility
To take care of the accessibility factor, you should answer one important question: can everyone use your website? Can a person with a hearing impairment, color blindness, motor impairment, or some other disability use your product? It is indispensable to include accessibility checks in your cross-browser testing.
In order to get the best out of cross-browser testing, it is essential that you:
- Devise a cross-browser testing strategy
- Create a cross-browser compatibility matrix
- Prioritize browsers so that testing is performed on browsers that yield the best results
- Keep in mind accessibility as well as usability
- Keep in mind browser-dependent bugs
- Perform local testing of your web pages before you push them on the Internet
- Use a cloud-based cross-browser testing tool like LambdaTest to achieve maximum results with minimum effort. These tools provide a hassle-free VM setup through their library of legacy and latest browsers running across multiple Android, iOS, macOS, and Windows devices.
Published at DZone with permission of Harshit Paul, DZone MVB. See the original article here.