4 Ways to Combat Android Fragmentation in QA
With more than 24,000 distinct Android smartphones on the market, how can developers test to ensure quality on every user's device?
There are more than 24,000 different Android smartphones on the market today. For developers, this is a major headache.
iPads and iPhones come in various sizes and models, but they have a single manufacturer. With Android, on the other hand, each manufacturer tweaks the OS slightly to fit the offerings that differentiate its brand, such as auditory or motion sensors, or a curved screen.
Android fragmentation poses a challenge for developers and for QA teams: How can they deliver quality on all of their users’ devices?
Android was built to encourage device diversity and innovation. Google likes to remind developers that without its widely available, customizable OS, manufacturers would each have to make their own OS, slowing hardware innovation and increasing costs in the process.
Android Device Coverage: The Conventional Wisdom
Android smartphones represented 85% of global mobile phone sales in Q3 of 2015 and 88% of sales in Q3 of 2016. With an ever-growing market share, Android entices more and more developers to expand their apps beyond iOS. With upwards of 24,000 devices, Android developers are forced to make tough decisions about testing coverage. Clearly, not every device and OS version can be tested, and even the devices that are tested can't be tested for every use case. It's just too time-consuming.
Teams will often test the devices owned by the most active 80% or 90% of their users; that way, they get peace of mind without testing everything on the market. If your team has struggled to identify which devices those should be, or has wondered whether a higher or lower percentage is more fitting, I'm going to walk through four points to consider for any device coverage strategy. But first, a word of caution…
The Problem With Ignoring Fragmentation
Developers can be tempted to ignore Android fragmentation. QA consultant Joe Schultz describes working with a large bank that had chosen to test on just two Android devices when rolling out an update to its mobile app: one Samsung model and one Motorola. That sounds a little crazy and a little risky, but even large developers often test on fewer than 10 devices.
The bank was flooded with poor reviews that dragged down its original app rating. After expanding test coverage to 200 hardware and OS combinations, the bank was able to identify improvements, roll them out in a subsequent update, and eventually restore its app store rating to its original level (and halt the complaints received via social media). Teams regularly have to decide how much flak they're willing to risk in case there are major issues on the devices they've left untested.
Combating Android Fragmentation During QA
Because the stakes are so high, teams have to be incredibly strategic when choosing which device and device-OS version combinations they will cover. You have to cater to your own user base, keep an eye on potential growth, and balance risk tolerance with deadlines. Here are four key ways to confidently narrow your Android app testing plan.
Analyzing Your Own Data for OS Version, Device Usage, and Phone Versus Tablet
When strategizing device coverage for an existing application, nothing is more important than your own usage analytics. Here’s the data you should be collecting and analyzing:
Operating system: Do your users typically upgrade to the newest version of Android, or do you need to support past versions? If so, how far back?
Devices and brands: Which devices are commonly used? Are your users on the most current models? If there are multiple versions of one device, how similar or different are those versions, and can they be grouped together for testing?
Phone versus tablet: If your app has been designed for use on a phone and on a tablet, what is the breakdown of use? How much of the device coverage should be allocated to phones versus tablets?
When you know exactly how many users you put at risk by excluding KitKat while covering Lollipop and Marshmallow, you can feel more confident accepting that risk and narrowing your testing combinations (thus reducing overwhelm).
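The analytics-driven selection described above can be sketched as a simple greedy calculation. This is a minimal illustration, not a real tool: the device/OS usage shares below are entirely made-up numbers standing in for whatever your analytics platform reports, and `pick_coverage` is a hypothetical helper name.

```python
# Hypothetical usage shares pulled from an analytics platform:
# (device, Android version) -> fraction of active users. Illustrative numbers only.
usage_share = {
    ("Galaxy S7", "Marshmallow"): 0.22,
    ("Galaxy S6", "Lollipop"):    0.18,
    ("Moto G4",   "Marshmallow"): 0.14,
    ("Nexus 5X",  "Nougat"):      0.12,
    ("Galaxy S5", "KitKat"):      0.09,
    ("LG G5",     "Marshmallow"): 0.08,
    # ... long tail of other combinations omitted
}

def pick_coverage(shares, target=0.80):
    """Greedily pick the most-used device/OS combinations until the
    cumulative user share reaches the target (e.g., 80%)."""
    chosen, covered = [], 0.0
    for combo, share in sorted(shares.items(), key=lambda kv: -kv[1]):
        if covered >= target:
            break
        chosen.append(combo)
        covered += share
    return chosen, covered

combos, covered = pick_coverage(usage_share, target=0.80)
```

With data like this in hand, the trade-off becomes explicit: you can see exactly which combinations fall outside your chosen threshold and how many users that leaves uncovered.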
Making Choices Based on Geographical Distribution and Market Data
Teams will also find it useful to analyze general market data for insight into the growth of their product and the type of user who might adopt it in the near future.
This really comes into play with Android OS versions and device manufacturers. Your current users may differ slightly from the broader market, so you don't want to exclude newly popular phones or older OSes that your future users might rely on.
Depending on the markets they serve, some developers may need to test on low-end devices that are not officially compatible with Android but run it anyway. This could require inside knowledge about devices whose popularity isn't well tracked.
Covering Manufacturers of Specific Hardware
For some teams, testing the right hardware isn't just about the brand of the phone, but the components inside it. Gaming company Pocket Gems makes sure to cover high- and low-resolution devices and to test on devices using each of the five major graphics processing units (GPUs). So, depending on the nature of the app, device coverage may need to account for underlying capabilities rather than just brands and their popular offerings.
Using Combinatorial Testing and Sampling to Narrow Test Cases
With a better understanding of the device, OS version, and hardware needs of their user base and overall market, teams still have to reach a level of confidence in how they combine these requirements with test cases. Left unchecked, the combinations could balloon into hundreds of thousands of unique test cases. That would be bad math. Good math, namely analysis, sampling, and combinatorial calculations, allows for the reduction of test cases.
Luckily, there are combinatorial tools that can do the math for us. QA engineers can plug in parameters like device type, communication network, software, and IoT device connections to produce statistically derived combinations. Randomization and sampling can also lessen the QA load when it comes to combining devices, OSes, and test cases. Once combinations are identified, automating repeatable scripts and using device emulators can make coverage more attainable, though manual testing is still required, as those strategies can't fully mimic real user behavior.
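One common combinatorial technique those tools implement is pairwise (all-pairs) testing: instead of running every possible configuration, you keep just enough configurations that every pair of parameter values appears together at least once. The sketch below is an assumed, simplified greedy version of that idea with made-up parameter values, not a substitute for a production pairwise tool.

```python
from itertools import combinations, product

# Hypothetical test dimensions; a real matrix would come from your own analytics.
parameters = {
    "device":  ["Galaxy S7", "Moto G4", "Nexus 5X"],
    "os":      ["Lollipop", "Marshmallow", "Nougat"],
    "network": ["WiFi", "4G", "3G"],
}

def pairwise_suite(params):
    """Greedy all-pairs reduction: repeatedly pick the configuration that
    covers the most not-yet-seen pairs of parameter values."""
    names = list(params)

    def pairs_of(config):
        # Every pair of (parameter, value) assignments this config exercises.
        return {((a, config[a]), (b, config[b]))
                for a, b in combinations(names, 2)}

    candidates = [dict(zip(names, vals)) for vals in product(*params.values())]
    uncovered = set().union(*(pairs_of(c) for c in candidates))

    suite = []
    while uncovered:
        best = max(candidates, key=lambda c: len(pairs_of(c) & uncovered))
        suite.append(best)
        uncovered -= pairs_of(best)
    return suite

suite = pairwise_suite(parameters)
# Exhaustive testing here would mean 3 * 3 * 3 = 27 configurations;
# the pairwise suite covers every value pair with far fewer.
```

The savings grow quickly: with more parameters and more values per parameter, the exhaustive product explodes while the pairwise suite stays comparatively small, which is exactly the reduction the combinatorial tools mentioned above automate.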
Every QA team needs to cover the most critical user journeys on the most important devices. Fragmented platforms like Android don't make this easy, but you can still deliver an amazing customer experience. All it takes is a little strategy.
Published at DZone with permission of Dayana Stockdale, DZone MVB. See the original article here.