A/B testing is a term used in business intelligence and marketing for a randomized experiment carried out with two variants, A and B. A is the control, usually the current version of the system, and B is the treatment, the proposed modification. The test rests on a hypothesis that is examined iteratively until the results are positive. A change proposed on the basis of a hypothesis might not generate value or returns in terms of profit, but this does not necessarily mean the hypothesis is wrong. For instance, if a certain section of an app fails to convert visitors into customers, there can be several reasons for it.
While A/B testing was earlier used only for websites, Google has introduced it for Android apps too. Developers can now include alternative texts and graphics for the app listing. Google presents these to a subset of users and reports the performance of each alternative to the developer. Based on the report, the developer can choose the alternative that performed best. This helps the developer fine-tune the app listing and increase the number of downloads.
The process of choosing an alternative starts with an analysis of the existing version. Based on that analysis, a hypothesis is formed about why the app is under-performing. The hypothesis itself rests on data pointing to the probable drawbacks of A, the current version of the system. An effectively unlimited number of changes and modifications can be suggested based on the hypothesis. Whichever iteration of B, the proposed modification, proves most successful is accepted and implemented.
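The selection step described above can be sketched in a few lines of Python. The variant names and figures below are purely illustrative assumptions, not data from any real experiment:

```python
# Minimal sketch: compare conversion rates of candidate listings
# and pick the winner. All numbers here are hypothetical.

def conversion_rate(visitors, conversions):
    """Fraction of visitors who converted (e.g. installed the app)."""
    return conversions / visitors if visitors else 0.0

# Hypothetical results for the control (A) and two candidate listings.
results = {
    "A (current listing)": (10_000, 300),
    "B1 (new icon)": (10_000, 345),
    "B2 (new screenshots)": (10_000, 330),
}

rates = {name: conversion_rate(v, c) for name, (v, c) in results.items()}
winner = max(rates, key=rates.get)

for name, rate in rates.items():
    print(f"{name}: {rate:.2%}")
print("Winner:", winner)
```

In practice the winner would only be accepted after enough traffic has accumulated for the difference to be trustworthy, as discussed later in this article.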
How did the term A/B testing come into being?
A/B testing was first implemented by Google data scientists when they analyzed the optimum number of results to display on the search result page. Marketers had carried out this kind of test before, but it became popular only when the term A/B testing was coined. A/B testing is now widely used in UI and UX development to choose the design that ensures the best returns.
Also called split testing, A/B testing in the context of Android apps makes use of three different versions of the graphics and text used for the app listing. Users are presented with these versions of the same app, and the one that shows the better conversion rate wins and becomes the default. However, this process is not completed in just one step. A similar step is conducted several times; these repetitions are called iterations, and the iteration that gives the best result is always chosen. As there are two versions to compare, A and B, this type of testing came to be known as A/B testing.
Google has implemented A/B testing for Android apps
A/B testing for Android apps was recently announced at Google I/O. One issue many Android apps face is a low conversion rate: users visit the app's page on the app store but bounce back. A/B testing can help in assessing the probable cause of a high bounce rate and in optimizing the app listing to ensure better conversions and sales. This form of testing has gained much prominence because it is scientific, and even a marginal drop in bounce rate can cause a significant rise in sales.
The test is all about iterations, and you should not hesitate to follow up the same hypothesis with as many iterations as possible. Repeating the process with modifications such as changing the color of the icon, the font, or the color of the text can help. However, you must be extra careful in establishing the hypothesis. A hypothesis must be backed by ample valid data. If the data is accurate and its analysis has been performed by both humans and programs, the hypothesis will most likely be correct.
A hypothesis should not be given up until you have tried different iterations of the modification, B, against the current version of the system, A. The test is akin to trial and error, except that the trials are based on a hypothesis which, in turn, is backed by valid data and facts.
How effective is A/B testing?
When performed with a clear hypothesis and valid iterations of modifications, A/B testing is highly effective. There are times when a modification is tested and discarded because it did not produce results; had the same modification been implemented in a different way, the results could have been positive.
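Whether a modification "produced results" can be checked statistically rather than by eyeballing the counts. A minimal sketch, assuming a standard two-proportion z-test on hypothetical conversion figures (the numbers are illustrative and not taken from any real pilot):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    Returns (z, p_value). A small p-value suggests the observed
    difference is unlikely to be due to chance alone.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical figures: A converted 300 of 10,000 visitors, B 390 of 10,000.
z, p = two_proportion_z_test(300, 10_000, 390, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these illustrative numbers the p-value comes out well below 0.05, so the lift from B would be considered statistically significant; with smaller samples the same rates might not be.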
Reasons that support A/B testing
- A/B testing lets you tap the existing traffic to the app to maximum advantage, since the winning variant converts that traffic at a better rate.
- It is highly cost-effective, as the resources spent on increasing conversions through A/B testing are far less than the money pumped into generating paid traffic.
- The ROI generated by A/B testing can be massive as small changes in the app listing can cause a heavy surge in conversion rates and thus sales and revenues.
- As the test is based on a data-backed hypothesis, the chance of mistake is minimized.
- Different versions can be presented to the users at the same time which makes the comparison easy and more accurate.
A/B testing, which was earlier used mostly on e-commerce portals and websites, has now been introduced for Android apps as well. This form of testing has proved highly effective for developers. In the pilot programs conducted by Google, developers were able to achieve a double-digit increase in revenue by implementing A/B testing on Android apps. As of now, this type of testing is available only for the graphics and text used in the app listing, but it could be extended to other sections of the Android app as well.