A/B Testing: Reporting
A few months ago I wrote about my initial experiences with A/B testing. Since then we've been working on another test and have learnt some things about reporting on these types of tests that I thought were interesting.
Reporting as a first-class concern
One thing we changed from our previous test, after a suggestion by Mike, was to start treating the reporting of data related to the test as a first-class citizen.
To do this we created an endpoint to which the main application could send POST requests in order to record page views and various other information about users.
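The post doesn't show the endpoint itself, but a minimal sketch of the idea using only Python's standard library might look like the following. The event fields (`user_id`, `page`, `variant`) and the in-memory store are assumptions for illustration, not the original implementation:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory store for recorded events; a real setup would write to a
# dedicated reporting data store instead.
events = []

class RecordHandler(BaseHTTPRequestHandler):
    """Accepts POSTed JSON events (page views etc.) from the main app."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        events.append(payload)
        self.send_response(201)
        self.end_headers()

    def log_message(self, *args):
        pass  # suppress per-request logging noise
```

The main application then fires one small POST per page view, so the reporting store only ever contains the fields the test cares about.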
On our previous test we'd derived the various conversion rates from our main transactional data store, but that was slow and painful because the way we structure data in there is optimised for a completely different use case.
Having just the data we want to report on in a separate data store has massively reduced the time spent generating reports.
However, one thing we learnt from this approach is that you need to spend some time up front thinking about what data is going to be needed.
If you don't, it will have to be added later on and the reporting on that metric won't cover the whole test duration.
Drilling down to get insight
In the first test we ran we only really looked at conversion at quite a high level, which is good for getting an overview but doesn't give much insight into what's going on.
For this test we started off with higher-level metrics, but a few days in we became curious about what was going on between two of the pages, so we created a report that segmented users based on an action they'd taken on the first page.
This allowed us to rule out a theory about a shift in conversion which we had initially thought was down to a change we'd made, but which actually proved to be because of a change in an external factor.
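A segmentation report like the one described can be sketched as a small group-by over the recorded events. The key names (`user_id`, `first_page_action`, `converted`) are assumed for illustration:

```python
from collections import defaultdict

def conversion_by_segment(events, segment_key="first_page_action"):
    """Conversion rate per segment of users.

    `events` is a list of dicts; each carries a user id, the action the
    user took on the first page, and whether they eventually converted.
    """
    seen = defaultdict(set)       # segment -> user ids who reached the page
    converted = defaultdict(set)  # segment -> user ids who converted
    for e in events:
        seg = e[segment_key]
        seen[seg].add(e["user_id"])
        if e["converted"]:
            converted[seg].add(e["user_id"])
    return {seg: len(converted[seg]) / len(seen[seg]) for seg in seen}
```

Comparing the per-segment rates over time is what lets you separate "our change did this" from "an external factor did this".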
The frustrating part of drilling down into the data is that you don't really know in advance what it is you're going to want to zoom in on, so you have to write code for each specific scenario as it comes up!
We generate browser-specific metrics on each test that we run, and while the conversion rate is generally similar between browsers, there have been some times when there's been a big drop in one of them.
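Spotting that kind of drop can be automated with a simple per-browser breakdown. This is a sketch under assumed event keys (`browser`, `converted`), and the 50% threshold is an arbitrary illustrative choice:

```python
from collections import Counter

def browser_conversion_rates(events):
    """Conversion rate per browser from recorded page-view events."""
    views, conversions = Counter(), Counter()
    for e in events:
        views[e["browser"]] += 1
        conversions[e["browser"]] += bool(e["converted"])
    return {b: conversions[b] / views[b] for b in views}

def flag_drops(rates, threshold=0.5):
    """Flag browsers converting at under `threshold` of the best rate."""
    best = max(rates.values())
    return sorted(b for b, r in rates.items() if r < best * threshold)
```

A report built on this would have surfaced the one-browser drop without anyone having to eyeball every breakdown.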
Published at DZone with permission of Mark Needham, DZone MVB. See the original article here.