Before joining Optimizely, I spent six years at Google working on the Search Quality Team. Experimentation was a core tenet of our development process: every change to Google search results was A/B tested on a small fraction of users before launching to everyone. During my time there we made thousands of improvements to Google's search results, each one scientifically tested, with measured impact on our key performance metrics.
Experimentation gave us a way to answer such questions quickly and reliably. Not all of our experiments involved ranking algorithms; even simple changes, like adjusting the spacing between search results, could have a significant impact.
My first project at Google was helping to build the internal platform used to run live experiments on our search results page. A centralized platform was essential to support hundreds of experiments running at once across a large team. The platform also provided a common set of key performance metrics that fed into every launch decision. Each week, at the Search Quality meeting, we'd review experiment results from live traffic before deciding what would launch to users.
One of the core tenets of our experimentation practice was that experiments were run server-side. We experimented by changing code on Google servers rather than modifying content inside the end user's browser or device. Running experiments server-side was essential in our case as we were experimenting with algorithms that ran on our servers. But more importantly, running experiments server-side meant that they were deeply ingrained into our product development process.
In this post, I'll talk about the benefits of experimenting server-side, and some best practices we've learned from talking to product development teams.
Should You Run Experiments Client-Side or Server-Side?
The diagram below illustrates the basic difference between a client-side and server-side experimentation architecture. Client-side experimentation is done by experimenting directly in the end user's web browser, mobile app, or other client device, i.e. the treatment is determined as the content is being rendered to the user. Server-side experimentation is done by experimenting in code on company servers, i.e. treatments are determined before content is returned to the client.
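To make the server-side flow concrete, here is a minimal sketch of how a server might determine a treatment before returning content. The function name, experiment key, and variation names are illustrative, not part of any specific product; the core idea is deterministic bucketing, where hashing the user and experiment identifiers together gives every user a stable assignment without storing any state.

```python
import hashlib

def assign_variation(user_id: str, experiment_key: str, variations: list[str]) -> str:
    """Deterministically bucket a user into one of the variations.

    Hashing the experiment key together with the user ID means the same
    user always sees the same treatment for a given experiment, with no
    lookup table required.
    """
    digest = hashlib.md5(f"{experiment_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variations)
    return variations[bucket]

# The server decides the treatment before rendering the response:
variation = assign_variation("user-42", "ranking_v2", ["control", "treatment"])
```

Because the assignment is computed on the server, the decision can drive backend logic (ranking, pricing, routing) that a client-side tool could never reach.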
While the client-side approach continues to gain rapid adoption, it has become increasingly clear that it is not, and never can be, a universal solution for experimentation.
Product development teams building new features, such as the algorithms that power Google's search results or the backend logic of a retail or travel site, are unable to experiment with those features client-side. More and more websites use single-page app frameworks with server-side rendering (i.e. isomorphic applications), making client-side implementation more difficult. And as digital consumer channels proliferate, many companies are consolidating their technology stack around a unified data layer serving multiple channels. For these reasons and more, server-side experimentation has seen growing demand and is one of the most requested capabilities from Optimizely customers.
Client-side and server-side experimentation both offer distinct advantages depending on the needs of your organization. Below is a summary of some of the key differences we've learned talking to customers:
- If you're a marketing or growth team and want to enable anyone on your team to create experiments with a visual editor, without the need for code releases, you should consider running experiments client-side.
- If you're a product development team and want to experiment deeply in your product, with minimal performance impact, and across multiple channels, you should run experiments server-side.
Of course, many companies have a need for both. Most of the more sophisticated companies we talk to are doing a combination of client-side and server-side experimentation, involving both marketing or growth teams as well as product development teams.
We've also noticed a big difference in the process for teams running experiments on the client vs. those running experiments on the server. Consider the lifecycle of an experiment, from initial hypothesis to launch:
The client-side approach has a clear benefit: experiments don't need to be built on the server up front. If the experiment is a loser, as the majority of experiments are, no server code release is required. As such, experimentation can run as a standalone practice that doesn't require a development team. Many companies we talk to that run a client-side program at scale have a dedicated experimentation team with its own roadmap of experiments.
The server-side approach calls for a different organizational model. Each experiment may require a code release and therefore requires the involvement of the product development team every step of the way. Rather than having a centralized experimentation roadmap, each product development team should simply consider rolling out every feature on their roadmap as an A/B test. In the more experienced organizations we talk to, every member of the development team incorporates experimentation into their rollout process, rather than running experiments in isolation. This is the practice we had established on the Google Search Quality Team and was essential to ensure that every change to our search results was having a positive impact on users.
If you're on a product development team asking yourself, "What should we experiment on?", the answer is simple: your product roadmap is your experimentation roadmap.