A/B testing has become a cornerstone of modern data-driven growth. By randomly exposing users to two or more variants of a feature or experience, we can measure the impact on key metrics and make informed product decisions. But not all A/B tests are created equal.
The traditional approach, known as frequentist A/B testing, relies solely on data collected from the current experiment to determine a statistically significant winner. Bayesian A/B testing, on the other hand, takes a more nuanced approach by incorporating prior knowledge and beliefs to make better decisions faster. In this post, we'll dive into the key concepts behind Bayesian A/B testing, walk through an example, and discuss when and how to use it to supercharge your experiments.
Frequentist vs Bayesian A/B Testing
In a typical frequentist A/B test, you choose a single metric to optimize (e.g. click-through rate), determine the minimum sample size needed to reach a desired level of statistical significance, then wait for the test to run to completion. Only the data collected in the experiment is used to determine if there is a statistically significant difference between the variants and declare a winner.
Bayesian A/B testing flips the script by starting with a prior probability distribution – our initial beliefs about how the variants will perform based on historical data, domain expertise, or educated guesses. As the experiment runs and new data is collected, we use Bayes' Theorem to update our beliefs and arrive at a posterior probability distribution. This allows us to quantify our uncertainty, seamlessly combine data from multiple experiments, and even optimize for multiple metrics at once.
Key Concepts in Bayesian A/B Testing
To understand how Bayesian A/B testing works under the hood, we need to grasp a few key concepts:

Prior Probability Distribution: This is the probability distribution that represents our beliefs about a metric (e.g. conversion rate) before the experiment. It encodes our prior knowledge and assumptions.

Likelihood: The probability of observing the data we collected in the experiment, given a certain value of the metric we're measuring. It's what allows us to update our prior beliefs with new information.

Posterior Probability Distribution: After updating our prior beliefs with the likelihood of the observed data, we arrive at the posterior distribution. This represents our new, refined beliefs about the metric based on the combination of prior knowledge and new evidence.

Expected Loss: To determine the winning variant in a Bayesian A/B test, we calculate the expected loss for each variant – essentially how much we expect to "lose" (in terms of the metric we're optimizing) if we choose that variant. The variant with the lowest expected loss is the winner.

Early Stopping: One of the key benefits of Bayesian A/B testing is the ability to stop tests early if we have enough evidence to make a confident decision. By quantifying our uncertainty and calculating expected loss in real time, we can pull the plug on underperforming variants and redirect traffic to winners faster.
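The expected-loss and early-stopping ideas can be sketched with a few lines of Monte Carlo. This is a minimal illustration, not a production stopping rule: it assumes each variant's conversion rate gets a Beta posterior under a uniform prior, and the interim counts and the 0.1-percentage-point stopping threshold are made-up numbers.

```python
import random

def expected_losses(conv_a, n_a, conv_b, n_b, draws=50_000, seed=42):
    """Monte Carlo estimate of the expected loss of shipping each variant.

    Each variant's conversion rate gets a Beta(1 + conversions,
    1 + non-conversions) posterior, i.e. a uniform prior updated
    with the observed data.
    """
    rng = random.Random(seed)
    loss_a = loss_b = 0.0
    for _ in range(draws):
        p_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        p_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        loss_a += max(p_b - p_a, 0)  # conversion rate given up if we ship A
        loss_b += max(p_a - p_b, 0)  # conversion rate given up if we ship B
    return loss_a / draws, loss_b / draws

# Hypothetical interim counts: A converts 45/500, B converts 70/500.
la, lb = expected_losses(45, 500, 70, 500)
print(f"E[loss | ship A] = {la:.5f}, E[loss | ship B] = {lb:.5f}")

# Early stopping: stop once the better arm's expected loss falls below
# a pre-chosen threshold (here 0.1 percentage points).
if min(la, lb) < 0.001:
    print("Stop the test and ship", "A" if la < lb else "B")
```

Note that expected loss is asymmetric by design: shipping the worse variant costs real conversion rate, while shipping the better one costs almost nothing even when the posteriors still overlap.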
A Step-by-Step Bayesian A/B Test Example
Let's walk through a simplified example to see Bayesian A/B testing in action. Suppose we want to test two variants of a landing page to see which one results in a higher conversion rate. Here's how we'd set up and analyze the test:

Define a prior probability distribution: Based on past tests of similar pages, we believe the conversion rate is likely between 10% and 20%. Since a conversion is a binary outcome, a natural (conjugate) choice is a Beta prior; Beta(30, 170) has a mean of 15% and a standard deviation of about 2.5%, which matches those beliefs.

Collect data: We run the test for a while and observe 100 conversions out of 1,000 visitors for Variant A (10% conversion rate) and 150 conversions out of 1,000 visitors for Variant B (15% conversion rate).

Update to a posterior distribution: Using Bayes' rule, we combine the prior distribution with the observed data (likelihood function) to arrive at a posterior distribution for each variant. The posterior will be a combination of our prior beliefs and the new evidence, weighted by the strength of each.

Calculate expected loss: To determine a winner, we calculate the expected loss for each variant – the average conversion rate we would give up by choosing that variant if it turns out to be the worse one. With a 5% difference in observed conversion rates, Variant B will almost certainly have the lower expected loss and be declared the winner.

Make a decision: Based on the expected loss calculations, we can confidently declare Variant B the winner and roll it out to all users. We can also use the posterior distribution as the prior for future tests, refining our beliefs over time.
This is a simplified example, but it illustrates the key steps in a Bayesian A/B test. In practice, there are many more details to consider, such as the choice of prior distribution, the metric(s) to optimize, and when to stop the test early.
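Those steps can be run end to end in plain Python. One liberty is taken for the sake of a clean sketch: the conversion rate is modeled with a Beta prior (the conjugate choice for binary outcomes), with Beta(30, 170) chosen to match a 15% mean and roughly 2.5% standard deviation; the posterior comparison is done by simple Monte Carlo sampling.

```python
import random

PRIOR_ALPHA, PRIOR_BETA = 30.0, 170.0  # Beta prior: mean 15%, sd ~2.5%

def posterior(conversions, visitors):
    # Conjugate update: Beta prior + binomial likelihood -> Beta posterior
    return PRIOR_ALPHA + conversions, PRIOR_BETA + visitors - conversions

post_a = posterior(100, 1000)  # Variant A: 10% observed
post_b = posterior(150, 1000)  # Variant B: 15% observed

rng = random.Random(0)
draws = 50_000
b_wins = 0
loss_b = 0.0
for _ in range(draws):
    p_a = rng.betavariate(*post_a)
    p_b = rng.betavariate(*post_b)
    if p_b > p_a:
        b_wins += 1
    loss_b += max(p_a - p_b, 0)  # rate given up by shipping B when A is better

print(f"Posterior mean A: {post_a[0] / sum(post_a):.3f}")  # shrunk toward the 15% prior
print(f"Posterior mean B: {post_b[0] / sum(post_b):.3f}")
print(f"P(B beats A) = {b_wins / draws:.3f}")
print(f"Expected loss of shipping B = {loss_b / draws:.5f}")
```

Notice how the prior pulls Variant A's estimate above its observed 10%, and how the posterior for the winner can be carried forward as the prior for the next round of tests.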
When to Use Bayesian vs Frequentist A/B Testing
So when should you use Bayesian A/B testing vs the traditional frequentist approach? There are pros and cons to each method:
Bayesian A/B testing is particularly useful when you:
- Have strong prior beliefs based on historical data or domain expertise
- Want to optimize for multiple metrics simultaneously
- Need to run many tests and make decisions quickly
- Care about quantifying uncertainty and risk
On the flip side, Bayesian A/B testing can be more complex to implement and interpret. The choice of prior distribution can have a big impact on the results, so it requires careful thought and justification.
Frequentist A/B testing is generally simpler and may be sufficient when you:
- Have little or no prior knowledge to incorporate
- Are running one-off tests and optimizing for a single metric
- Have the luxury of time to let tests run to completion
- Don't need to quantify uncertainty or combine results across tests
Ultimately, the best approach depends on your specific situation and goals. Many organizations use a mix of frequentist and Bayesian methods in their experimentation programs.
Tools for Implementing Bayesian A/B Tests
If you're sold on the benefits of Bayesian A/B testing but not sure where to start, fear not – there are many tools available to help you implement it:

Experimentation platforms: Many popular A/B testing tools, such as Optimizely, VWO, and Google Optimize, now offer Bayesian or "multi-armed bandit" capabilities to automate the process of updating beliefs and allocating traffic.

Statistical software: For a more hands-on approach, you can use statistical programming languages like R or Python to build your own Bayesian models. Packages like PyMC3 and RStan provide powerful tools for specifying priors, building likelihood functions, and sampling from posterior distributions.

Visualization tools: To communicate results and make decisions, it's crucial to visualize your Bayesian A/B test results. Plotting libraries like Matplotlib, ggplot2, and D3.js can help you create intuitive visualizations of prior and posterior distributions, expected loss, and more.
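To get a feel for what those statistical packages do under the hood, here is a deliberately simple, hypothetical sketch: a grid approximation of Bayes' theorem for a single conversion rate, in pure Python. Real problems with more parameters are where samplers like PyMC3 and RStan earn their keep.

```python
import math

def grid_posterior(conversions, visitors, grid_size=1001):
    """Approximate the posterior of a conversion rate on a uniform grid,
    using a flat prior and a binomial log-likelihood."""
    rates = [i / (grid_size - 1) for i in range(grid_size)]
    log_post = []
    for p in rates:
        if p <= 0.0 or p >= 1.0:
            log_post.append(float("-inf"))  # zero likelihood at the endpoints
            continue
        log_post.append(conversions * math.log(p)
                        + (visitors - conversions) * math.log(1 - p))
    peak = max(log_post)
    unnorm = [math.exp(lp - peak) for lp in log_post]  # subtract peak to avoid underflow
    total = sum(unnorm)
    probs = [u / total for u in unnorm]
    return rates, probs

rates, probs = grid_posterior(150, 1000)
mean = sum(r * pr for r, pr in zip(rates, probs))
print(f"Posterior mean conversion rate: {mean:.3f}")
```

The `(rates, probs)` pair is exactly what you would hand to Matplotlib to plot the posterior curve.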
As Bayesian methods become more popular in the A/B testing world, an ecosystem of specialized tools is emerging to streamline the workflow. Some examples include:
- Amplitude's Experiment for Bayesian A/B testing in web and mobile apps
- Dynamic Yield's multi-armed bandits tool for optimizing website layouts
- Bayesian Optimization's cloud platform for optimizing ML models and A/B tests
These are just a few examples – there are many more tools available depending on your tech stack and testing needs. The key is finding a toolset that allows you to easily incorporate prior beliefs, update probabilities in real time, and make data-driven decisions.
Best Practices for Effective Bayesian A/B Testing
Running effective Bayesian A/B tests requires a bit more upfront planning and thought than traditional frequentist tests. Here are some best practices to keep in mind:

Choose informative priors: The prior distribution can have a big impact on your results, so it's important to choose it carefully. Use historical data and domain knowledge to set realistic priors, and consider running sensitivity analyses to see how different priors affect the outcomes.

Determine optimal sample sizes: While Bayesian tests can often reach a conclusion faster than frequentist tests, they still require a sufficient sample size to update beliefs. Use power analysis or simulation to determine the minimum sample needed for conclusive results.

Monitor tests in real time: One of the key benefits of Bayesian testing is the ability to monitor results and make decisions in real time. Keep a close eye on the posterior probabilities and expected loss throughout the test, and don't be afraid to stop early if a clear winner emerges.

Analyze and interpret results: Interpreting Bayesian A/B test results requires a bit more nuance than frequentist tests. Focus on the posterior probabilities and expected loss rather than p-values, and use visualizations to communicate uncertainty and risk.

Document and share learnings: A/B testing is all about continuous learning and iteration. Document your Bayesian test results, insights, and decisions in a central knowledge base, and share them with your team to inform future experiments.
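The "use simulation to size your test" advice above can look like this in practice: assume plausible true conversion rates, replay the experiment many times at a candidate sample size, and measure how often an expected-loss decision rule picks the genuinely better variant. All the rates, sizes, and thresholds here are illustrative assumptions.

```python
import random

def picks_better_variant(n, p_a=0.10, p_b=0.12, draws=1000, rng=None):
    """Simulate one experiment with n visitors per arm; return True if the
    lower-expected-loss rule selects B (the truly better variant)."""
    rng = rng or random.Random()
    conv_a = sum(rng.random() < p_a for _ in range(n))
    conv_b = sum(rng.random() < p_b for _ in range(n))
    loss_a = loss_b = 0.0
    for _ in range(draws):  # uniform Beta(1, 1) prior on each rate
        sa = rng.betavariate(1 + conv_a, 1 + n - conv_a)
        sb = rng.betavariate(1 + conv_b, 1 + n - conv_b)
        loss_a += max(sb - sa, 0)
        loss_b += max(sa - sb, 0)
    return loss_b < loss_a

rng = random.Random(7)
for n in (200, 1000, 3000):
    correct = sum(picks_better_variant(n, rng=rng) for _ in range(100))
    print(f"n = {n:>4} per arm: picked the better variant {correct}% of the time")
```

Pick the smallest n where the correct-decision rate meets your bar; the same harness also lets you measure how often an early-stopping rule would fire prematurely.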
By following these best practices and continuously iterating, you can harness the power of Bayesian A/B testing to make smarter, faster decisions and drive long-term growth.
Conclusion
A/B testing is a powerful tool for any data-driven organization, but not all A/B tests are created equal. Bayesian A/B testing offers a more flexible, efficient, and informative approach by incorporating prior beliefs and updating probabilities in real time.
While Bayesian methods can be more complex than traditional frequentist approaches, the benefits are clear: faster decision-making, the ability to optimize for multiple metrics, and a quantified understanding of uncertainty and risk. With the right tools and best practices, any team can harness the power of Bayesian A/B testing to make better product decisions.
Of course, A/B testing is just one piece of the growth puzzle. The most successful organizations are those that foster a culture of experimentation, where everyone is encouraged to ask questions, test hypotheses, and learn from failures. By embracing Bayesian A/B testing and other data-driven methods, we can shift from gut feelings to confident, informed decision-making.
So what are you waiting for? Go forth and experiment! With a Bayesian mindset and the right tools in your arsenal, you'll be well on your way to unlocking your company's full growth potential.