A/B Testing is Dead, Adaptive Testing is What's Next

A/B testing has long been the gold standard for data-driven marketers looking to optimize digital experiences. By comparing two versions of an asset and measuring which one performs better, it allows you to make evidence-based decisions rather than relying on guesswork and intuition alone.

A/B testing has an impressive track record of results. Perhaps most famously, Barack Obama's 2008 presidential campaign raised an additional $60 million by A/B testing its donation page and email sign-up forms. Even a simple test of button copy ("Learn More" vs "Join Us Now") led to a 40.6% increase in sign-ups.

For over a decade, A/B testing has been a core part of the digital marketer's toolkit. But in recent years, its limitations have become increasingly clear. A/B testing is time-consuming, resource-intensive, and forces a large portion of visitors to have a suboptimal experience. As customer expectations rise and attention spans shrink, the tradeoffs of traditional A/B testing are getting harder to justify.

Fortunately, advances in artificial intelligence are enabling a new approach called adaptive testing. Powered by machine learning, adaptive testing automates traffic allocation and allows marketers to test more variations faster while reducing the risk of a poor customer experience. Tools like HubSpot's Adaptive Pages have made this capability accessible even to marketers without data science expertise.

In this post, we'll walk you through the evolution of testing from A/B to adaptive. We'll explain the technology behind adaptive testing, show examples of it in action, and share best practices for getting started. By the end, you'll understand why adaptive testing is a must-have capability for modern marketing teams.

The Limitations of Traditional A/B Testing

Before we dive into adaptive testing, let's take a closer look at where A/B testing falls short. While it's certainly a powerful tactic, it has some significant drawbacks:

  1. It's resource-intensive. Setting up an A/B test requires designing and coding two versions of an asset. For non-technical marketers, this often means looping in developers or designers, which can slow things down. The more variations you want to test, the more work is required upfront.

  2. It takes a long time to get results. To reach statistical significance, A/B tests need to run for a substantial period of time and gather a large sample size. Depending on your traffic volume, this could be days, weeks, or even months. And if the test is inconclusive, you have to start all over again.

  3. It delivers a poor experience to many visitors. In a traditional A/B test, traffic is split 50/50 between the two variations for the entire length of the test. That means that even if Variation B is performing terribly, half of your audience will continue to experience it while you gather data. This can hurt your brand and your bottom line.

  4. It's prone to human error and bias. Deciding when to call a test and which variation won requires statistical know-how that many marketers lack. This leads to ending tests prematurely based on incomplete data. There's also the risk of confirmation bias, where marketers let their initial hypothesis influence how they interpret results.

  5. It optimizes locally rather than globally. A/B testing is focused on making isolated improvements to individual pages or assets. But it doesn't take into account the overall customer journey or optimize for long-term metrics like customer lifetime value. You could have a high-converting landing page but still be delivering a disjointed experience.

These limitations aren't just hypothetical. HubSpot surveyed over 500 marketers and found some eye-opening statistics on the state of A/B testing:

  • Only 17% of marketers are satisfied with their A/B testing capabilities and results
  • 60% say the development time required to set up tests is their biggest challenge
  • 54% struggle to get a large enough sample size to make tests conclusive
  • 40% admit to making decisions based on incomplete or inconclusive data

The bottom line is that while A/B testing can deliver good results, it comes with a lot of challenges in speed, resources, and data quality. As customer expectations continue to rise and competition intensifies, marketers need a better way to test and optimize. Enter adaptive testing.

How Adaptive Testing Works

Adaptive testing aims to solve the limitations of A/B testing through the power of machine learning. Rather than a simple either/or test between two variations, adaptive testing uses an approach called the multi-armed bandit to dynamically allocate traffic to variations that are performing well while phasing out underperformers.

The term "multi-armed bandit" comes from a classic probability puzzle. Imagine a gambler at a row of slot machines, each with a different payout rate. The gambler wants to find the machine with the highest return, but also maximize their winnings. With each play, they must balance exploiting the machine that has the best track record so far while still exploring the other options in case they are better.

This same concept can be applied to marketing tests. Each variation is like a slot machine with an unknown payout rate (conversion rate). The goal is to identify the top performer as quickly as possible while also maximizing total conversions.
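
To make that tradeoff concrete, here's a minimal sketch of one of the simplest bandit strategies, known as epsilon-greedy. The variation names and numbers are made up for illustration, and HubSpot hasn't published the exact algorithm it uses; this is just the general idea.

```python
import random

# Hypothetical running totals for three variations: (conversions, visitors).
results = {"A": (30, 1000), "B": (45, 1000), "C": (38, 1000)}

def choose_variation(results, epsilon=0.1):
    """Epsilon-greedy: most of the time show the variation with the best
    observed conversion rate (exploit), but a small fraction of the time
    show a random one (explore) in case it is actually better."""
    if random.random() < epsilon:
        return random.choice(list(results))  # explore
    return max(results, key=lambda v: results[v][0] / results[v][1])  # exploit

print(choose_variation(results))  # "B" most of the time, an occasional random pick otherwise
```

Real adaptive testing tools use more sophisticated strategies, but the core idea is the same: keep showing what's working while reserving a little traffic to keep learning.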

Here's a simplified step-by-step of how it works:

  1. The marketer creates multiple variations of a page or asset. This could be 2 variations or 20.
  2. The test is launched and traffic is initially split evenly between variations.
  3. As data comes in, the multi-armed bandit algorithm starts to identify which variations are performing above or below average.
  4. Traffic starts to be shifted towards the high performers and away from the low performers. The size of the shift depends on the relative difference in performance.
  5. Over time, the best variation receives the lion's share of traffic, while the underperformers are phased out. But a small percentage of traffic may still go to the lesser variations in case one of them pulls ahead.
  6. The marketer can see results in real-time and even terminate variations that are substantially underperforming. But there is no need to manually choose a winner, as the algorithm continuously optimizes toward the best-performing variation.

The key difference from A/B testing is that traffic allocation is dynamic, automated, and happens continuously throughout the test. There's no need to wait for an arbitrary sample size to be reached. And visitors are more likely to see the best-performing variation rather than being forced into an even split.
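
To see the whole loop end to end, here's a rough simulation using Thompson sampling, a popular bandit strategy that draws a plausible conversion rate for each variation from the data observed so far. The "true" conversion rates below are invented and hidden from the algorithm; the point is simply to watch traffic drift toward the strongest variation as evidence accumulates. Again, this is a generic illustration, not HubSpot's actual implementation.

```python
import random

# Illustrative "true" conversion rates, unknown to the algorithm.
true_rates = {"A": 0.030, "B": 0.045, "C": 0.038}
stats = {name: {"conversions": 0, "misses": 0} for name in true_rates}
visits = {name: 0 for name in true_rates}

def choose_variation():
    """Thompson sampling: draw a plausible conversion rate for each variation
    from its Beta posterior and show the one with the highest draw."""
    draws = {
        name: random.betavariate(s["conversions"] + 1, s["misses"] + 1)
        for name, s in stats.items()
    }
    return max(draws, key=draws.get)

# Simulate 20,000 visitors, each routed by the bandit.
for _ in range(20_000):
    name = choose_variation()
    visits[name] += 1
    if random.random() < true_rates[name]:
        stats[name]["conversions"] += 1
    else:
        stats[name]["misses"] += 1

for name, count in visits.items():
    print(f"Variation {name}: {count / 20_000:.0%} of traffic")
# The strongest variation (B here) typically ends up with the lion's share
# of visits, while the weaker ones keep only a small exploratory slice.
```

Notice that no one has to decide when the test is "done": the allocation simply keeps tightening around whichever variation the evidence favors.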

HubSpot co-founder Dharmesh Shah shared details on how this works in HubSpot's platform:

"We took state-of-the-art AI and machine learning models called multi-armed bandits to power our adaptive testing. This approach automatically allocates more traffic to page variants that are performing well, and less to those that aren‘t. There‘s no need for the user to manually check results and pick a winner.

In early tests, we‘re seeing sites using adaptive pages get an average 45% lift in conversion rates compared to a control. It‘s a huge improvement from traditional testing where you often see minimal results after weeks of waiting."

Examples of Adaptive Testing in Action

Now that you understand the concept and potential of adaptive testing, let's look at some real-world examples of how HubSpot customers are using it to get better results.

Measuring Video Engagement

Wistia, a video hosting platform, wanted to test different video variations on their homepage to see which one led to the most signups. Using HubSpot's Adaptive Pages, they created four versions that varied the video content and call-to-action.

The test ran for two weeks, and Variation D quickly emerged as the winner with a 27% higher conversion rate than the original control. Interestingly, Variation B started off strong in the first few days before being overtaken by D, a reversal that a traditional A/B test ended early would likely have missed.

By the end of the test, over 80% of traffic was being automatically routed to Variation D, ensuring the maximum number of visitors saw the most engaging video.

Optimizing Landing Page Design

BuiltWith, a website profiler tool, tested three very different landing page designs to promote a new Chrome extension. The variations included different hero images, calls-to-action, and social proof.

Using multi-armed bandit testing, BuiltWith was able to quickly identify a clear winner. Variation C, which featured a clean hero image and concise copy, converted 38% better than the original.

What's more, they discovered a variation that was significantly underperforming and were able to kill it off quickly. In a traditional split test, it would have continued to receive its full share of traffic and dragged results down.

Personalizing Based on Ad Groups

iSpot.tv, a TV ad measurement platform, used adaptive pages to align their PPC landing pages with different ad groups. They created five variations, each emphasizing a different value proposition that mirrored the ad copy that brought visitors there.

Not only did this ensure message match and coherence from ad to landing page, but it also allowed them to uncover which value propositions resonated most with their audience. The winning variation focused on iSpot's ability to track competitor ads, suggesting that competitive intelligence was a key selling point.

After only 20 days of testing, the personalized variations were converting 28% better on average than the generic page. And some ad groups were seeing up to a 50% lift.

Improving Mobile Conversion Rates

Trella Health, a healthcare data platform, knew that mobile traffic to their site was growing but noticed conversion rates on mobile were much lower than desktop. They hypothesized that their bulky, text-heavy landing page was not mobile-friendly.

Using adaptive testing, they pitted their original desktop-optimized page against a streamlined variation designed with mobile in mind. It featured pared down copy, larger buttons, and a simplified lead form.

Within a week, the mobile-friendly variation was converting 20% better than the control. What's more, it was actually outperforming the original on desktop as well. The simplified design provided a better experience across devices.

These examples show the power and speed of adaptive testing in uncovering both big wins and underperforming variations. In each case, the adaptive approach delivered results faster and with less risk than a traditional split test would have.

Getting Started with Adaptive Testing

Hopefully these examples have convinced you of the value of adaptive testing. But how do you actually get started? Here are some best practices to keep in mind:

1. Have a Clear Hypothesis

While adaptive testing automates the process of picking a winner, it still requires human input in deciding what to test. Before creating any variations, clearly articulate your hypothesis and goals. What are you trying to optimize for and why? How will you measure success?

2. Test Across the Entire Funnel

Don't just focus on high-traffic landing pages. Test key touchpoints across the entire customer journey from homepage to checkout. Even small improvements in micro-conversions can compound into big results.

3. Embrace Radical Redesigns

One of the benefits of adaptive testing is that you can try bold, creative variations without fear of tanking your conversion rates. Think beyond button colors and test entirely new designs, copy, and experiences. Just be sure each variation aligns with your hypothesis.

4. Look Beyond Conversion Rates

While conversions are the ultimate goal, don't overlook secondary metrics like engagement, time on page, and scroll depth. These can provide valuable insights into how visitors are interacting with your variations and where improvements can still be made.

5. Integrate with Other Tools

Adaptive testing shouldn't happen in a silo. Integrate your testing tool with your analytics, CRM, and other marketing platforms to get a holistic view of performance and carry insights through to other channels.

6. Adopt an Always-On Mindset

Adaptive testing is not a one-and-done tactic. Commit to always having a test running and continuously iterating based on results. As your audience and market evolve, so should your optimization strategy.

The AI-Powered Future of Marketing

Adaptive testing is just the beginning of how artificial intelligence is transforming marketing. As machine learning models get more sophisticated, more and more optimization tasks will become automated.

We're already seeing AI-powered tools for things like:

  • Personalized content recommendations – Platforms like Uberflip use natural language processing to match visitors with the most relevant content based on their behavior and interests.

  • Predictive lead scoring – Tools like Infer use machine learning to analyze thousands of demographic and firmographic data points to predict which leads are most likely to convert.

  • Chatbot copywriting – Companies like Persado use deep learning to generate high-converting chatbot scripts and email subject lines.

  • Programmatic ad buying – Platforms like Albert optimize bids and placements across channels in real-time based on performance data.

In a recent survey by Salesforce, 84% of marketers said they expect AI to revolutionize their industry in the next 5 years. But adapting won't just be a matter of buying the latest AI tools. It will require a fundamental shift in how marketers approach their work.

The most successful marketers will be those who view AI as a collaborator rather than a competitor. They will embrace automation for repetitive tasks like reporting, data analysis, and optimization. This will free them up to focus on higher-level strategy, creativity, and innovation – things that machines can't replicate.

Adaptive testing is the perfect example of this human-machine symbiosis. Marketers still use their creativity and intuition to come up with test ideas and designs. But they then let the machine take over to find the optimal solution and deliver the best experience to visitors. It's a glimpse into the future of marketing where AI enhances rather than replaces human intelligence.

Conclusion

If you've made it this far, you're now well versed in the evolution from A/B to adaptive testing and the benefits it provides. To recap:

  • A/B testing is a powerful optimization tactic but has significant limitations in speed, statistical rigor, and user experience.
  • Adaptive testing overcomes these limitations by using machine learning to dynamically allocate traffic and identify winners faster.
  • Real-world examples show how adaptive testing drives double-digit improvements in conversion rates while reducing the risk of bad variations.
  • Getting started requires having a clear hypothesis, embracing radical redesigns, and adopting an always-on testing mindset.
  • Adaptive testing is part of a larger shift towards AI-powered marketing automation, which will transform the day-to-day role of marketers.

Whether you're a grizzled optimization veteran or just getting started with testing, adaptive testing offers a way to get better results with less manual effort. So what are you waiting for? Go set up your first adaptive test and experience the power of machine learning firsthand.

Just remember, while AI can optimize for conversions, it takes a human marketer to create experiences worth converting on. Happy testing!