Conversion Sciences

The Ultimate A/B Testing Guide: Everything You Need, All In One Place

Welcome to the ultimate A/B testing guide from the original and most experienced Conversion Optimization Agency on the planet!

In this post, we’re going to cover everything you need to know about A/B testing (also referred to as “split” testing), from start to finish. Here’s what we’ll cover:

Table of Contents

By the end of this guide, you’ll have a thorough understanding of the entire AB testing process and a framework for diving deeper into any topic you wish to further explore.

In addition to this guide, we’ve put together an intuitive 9-part course taking you through the fundamentals of conversion rate optimization. Complete the course, and we’ll review your website for free!

No time to learn it all on your own? Check out our turn-key Conversion Rate Optimization Services and book a consultation to see how we can help you.

1. The Basic Components Of A/B Testing

AB testing, also referred to as “split” or “A/B/n” testing, is the process of testing multiple variations of a web page in order to identify higher-performing variations and improve the page’s conversion rate.

Over the last few years, AB testing has become “kind of a big deal”.

Online marketing tools have become more sophisticated and less expensive, making split testing a more accessible pursuit for small and mid-sized businesses. And with traffic becoming more expensive, the rate at which online businesses are able to convert incoming visitors is becoming more and more important.

The basic A/B testing process looks like this:

  • Make a hypothesis about one or two changes you think will improve the page’s conversion rate.
  • Create a variation or variations of that page with one change per variation.
  • Divide incoming traffic equally between each variation and the original page.
  • Run the test as long as it takes to acquire statistically significant findings.
  • If a page variation produces a statistically significant increase in page conversions, use it to replace the original page.
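Here’s what that process can look like in code. This is a minimal sketch, not production test software: the visitor counts and conversion rates are hypothetical, and real tools handle tracking, bucketing, and edge cases for you.

```python
import random
from statistics import NormalDist

random.seed(42)
N = 20_000                      # hypothetical visitors per variation
RATE_A, RATE_B = 0.100, 0.112   # hypothetical "true" conversion rates

# Simulate conversions for the control (A) and the variation (B).
conv_a = sum(random.random() < RATE_A for _ in range(N))
conv_b = sum(random.random() < RATE_B for _ in range(N))

p_a, p_b = conv_a / N, conv_b / N
pooled = (conv_a + conv_b) / (2 * N)
se = (pooled * (1 - pooled) * 2 / N) ** 0.5    # std. error of the difference
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided test

print(f"A: {p_a:.2%}  B: {p_b:.2%}  p-value: {p_value:.4f}")
# Only replace the original page if the p-value clears your
# pre-chosen significance threshold (commonly 0.05).
```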

Have you ever heard the story of someone changing their button color from red to green and receiving a $5 million increase in sales that year?

As cool as that sounds, let’s be honest: it is not likely that either you or I will see this kind of a win anytime soon. That said, one button tweak did result in $300 million in new revenue for one business, so it is possible.

AB testing is a scientific way of finding out whether a tweak that appears to boost conversions is actually significant, or just a random fluctuation.

AB testing (AKA “split testing”) is the process of directing your traffic to two or more variations of a web page.

AB testing is pretty simple to understand:

A typical AB test uses AB testing software to divide traffic.

Our testing software is the “Moses” that splits our traffic for us. Additionally, you can choose to experiment with more variations than an AB test. These tests are called A/B/n tests, where “n” represents any number of new variations.

The goal of AB testing is to measure whether a variation results in more conversions.

So that could be an “A/B/C” test, an “A/B/C/D” test, and so on.

Here’s what an A/B/C test would look like:

The more variations we have in an AB test, the more we have to divide the traffic.

Even though the same amount of traffic is sent to the Control and each Variation, a different number of visitors will typically complete their task — buy, sign up, subscribe, etc. This is because many leave your site first.

We research our visitors to find out what might be making them leave before converting. These are our test hypotheses.

The primary point of an AB test is to discover what issues cause visitors to leave. The issues above are common to ecommerce websites. In this case we might create additional variations:

  • One that adds a return policy to the page.
  • One that removes the registration requirement.
  • One that adds trust symbols to the site.

By split testing these changes, we see if we can get more of these visitors to finish their purchase, to convert.

How do we know which issues might be causing visitors to leave? We research our visitors, look at analytics data, and make educated guesses, which we at Conversion Sciences call “hypotheses”.

In this example, adding a return policy performed best. Removing the registration requirement performed worse than the Control.

In the image above, the number of visitors that complete a transaction is shown. Based on this data, we would learn that adding a return policy and trust symbols would increase success over the Control or removing registration.

The page that added the return policy is our new Control. Our next test would very likely be to see what happens when we add trust symbols to this new Control. It is not unlikely that combining the two could actually reduce the conversion rate. So we test it.

Likewise, it is possible that removing the registration requirement would work well on the page with the return policy, our new Control. However, we may not test this combination.

With an AB test, we try each change in its own variation to isolate the specific issues and decide which combinations to test based on what we learn.

The goal of AB testing is to identify and verify changes that will increase a page’s overall conversion rate, whether those changes are minor or more involved.

I’m fond of saying that AB testing, or split testing, is the “Supreme Court” of data collection. An AB test gives us the most reliable information about a change to our site. It controls for a number of variables that can taint our data.

2. The Proven AB Testing Framework

Now that we have a feel for the tests themselves, we need to understand how these tests fit into the grand scheme of things.

There’s a reason we are able to get consistent results for our clients here at Conversion Sciences. It’s because we have a proven framework in place: a system that allows us to approach any website and methodically derive revenue-boosting insights.

Different businesses and agencies will have their own unique processes within this system, but any CRO agency worth its name will follow some variation of the following framework when conducting A/B testing.

AB Testing Framework Infographic

For a closer look at each of these nine steps, check out our in-depth breakdown here:  The Proven AB Testing Framework Used By CRO Professionals

3. The Critical Statistics Behind Split Testing

You don’t need to be a mathematician to run effective AB tests, but you do need a solid understanding of the statistics behind split testing.

An AB test is an example of statistical hypothesis testing, a process whereby a hypothesis is made about the relationship between two data sets and those data sets are then compared against each other to determine if there is a statistically significant relationship or not.

To put this in more practical terms, a prediction is made that Page Variation #B will perform better than Page Variation #A, and then data sets from both pages are observed and compared to determine if Page Variation #B is a statistically significant improvement over Page Variation #A.

That seems fairly straightforward, so where does it get complicated?

The complexities arrive in all the ways a given “sample” can inaccurately represent the overall “population”, and all the things we have to do to ensure that our sample can accurately represent the population.

Let’s define some terminology real quick.

Image showing a population of people and two samples with differing numbers of people. This difference is variance.

Population and Variance.


The “population” is the group we want information about. It’s the next 100,000 visitors in the example below. When we’re testing a webpage, the true population is every future individual who will visit that page.

The “sample” is a small portion of the larger population. It’s the first 1,000 visitors we observe in the example below.

In a perfect world, the sample would be 100% representative of the overall population.

For example:

Let’s say 10,000 out of those 100,000 visitors are going to ultimately convert into sales. Our true conversion rate would then be 10%.

In a tester’s perfect world, the mean (average) conversion rate of any sample(s) we select from the population would always be identical to the population’s true conversion rate. In other words, if you selected a sample of 10 visitors, 1 of them (10%) would buy, and if you selected a sample of 100 visitors, then 10 would buy.

But that’s not how things work in real life.

In real life, you might have only 2 out of the first 100 buy or you might have 20… or even zero. You could have a single purchase from Monday through Friday and then 30 on Saturday.

This variability across samples is expressed as the “variance”, which measures how far a random sample tends to differ from the true mean (average).

This variance across samples can derail our findings, which is why we have to employ statistically sound hypothesis testing in order to get accurate results.
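You can see this variance for yourself with a short simulation. A minimal sketch, assuming a hypothetical true conversion rate of 10%:

```python
import random
random.seed(0)

TRUE_RATE = 0.10  # hypothetical "true" conversion rate of the population

def sample_rate(n):
    """Observed conversion rate of a random sample of n visitors."""
    return sum(random.random() < TRUE_RATE for _ in range(n)) / n

for n in (10, 100, 1_000, 10_000):
    print(n, [f"{sample_rate(n):.1%}" for _ in range(5)])
# Small samples swing wildly around the true 10% rate;
# larger samples cluster much more tightly around it.
```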

How AB Testing Eliminates Timing Issues

One alternative to AB testing is “serial” testing, or change-something-and-see-what-happens testing. I am a fan of serial testing, and you should make it a point to go and see how changes are affecting your revenue, subscriptions, and leads.

There is a problem, however. If you make your change at the same time that a competitor starts an awesome promotion, you may see a drop in your conversion rates. You might blame your change when, in fact, the change in performance was an external market force.

AB testing controls for this.

In an AB test, the first visitor sees the original page, which we call the Control. This is the “A” in the term “AB test”. The next visitor sees a version of the page with the change that’s being tested. We call this a Treatment, or Variation. This is the “B” in the term AB test. We can also have a “C” and a “D” if we have enough traffic.

The next visitor sees the control and the next the treatment. This goes on until enough people have seen each version to tell us which they like best. We call this statistical significance. Our software tracks these visitors across multiple visits and tells us which version of the page generated the most revenue or leads.

Since visitors come over the same time period, changes in the marketplace — like our competitor’s promotion — won’t affect our results. Both pages are served during the promotion, so there is no before-and-after error in the data.

Another way variance can express itself is in the way different types of traffic behave differently. Fortunately, you can eliminate this type of variance simply by segmenting traffic.

How Visitor Segmentation Controls For Variability

An AB test gathers data from real visitors and customers who are “voting” on our changes using their dollars, their contact information and their commitment to our offerings. If done correctly, the makeup of visitors should be the same for the control and each treatment.

This is important. Visitors that come to the site from an email may be more likely to convert to a customer. Visitors coming from organic search, however, may be early in their research, with not as many ready to buy.

If you sent email traffic to your control and search traffic to the treatment, it may appear that the control is a better implementation. In truth, it was the kind of traffic, or traffic segment, that resulted in the different performance.

By segmenting types of traffic and testing them separately, you can easily control for this variation and get a much better understanding of visitor behavior.

Why Statistical Significance Is Important

One of the most important concepts to understand when discussing AB testing is statistical significance, which is ultimately all about using large enough sample sizes when testing. There are many places where you can acquire a more technical understanding of this concept, so I’m going to attempt to illustrate it instead in layman’s terms.

Imagine flipping a coin 50 times. While from a probability perspective, we know there is a 50% chance of any given flip landing on heads, that doesn’t mean we will get 25 heads and 25 tails after 50 flips. In reality, we will probably see something like 23 heads and 27 tails or 28 heads and 22 tails.

Our results won’t match the probability because there is an element of chance to any test – an element of randomness that must be accounted for. As we flip more times, we decrease the effect this chance will have on our end results. The point at which we have decreased this element of chance to a satisfactory level is our point of statistical significance.

In the same way, when running an AB test on a web page, there is an element of chance involved. One variation might happen to receive more primed buyers than the other, or perhaps an isolated group of visitors happens to have a negative association with an image used on one page. These chance factors will skew your results if your sample size isn’t large enough.

While it appears that one version is doing better than the other, the results overlap too much.

It’s important not to conclude an AB test until you have reached statistically significant results. Here’s a handy tool to check if your sample sizes are large enough.
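If you prefer to compute it yourself, the standard two-proportion sample size formula can be sketched in a few lines. This is an illustrative helper (the function name and defaults are ours, not from any particular tool), assuming a two-sided test at 95% confidence and 80% power:

```python
from statistics import NormalDist

def sample_size_per_variation(base_rate, lift, alpha=0.05, power=0.80):
    """Rough visitors needed per variation for a two-sided test.
    base_rate: control conversion rate, e.g. 0.10
    lift: relative lift to detect, e.g. 0.10 for a +10% improvement
    """
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    num = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
           + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(num / (p2 - p1) ** 2) + 1

# Roughly 15,000 visitors per variation for these inputs.
print(sample_size_per_variation(0.10, 0.10))
```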

For a closer look at the statistics behind A/B testing, check out this in-depth post:  AB Testing Statistics: An Intuitive Guide For Non-Mathematicians

4. How To Conduct Pre-Test Research

The definition of optimization boils down to understanding your visitors.

In order to succeed at A/B testing, we need to be creating variations that perform better for our visitors. In order to create those types of variations, we need to understand what visitors aren’t liking about our existing site and what they want instead.

In other words: we need research.

Conversion Research Evidence with Klientboost Infographic

For a close look at each of these sections, check out our full writeup here:  AB Testing Research: Do Your Conversion Homework

5. How To Create An A/B Testing Strategy

Once we’ve done our homework and identified both problem areas and opportunities for improvement on our site, it’s time to develop a core testing strategy.

An A/B testing strategy is essentially a lens through which we will approach test creation. It helps us prioritize and focus our efforts in the most productive direction possible.

There are 7 primary testing strategies that we use here at Conversion Sciences.

  • Gum Trampoline
  • Completion Optimization
  • Flow Optimization
  • Minesweeper
  • Nuclear Option

Since there is little point in summarizing these, click here to read our breakdown of each strategy: The 7 Core Testing Strategies Essential To Optimization

6. “AB” & “Split” Testing Versus “Multivariate” Testing

While most marketers tend to use these terms interchangeably, there are a few differences to be aware of. While AB testing and split testing are the exact same thing, multivariate testing is slightly different.

AB and Split tests refer to tests that measure larger changes on a given page. For example, a company with a long-form landing page might AB test the page against a new short version to see how visitors respond. In another example, a business seeking to find the optimal squeeze page might design two pages around different lead magnets and compare them to see which converts best.

Multivariate testing, on the other hand, focuses on optimizing small, important elements of a webpage, like CTA copy, image placement, or button colors. Often, a multivariate test will test more than two options at a time to quickly identify outlying winners. For example, a company might run a multivariate test cycling 6 different button colors on its most important sales page. With high enough traffic, even a 0.5% increase in conversions can result in a significant revenue boost.

Multivariate testing example graphic

Multivariate testing works through all possible combinations.

While most websites can run meaningful split tests, multivariate tests are typically reserved for bigger sites, as they require a large amount of traffic to produce statistically significant results.

For a more in-depth look at multivariate testing, click here:  Multivariate Testing: Promises and Pitfalls for High-Traffic Websites

7. How To Analyze Testing Results

After we’ve run our tests, it’s time to collect and analyze the results. My co-founder Joel Harvey explains how Conversion Sciences approaches post-test analysis below:

When you look at the results of an AB testing round, the first thing you need to look at is whether the test was a loser, a winner, or inconclusive. Verify that the winners were indeed winners. Look at all the core criteria: statistical significance, p-value, test length, delta size, etc. If it checks out, then the next step is to show it to 100% of traffic and look for that real-world conversion lift. In a perfect world you could just roll it out for 2 weeks and wait, but usually you are jumping right into creating new hypotheses and running new tests, so you have to find a balance.

Once we’ve identified the winners, it’s important to dive into segments:

  • Mobile versus non-mobile
  • Paid versus unpaid
  • Different browsers and devices
  • Different traffic channels
  • New versus returning visitors (important to set up and integrate this beforehand)

This is fairly easy to do with enterprise tools, but might require some more effort with less robust testing tools. It’s important to have a deep understanding of how tested pages performed with each segment. What’s the bounce rate? What’s the exit rate? Did we fundamentally change the way this segment is flowing through the funnel? We want to look at this data in full, but it’s also good to remove outliers falling outside two standard deviations of the mean and re-evaluate the data.

It’s also important to pay attention to lead quality. The longer the lead cycle, the more difficult this is. In a perfect world, you can integrate the CRM, but in reality, this often doesn’t work very seamlessly.

For a more in-depth look at post test analysis, including insights from the CRO industry’s foremost experts, click here:  10 CRO Experts Explain How To Profitably Analyze AB Test Results

8. How AB Testing Tools Work

The tools that make AB testing possible provide an incredible amount of power. If we wanted, we could use these tools to make your website different for every visitor. The reason we can do this is that these tools change your site in the visitors’ browsers.

When these tools are installed on your website, they send JavaScript code along with the HTML that defines a page. As the page renders, this JavaScript changes it. It can do almost anything:

  • Change the headlines and text on the page.
  • Hide images or copy.
  • Move elements above the fold.
  • Change the site navigation.

Primary Functions of AB Testing Tools

AB testing software has the following primary functions.

Serve Different Webpages to Visitors

The first job of AB testing tools is to show different webpages to certain visitors. The person who designed your test determines what gets shown.

An AB test will have a “control”, or the current page, and at least one “treatment”, or the page with some change. The design and development team will work together to create a different treatment. The JavaScript must be written to transform the control into the treatment.

It is important that the JavaScript works on all devices and in all browsers used by a site’s visitors. This requires a committed QA effort.

Conversion Sciences maintains a library of devices of varying ages that allows us to test our JavaScript for all visitors.

Split Traffic Evenly

Once we have JavaScript to display one or more treatments, our AB testing software must determine which visitors see the control and which see the treatments.

Typically, every other user will get a different page. The first will see the control, the next will see the first treatment, the next will see the second treatment and the fourth will see the control. Around it goes until enough visitors have been tested to achieve statistical significance.

It is important that the number of visitors seeing each version is about the same. The software tries to enforce this.
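Implementations differ by vendor, but a common approach is deterministic hashing, which both splits traffic evenly and guarantees a returning visitor always sees the same version. A minimal sketch (the function and IDs are hypothetical, not any specific tool’s API):

```python
import hashlib

def assign_variation(visitor_id, experiment, variations=("control", "treatment")):
    """Deterministically bucket a visitor: the same ID always gets the same page."""
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return variations[int(digest, 16) % len(variations)]

print(assign_variation("visitor-1234", "return-policy-test"))  # e.g. "treatment"
```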

Measure Results

The AB testing software tracks results by monitoring goals. Goals can be any of a number of measurable things:

  • Products bought by each visitor and the amount paid
  • Subscriptions and signups completed by visitors
  • Forms completed by visitors
  • Documents downloaded by visitors

Almost anything can be measured, but the most important are business-building metrics such as purchases, subscriptions and leads generated.

The software remembers which test page was seen. It calculates the amount of revenue generated by those who saw the control, by those who saw treatment one, and so on.

At the end of the test, we can answer one very important question: which page generated the most revenue, subscriptions or leads? If one of the treatments wins, it becomes the new control.

And the process starts over.
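Under the hood, the bookkeeping is simple: remember each visitor’s assigned version, then total the goal value per version. A toy illustration with made-up numbers:

```python
from collections import defaultdict

# Hypothetical log of (assigned variation, revenue from that visitor).
events = [("control", 0.0), ("treatment", 49.0), ("control", 49.0),
          ("treatment", 0.0), ("treatment", 98.0)]

revenue = defaultdict(float)
visitors = defaultdict(int)
for variation, amount in events:
    revenue[variation] += amount
    visitors[variation] += 1

for v in sorted(revenue):
    print(v, f"revenue per visitor: ${revenue[v] / visitors[v]:.2f}")
```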

Do Statistical Analysis

The tools are always calculating the confidence that a result will predict the future. We don’t trust any test that doesn’t have at least a 95% confidence level. This means that we are 95% confident that a new change will generate more revenue, subscriptions or leads.

Sometimes it’s hard to wait for statistical significance, but it’s important lest we make the wrong decision and start reducing the website’s conversion rate.

Report Results

Finally, the software communicates results to us. These come as graphs and statistics.

AB Testing Tools deliver data in the form of graphs and statistics.

It’s easy to see that the treatment won this test, giving us an estimated 90.9% lift in revenue per visitor with a 98% confidence.

This is a rather large win for this client.

Selecting The Right Tools

Of course, there are a lot of A/B testing tools out there, with new versions hitting the market every year. While there are certainly some industry favorites, the tools you select should come down to what your specific business requires.

In order to help make the selection process easier, we reached out to our network of CRO specialists and put together a list of the top-rated tools in the industry. We rely on these tools to perform for multi-million dollar clients and campaigns, and we are confident they will perform for you as well.

Check out the full list of tools here:  The 20 Most Recommended AB Testing Tools By Leading CRO Experts

9. How To Build An A/B Testing Team

The members of a CRO team.

Conversion Sciences offers a complete turnkey team for testing. Every team that will use these tools must have competent people in the following roles, and we recommend you follow suit in building your own teams.

Data Analyst

The data analyst looks at the data being collected by analytics tools, user experience tools, and information collected by the website owners. From this she begins developing ideas, or hypotheses, for why a site doesn’t have a higher conversion rate.

The data analyst is responsible for designing tests that prove or disprove a hypothesis. Once the test is designed, she hands it off to the designer and developer for implementation.

Designer

The designer is responsible for designing new components for the site. These may range from a button with a different call to action to a complete redesign of a landing page for conversion.

The designer must be experienced enough to carefully design the changes we are testing. We want to change the element we are testing and nothing else.

Developer

Our developers are very good at creating JavaScript that manipulates a page without breaking anything. They are experienced enough to write JavaScript that will run successfully on a variety of devices, operating systems and browsers.

Quality Assurance

The last thing we want to do is break a commercial website. This can result in lost revenue and invalidate our tests. A good quality assurance person checks the JavaScript and design work to ensure it works on all relevant devices, operating systems and browsers.

Getting Started on AB Testing

Conversion Sciences invites all businesses to work AB testing into their marketing mix. You can start by working with us and then move the effort in-house.

Get started with our 180-day Conversion Catalyst program, a process designed to get you started AND pay for itself with newly discovered revenue.





What is A/B testing? And why is it so important?

A/B testing is a key tool for marketers, product managers, engineers, UX designers, and more. It's the epitome of making data-driven decisions. Learn what it is, how it works, and how you can build an effective A/B testing strategy.

What is A/B testing?

How does A/B testing work?

  • Why consider A/B testing?
  • Important concepts to know
  • How to choose which type of test to run
  • How to perform an A/B test
  • Server-side vs. client-side testing
  • Interpreting your results

How do companies use A/B testing?

  • A/B testing tools

Key takeaways:

A/B testing for websites entails comparing a control version of a webpage (A) with a variant (B), randomly presented to users, to identify which performs better in meeting specified goals through statistical analysis.

A/B testing is one of the single best tools in your toolbox to increase your conversion rates and revenue

Test types include split URL testing, multivariate testing, and multipage testing

There are many tools available to help teams roll out A/B tests on websites, apps, ads, and in emails

Effective A/B test hypotheses are always grounded in data, highlighting the importance of product analytics

A/B testing, also known as split or bucket testing, is a method where a control version of content (A) and a variant (B) are randomly presented to users to determine through statistical analysis which one more effectively meets specific conversion goals and appeals to viewers. It originated with Ronald Fisher, a 20th-century biologist and statistician widely credited with developing the principles and practices that make this method reliable.

As it applies to marketing, A/B testing can be traced to the 1960s and 1970s, when it was used to compare different approaches to direct response campaigns.

Now, A/B testing is used to evaluate all sorts of initiatives, from emails and landing pages to websites and apps. While the targets of A/B testing have changed, the principles behind it have not.

Below, we discuss how A/B testing works, different strategies behind the experimentation process, and why it’s critical to your success.

A/B testing is a type of experiment. It tests two different versions of a website, app, or landing page and measures the difference in user response.

Statistical data is collected and analyzed to determine which version performs better.

A/B testing compares two versions of a webpage. The control version is known as variation A. Variation B contains the change that is being tested.

A/B testing is sometimes referred to as randomized controlled testing (RCT) because it ensures sample groups are assigned randomly. This helps produce more reliable results.

Why should you consider A/B testing?

When there are problems in the conversion funnel , A/B testing can be used to help pinpoint the cause. Some of the more common conversion funnel leaks include:

Confusing call-to-action buttons

Poorly qualified leads

Complicated page layouts

Too much friction, leading to form abandonment on high-value pages

Checkout bugs or frustration

A/B testing can be used to test various landing pages and other elements to determine where issues are being encountered.

Solve visitor pain points

When visitors come to your website or click on a landing page, they have a purpose, like:

Learning more about a deal or special offer

Exploring products or services

Making a purchase

Reading or watching content about a particular subject

Even “browsing” counts as a purpose. As users browse, they might encounter roadblocks that make it difficult for them to complete their goals.

For example, a visitor might encounter confusing copy that doesn't match the PPC ad they clicked on. Or a CTA button might be difficult to find. Or maybe the CTA button doesn't work at all.

Every time a user encounters an issue that makes it difficult for them to complete their goals, they might become frustrated. This frustration degrades the user experience, lowering conversion rates.

There are several tools you can use to understand this visitor behavior. Fullstory, for example, uses heatmaps, funnel analysis, session replay, and other tools to help teams perfect their digital experiences.

By analyzing this data, you can identify the source of user pain points and start fixing them.

Use both quantitative and qualitative data when analyzing a problem. This will help identify the issue and understand the cause. No matter which tool you choose, make sure to combine both types of data.

How Thomas used Fullstory and a robust A/B testing program to boost conversions 94%

"Fullstory helps us keep users top-of-mind, but also gives our co-workers a systematic way to propose evidence-based ideas for improvement."

Read the success story

Get better ROI from existing traffic

If you’re already getting a lot of incoming traffic, A/B testing can help you boost the ROI from that traffic by improving your conversion funnels.

A/B testing helps to identify which changes have a positive impact on UX and improve conversions. This approach is often more cost-effective than investing in earning new traffic.

Reduce bounce rate

Bounce rate is a metric that calculates how often someone arrives on your site, views one page, then leaves.

Bounce rate is calculated by dividing the number of single-page sessions by the number of total users sessions on your website.
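For example, if your site receives 1,000 sessions and 250 of them view only one page before leaving, your bounce rate is 250 / 1,000 = 25%.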

There are also other ways to define a bounce rate, but they all imply the same thing: disengaged users.

Essentially, a high bounce rate indicates that people enter your website, encounter something confusing or frustrating, then quickly leave.

This type of friction is a perfect example of when to A/B test. You can identify the specific pages where visitors are bouncing and change things you think are problematic. Then, use A/B testing to track the performance of different versions until you see an improvement in performance.

These tests can help identify visitor pain points and improve UX overall.

Hierarchy of Friction

Make low-risk modifications

There's always a risk in making major changes to your website. 

You could invest thousands of dollars in overhauling a non-performant campaign. However, if those major changes don't pay off, then you won’t see a return on that investment. Now you've lost lots of time and money.

Instead, use A/B testing to make small changes rather than implementing a total redesign.

That way, if a test fails, you have risked much less time and money.

Achieve statistically significant results

It’s important to recognize that A/B testing only works well if the sampling is statistically significant. It doesn’t work if testers rely on assumptions or guesses in setting up the tests or analyzing the results.

Statistical significance is used to determine how meaningful and reliable an A/B test is. The higher the statistical significance, the more reliable a result is. 

Statistical significance is "the claim that a set of observed data are not the result of chance but can instead be attributed to a specific cause."

If a test is not statistically significant, there could be an anomaly, such as a sampling error. And if the results are not statistically significant, they shouldn’t be considered meaningful. 

The ideal A/B test results should be 95% statistically significant, though sometimes testing managers use 90% so the required sample size is smaller. This way, statistically significant results arrive faster.

There can be challenges to reaching a sufficient level of statistical significance:

Not enough time to run tests

Pages with exceptionally low traffic

Changes are too insignificant to generate results

It's possible to achieve statistical significance faster by running tests on pages that get more traffic or by making larger changes in your tests. In many cases, a lack of traffic makes it nearly impossible to get results that are significant enough.

Understanding sample sizes and statistical significance also helps you plan how long your tests will take .
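For example, if the sample size math says you need roughly 15,000 visitors per variation, and the page receives 2,000 visitors per day split evenly between two variations (1,000 each), the test will need about 15 days to reach its required sample size.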

Redesign websites to increase future business gains

If you do engage in a full website redesign, A/B testing can still be helpful.

Like any other A/B testing, you will make two versions of the site available. Then, you will measure the results after you have received a statistically significant number of visitors.

A/B testing should not end once you’ve gone live. Instead, this is the time to begin refining elements within your site and testing those.

Important ideas to know while testing

There are specific strategies to consider when testing: multipage testing, split URL testing, dynamic allocation, and multivariate testing.

Split URL testing

Split URL tests are used for making significant changes to a webpage in situations when you don't want to make changes to your existing URL.

In a split URL test, your testing tool will send some of your visitors to one URL (variation A) and others to a different URL (variation B). At its core, it is a temporary redirect.

When should you consider split URL testing? 

In general, use this if you are making changes that don’t impact the user interface. For example, if you are optimizing page load time or making other behind-the-scenes modifications.

Larger changes to a page, especially to the top of a page, can also sometimes “flicker” when they load, creating a jarring UX. Split URLs are an easy way to avoid this.

Split URL testing is also a preferred way to test workflow changes. If your web pages display dynamic site content, you can test changes with split URL testing.

Multivariate testing

Multivariate testing is a more complex form of testing, and an entirely different test type from A/B.

It refers to tests that involve changes to multiple variations of page elements that are implemented and tested at the same time. This approach allows testers to collect data on which combination of changes performs best.

Multivariate testing eliminates the need to run multiple A/B tests on the same web page when the goals of each change are similar to one another. Multivariate testing can also save time and resources by providing useful conclusions in a shorter period.

For example, instead of running a simple A/B test on a page, let's say you want to run a whole new multi-page experience. You want step 1 to be either variation A or B, and you want step 2 to be either C or D.

When you run a multivariate test, you'll be running many combinations of these variations: A/C, A/D, B/C, and B/D.

A multivariate test is, essentially, an easier way to run multiple A/B tests at once.
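To make the combinatorics concrete, here’s a tiny sketch enumerating the combinations from the example above:

```python
from itertools import product

step1 = ["A", "B"]  # two versions of step 1
step2 = ["C", "D"]  # two versions of step 2

for combo in product(step1, step2):
    print(combo)  # ('A', 'C'), ('A', 'D'), ('B', 'C'), ('B', 'D')

# 2 x 2 = 4 combinations. Each combination needs enough traffic on its
# own, which is why multivariate tests demand more visitors overall.
```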

Because there are more variations in multivariate tests than A/B tests, multivariate requires more traffic to achieve statistical significance. This means it will probably take longer to achieve reliable results.

Multipage testing

A multipage test involves changes to specific elements, applied across multiple pages rather than just one, such as every page within a particular workflow. There are two common approaches.

The first is to carry a change through an entire sales funnel, test the new pages against the control, and use the funnel to gauge the results. This approach is known as "funnel multipage testing."

The second is to add or remove repeating items such as customer testimonials or trust indicators, then test how those changes affect conversions. This approach is known as "conventional" or "classic" multipage testing.

Dynamic allocation

Dynamic allocation is a method of quickly eliminating low-performing test variations. This method helps to streamline the testing process and save time. It's also known as a multi-armed bandit test.

Let’s say you’re an online retailer and are holding a flash sale. You know you want as many people as possible to view your sale items when they arrive on your site. To do that, you want to show them a CTA color that gets as many people to click on it as possible — blue, pink, or white.

Using a dynamic allocation test, your testing tool will automatically detect which variation drives the most clicks and automatically show that variation to more users. This way you drive as many clicks as possible as fast as possible. 

Because traffic is not split equally, dynamic allocation doesn't yield statistical significance and doesn’t yield any learnings you can use in the future.

Dynamic allocation is for quick conversion lifts, not learning.
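Bandit algorithms come in several flavors; an epsilon-greedy strategy is one of the simplest. The sketch below is illustrative only: the click-through rates are invented, and real tools use more sophisticated allocation (Thompson sampling, for example).

```python
import random
random.seed(1)

COLORS = ["blue", "pink", "white"]
TRUE_CTR = {"blue": 0.05, "pink": 0.07, "white": 0.04}  # hypothetical rates
views = {c: 0 for c in COLORS}
clicks = {c: 0 for c in COLORS}

def serve(epsilon=0.1):
    """Epsilon-greedy: usually exploit the current best, sometimes explore."""
    if random.random() < epsilon or not any(views.values()):
        return random.choice(COLORS)
    return max(COLORS, key=lambda c: clicks[c] / views[c] if views[c] else 0.0)

for _ in range(10_000):
    color = serve()
    views[color] += 1
    clicks[color] += random.random() < TRUE_CTR[color]  # simulated click

print(views)  # traffic drifts toward the best performer instead of an even split
```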

How do you choose which type of test to run?

There are several factors to consider when deciding which tests to run for conversion rate optimization (CRO) testing. You should consider:

The number of changes you’ll be making

How many pages are involved

The amount of traffic required to get a statistically significant result

Finally, consider the extent of the problem you are trying to solve. For example, a landing page whose conversions might benefit from a button color change would be a perfect use case for A/B testing.

However, changing multiple pages that a user encounters across their customer journey would be a better fit for multipage testing.

How do you perform an A/B test?

An A/B test is a method of testing changes to determine which changes have the desired impact and which do not.

While organizations once turned to A/B testing only occasionally, teams at an estimated 51% of top sites now A/B test to improve customer experience and boost conversions.

The A/B testing process

Step 1: Research

Before any tests can be conducted or changes made, it's important to set a performance baseline. Collect both quantitative and qualitative data to learn how the website in question is performing in its current state.

The following elements represent quantitative data:

Bounce rate

Video views

Subscriptions

Average items added to the cart

Much of this information can be collected through a behavioral data platform like Fullstory.

Qualitative data includes information collected on the user experience through polls and surveys. Especially when used in conjunction with more quantitative data, it is valuable in gaining a better understanding of site performance.

Step 2: Observe and formulate a hypothesis

At this stage, you analyze the data you have and write down the observations that you make. This approach is the best way to develop a hypothesis that will eventually lead to more conversions. In essence, A/B testing is hypothesis testing.

Step 3: Create variations

A variation is simply a new version of the current page that contains any changes you want to subject to testing. This alteration could be a change to copy, headline, CTA button, etc.

Step 4: Run the test

You’ll select a testing method here according to what you are trying to accomplish, as well as practical factors like expected traffic. The length of the test will also depend on the level of statistical accuracy you want. Remember, a higher statistical significance is more reliable, but requires more traffic and time.

Step 5: Analyze results and deploy the changes

At this point, you can go over the results of the tests and draw some conclusions. You may determine that the test was indeed conclusive and that one version outperformed the other. In that case, you simply deploy the desired change.

But that doesn't always happen. You may need to add and test an additional change to gain additional insights. Additionally, you might decide to move on to testing changes in another part of the workflow. With A/B testing, you can work through all of the pages within a customer journey to improve UX and boost conversions.

There are some best practices to use while A/B testing, but understand that all sites, apps, and experiences are different. The true “best practices” for you can only be discovered through testing.

Server-side vs. client-side A/B testing

With client-side testing, a website visitor requests a particular page. This page is delivered by the web server. However, JavaScript executed within the visitor's browser session adjusts what is presented.

This adjustment is based on which variation they see according to the targeting you set up in the A/B test. This form of testing is used for visible changes such as fonts, formats, color schemes, and copy.

Server-side testing is a bit more robust. It allows for the testing of additional elements. For example, you would use this form of testing to determine whether speeding up page load time increases engagement. You would also use server-side testing to measure the response to workflow changes.

How do you interpret the results of an A/B test?

The results of an A/B test are measured based on the rate of conversions that are achieved. The definition of conversion can vary. It might include a click, video view, purchase, or download.

This step is also where that 95% statistical significance comes into play. At 95% confidence, the observed conversion rate approximates the true rate, within a margin of error.

So if that margin is ±3%, that can be interpreted as follows: If you achieve a conversion rate of 15% on your test, you can say that conversions are between 12% and 18% with 95% confidence.

Frequentist approach

There are two ways to interpret A/B testing results. 

The first is the frequentist approach. It starts from the null hypothesis that there is no difference between A and B.

Once testing ends, you will have a p-value, or probability value. This is the probability of seeing results at least as extreme as yours if there were truly no difference. So a low p-value means the observed difference is unlikely to be due to chance alone.

The frequentist approach is fast and popular, and there are many resources available for using this method. The downside is that it's impossible to get any meaningful results until the tests are fully completed. Also, this approach doesn't tell you how much a variation won by — just that it did.

Bayesian approach

The Bayesian approach incorporates existing data into the experiment. These are known as priors, with the first prior being "none" in the first round of tests. In addition, there is the evidence, which is the data from the current experiment.

Finally, there is the posterior. This is the information produced by the Bayesian analysis of the prior and the evidence.

The key benefit of the Bayesian approach is that you can look at the data during the test cycle. You may call the results early if there is a clear winner, and you are able to identify the winning variation.
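A common Bayesian setup for conversion rates is the Beta-Binomial model. A minimal sketch with made-up counts, using a flat Beta(1, 1) prior and Monte Carlo draws to estimate the probability that B beats A:

```python
import random
random.seed(7)

# Hypothetical counts observed partway through a test.
conv_a, n_a = 120, 2_400  # control
conv_b, n_b = 150, 2_400  # variant

def posterior_draw(conversions, visitors):
    """One draw from the Beta posterior under a flat Beta(1, 1) prior."""
    return random.betavariate(1 + conversions, 1 + visitors - conversions)

draws = 100_000
b_wins = sum(posterior_draw(conv_b, n_b) > posterior_draw(conv_a, n_a)
             for _ in range(draws))
print(f"P(B beats A) is roughly {b_wins / draws:.0%}")
```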

A/B testing can be used by brands for many different purposes. As long as there is some sort of measurable user behavior, it's possible to test that.

The A/B test method is often used to test changes to website design, landing pages, content titles, marketing campaign emails, paid ads, and online offers. Generally, this testing is done without the test subjects knowing they are visiting Test Version A of a web page as opposed to Version B.

Stage 1: Measure

In the planning stage, the idea is to identify ways to increase revenue by increasing conversions. Stage one includes analyzing website data and visitor behavior metrics.

Once this information has been gathered, you can use it to plan changes and create a list of website pages or other elements to be changed. After this is done, you may create a hypothesis for each element to be changed.

Stage 2: Prioritize

Set priorities for each hypothesis depending on the level of confidence, importance, and ease of implementation.

There are frameworks available to help with the process of setting these priorities, for example, the ICE, PIE, or LIFT models.

ICE is importance/confidence/ease. 

Importance — How important is the page in question

Confidence — The level of confidence the test will succeed

Ease — How easy is it to develop the test

Each item is scored, and an average is taken to rate its priority.
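For example, an idea scored Importance 8, Confidence 6, and Ease 4 gets an ICE score of (8 + 6 + 4) / 3 = 6.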

PIE is potential/impact/ease.

Potential — The business potential of the page in question

Impact — The impact of the winning test

Ease — How easily the test can be executed

The variables here are slightly different from ICE but are scored in the same way. 

PIE and ICE are both easy to use, but the downside is that they are subjective. People will apply their own biases and assumptions when scoring these variables.

LIFT Model is the third framework for analyzing customer experiences and developing hypotheses. This framework is based on six factors:

Value proposition — The value of conversions

Clarity — How clearly the value proposition and CTA are stated

Relevance — Relevance of the page to the visitor

Distraction — Elements that distract visitors from the CTA

Urgency — Items on the page that encourage visitors to act quickly

Anxiety — Anything creating hesitance or lack of credibility

Stage 3: Test

After prioritizing, determine which ideas will be tested and implemented first by reviewing the backlog of ideas. You should decide according to business urgency, resources, and value.

Once an idea is selected, the next step is to create a variation. Then, go through the steps of testing it.

Stage 4: Repeat

It's risky to test too many changes at the same time. Instead, test more frequently to improve accuracy and scale your efforts.

The top A/B testing tools to use

There are several tools available to help businesses set up, execute, and track A/B tests. They all vary both in price and capability. 

The best web and app A/B testing tools

Optimizely is a platform for conversion optimization through A/B testing. Teams can use the tool to set up tests of website changes to be experienced by actual users. These users are routed to different variations; then, data is collected on their behavior. Optimizely can also be used for multipage and other forms of testing.

AB Tasty offers A/B and multivariate testing. Testers can set up client-side, server-side, or full-stack testing. Additionally, there are tools like Bayesian Statistics Engine to track results.

Both Optimizely and AB Tasty seamlessly integrate with Fullstory so you can see how users seeing different experiences behave.

VWO is the third big player in A/B testing and experimentation software. Like Optimizely and AB Tasty, they offer web, mobile app, server-side, and client-side experimentation, as well as personalized experiences.

Email A/B testing tools

There are several specialized tools for testing changes made to marketing campaign emails. Here are the most widely used:

Moosend is a tool for creating and managing email campaigns. It offers the ability to create an A/B split test campaign. This ability lets marketers test different variations of marketing emails, measure user response, and select the version that works best.

Aweber provides split testing of up to three emails. Users can test elements like subject line, preview text, and message content. Additionally, it allows for other test conditions such as send times. Testing audiences can be segmented if desired, and completely different emails can be tested against one another.

MailChimp users can A/B test email subject line, sender name, content, and send time. There can be multiple variations of each variable. 

Then, the software lets users determine how the recipients will be split among each variation. Finally, testers can select the conversion action and amount of time that indicates which variation wins the test. For example, the open rate over a period of eight hours. 

Constant Contact offers subject line A/B testing. This feature helps users validate which version of an email subject line is most effective. It is an automated process: once a winner is determined, the tool automatically sends the email with the winning subject line to the remaining recipients.

A/B testing and CRO services and agencies

Some companies have the infrastructure and personnel in place to run their own experimentation program, but other companies might not. Fortunately, there are services and agencies available to help drive your A/B testing and CRO efforts.

Based in the UK, Conversion is one of the world's largest CRO agencies and works with brands like Microsoft, Facebook, and Google.

Also based in the UK, Lean Convert is one of the leading experimentation and CRO agencies.

Prismfly is an agency that specializes in ecommerce CRO, UX/UI design, and Shopify Plus development.

Frequently asked questions about A/B testing

How do you plan an A/B test?

There are several steps to planning an A/B test. These include:

Choosing the element to test

Creating a hypothesis

Determining the conversion goal

Identifying the control and implementing the change to test

Creating the split and selecting the sample size

Calculating the statistical significance required

Running the test

Analyzing the results

This approach should lead to data-based conclusions that will help you determine which variation to implement or identify the need for further testing.
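For instance, the final "analyzing the results" step for a simple conversion test can be a two-proportion z-test. A minimal sketch using statsmodels, with hypothetical visitor and conversion counts:

```python
# Two-proportion z-test on hypothetical control/variation results.
from statsmodels.stats.proportion import proportions_ztest

conversions = [50, 65]      # control, variation
visitors = [1000, 1000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
print("Significant at 95%" if p_value < 0.05 else "Not significant -- keep testing")
```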

What should I A/B test?

Test any changes that have the potential to change the user experience and impact conversions. This list includes various elements found within marketing emails, landing pages, and web pages.

How much time does A/B testing take?

A/B tests should be run until enough data is collected for any conclusions to be considered reliable. How long that takes depends on the amount of traffic and the statistical confidence level that testers select.

Using a sample size calculator can help estimate how long any given test should run.
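If you'd rather compute the estimate yourself, most calculators run a power analysis for two proportions under the hood. A minimal sketch using statsmodels, assuming a hypothetical 5% baseline conversion rate, a target of 6%, 95% confidence, and 80% power:

```python
# Estimate visitors needed per variation to detect a lift from a 5%
# to a 6% conversion rate at alpha = 0.05 and 80% power. All numbers
# are hypothetical inputs.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.05, 0.06)
n = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.8)
print(f"~{n:.0f} visitors per variation")
# Divide by your expected daily traffic per variation to estimate
# the test duration in days.
```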

Can I test more than one thing at a time?

It is possible to test more than one variation. However, that may or may not be the best approach. 

For example, running multiple variations against a control can lead to false positives. It can also create needlessly complicated testing setups. This conundrum is referred to as the multiple testing problem.

However, there are times when testing multiple changes is useful, especially when a combination of elements is hypothesized to impact conversions. If multi-testing is necessary, it may be advisable to use multivariate testing instead of classic A/B testing.
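If you do run several variations against one control, one standard safeguard (among others) is the Bonferroni correction: divide the significance threshold by the number of comparisons. A minimal sketch with hypothetical p-values from an A/B/n test:

```python
# Bonferroni correction for three variation-vs-control comparisons.
# The p-values are hypothetical.
p_values = [0.04, 0.03, 0.20]
alpha = 0.05 / len(p_values)  # corrected significance threshold

for i, p in enumerate(p_values, start=1):
    verdict = "significant" if p < alpha else "not significant"
    print(f"Variation {i}: p = {p:.2f} -> {verdict} (threshold {alpha:.3f})")
```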


3 A/B Testing Examples That You Should Steal [Case Studies]


Neil Patel co-founded Crazy Egg in 2005. 300,000 websites use Crazy Egg to understand what’s working on their website (with features like Heatmaps, Scrollmaps, Referral Maps, and User Recordings), fix what isn’t (with a WYSIWYG Editor), and test new ideas (with a robust A/B Testing tool).

There’s a joke in the marketing world that A/B testing actually stands for “Always Be Testing.” It’s a good reminder that you can’t get stellar results unless you can compare one strategy to another, and A/B testing examples can help you visualize the possibilities.

I’ve run thousands of A/B tests over the years, each designed to help me home in on the best copy, design, and other elements to make a marketing campaign truly effective.

I hope you’re doing the same thing. If you’re not, it’s time to start. A/B tests can reveal weaknesses in your marketing strategy, but they can also show you what you’re doing right and confirm your hypotheses.

Without data, you won’t know how effective your marketing assets truly are.

To refresh your memory — or introduce you to the concept — I’m going to explain A/B testing and its importance. Then, we’ll dive into a few A/B testing examples that will inspire you to create your own tests.

To Recap: What Exactly Is an A/B Test?

An A/B test is a comparison between two versions of the same marketing asset, such as a web page or email, that you expose to equal halves of your audience. Based on conversion rates or other metrics, you can decide which one performs best.

But it doesn’t stop there. You don’t want to settle for one A/B test. That will give you very limited data. Instead, you want to keep testing to learn more about your audience and find new ways to convert prospects into leads and leads into customers.

Remember: Always Be Testing.

For instance, you might start with the call to action on a landing page. You A/B test variations in the button color or the CTA copy. Once you’ve refined your CTA, you move on to the headline. Change the structure, incorporate different verbs and adjectives, or change the font style and size.

You might also have to test the same things multiple times. As your audience evolves and your business grows, you’ll discover that you need to meet new needs — both for the company and for your audience. It’s an ever-evolving process that can ultimately have a huge impact on your bottom line.

An A/B test runs until you have enough data to make a solid decision. This depends, of course, on the number of people who see one variation or the other. You can run multiple A/B tests at the same time, but stick to one variable for each.

In other words, if you’re testing the headline on a landing page, you might test the subject line for your latest email marketing campaign. Changing just one variable ensures that you know what had an impact on your audience’s responses.

When (And How) Do You Know You Need to Run A/B Tests?


The simple answer to this question is that you should always run A/B tests. If you have a website, a business, and an audience, testing right away gives you an advantage over the competition.

Realistically, though, you’ll get more accurate results with an established business than a brand new one. This is because an established business has already begun to generate targeted traffic and qualified leads, so the results will be pretty consistent with the target market.

This doesn’t mean A/B testing is useless for a new business. It just means that you might get less accurate results.

The best time to run A/B tests is when you want to achieve a goal. For instance, if you’re not satisfied with your conversion rate on your homepage, A/B test some changes to the copy, images, and other elements. Find new ways to persuade people to follow through on the next step in the conversion process.

3 of The Best A/B Testing Examples to Inspire You (Case Studies)

Now, it’s time for the proof. Let’s look at three A/B testing examples so you can see how the process works in action. I’ll describe each test, including the goal, the result, and the reason behind the test’s success.

Example 1: WallMonkeys


If you’re not familiar with WallMonkeys, it’s a company that sells incredibly diverse wall decals for homes and businesses.

The company used Crazy Egg to generate user behavior reports and to run A/B tests. As you’ll see below, the results were pretty incredible.

WallMonkeys wanted to optimize its homepage for clicks and conversions. It started with its original homepage, which featured a stock-style image with a headline overlay.


There was nothing wrong with the original homepage. The image was attractive and not too distracting, and the headline and CTA seemed to work well with the company’s goals.

First, WallMonkeys ran Crazy Egg Heatmaps to see how users were navigating the homepage. Heatmaps and Scrollmaps allow you to decide where you should focus your energy. If you see lots of clicking or scrolling activity, you know people are drawn to those places on your website.


As you can see, there was lots of activity on the headline, CTA, logo, and search and navigation bar.

After generating the user behavior reports, WallMonkeys decided to run an A/B test. The company exchanged the stock-style image for a more whimsical alternative that would show visitors the opportunities they could enjoy with WallMonkeys products.

Conversion rates for the new design versus the control were 27 percent higher.

However, WallMonkeys wanted to keep up the testing. For the next test, the business replaced its slider with a prominent search bar. The idea was that customers would be more drawn to searching for items in which they were specifically interested.

The second A/B testing example resulted in a conversion rate increase of 550 percent. Those are incredible results, even for a company as popular as WallMonkeys. And by not stopping at the first test, the company enjoyed immense profit potential as well as a better user experience for its visitors.

Why it works

Before you start an A/B test, you form a hypothesis — ideally based on data. For instance, WallMonkeys used Heatmaps and Scrollmaps to identify areas of visitor activity, then used that information to make a guess about an image change that might lead to more conversions.

They were right. The initial 27 percent increase might seem small in comparison to the second test, but it’s still significant. Anyone who doesn’t want 27 percent more conversions, raise your hand.

Just because one A/B test yields fruitful rewards doesn’t mean that you can’t do better. WallMonkeys realized that, so they launched another test. It proved even more successful than the first.

When you’re dogged about testing website elements that matter, you can produce startling results.

Example 2: Electronic Arts

When Electronic Arts, a successful media company, released a new version of one of its most popular games, the company wanted to get it right. The homepage for SimCity 5, a simulation game that allows players to construct and run their own cities, would undoubtedly do well in terms of sales. Electronic Arts wanted to capitalize on its popularity.

According to HubSpot, Electronic Arts relied on A/B testing to get its sales page for SimCity 5 just right.

The goal behind this A/B testing example was to improve sales. Electronic Arts wanted to maximize revenue from the game immediately upon its release as well as through pre-sale efforts.

These days, people can buy and download games immediately. The digital revolution has made those plastic-encased CDs nearly obsolete, especially since companies like Electronic Arts promote digital downloads. It’s less expensive for the company and more convenient for the consumer.

However, sometimes the smallest things can influence conversions. Electronic Arts wanted to A/B test different versions of its sales page to identify how it could increase sales exponentially.

I’m highlighting this particular A/B testing example because it shows how hypotheses and conventional wisdom can blow up in our faces. For instance, most marketers assume that advertising an incentive will result in increased sales. That’s often the case, but not this time.

The control version of the pre-order page offered 20 percent off a future purchase for anyone who bought SimCity 5.


The variation eliminated the pre-order incentive.


As it turns out, the variation performed more than 40 percent better than the control. Avid fans of SimCity 5 weren’t interested in an incentive. They just wanted to buy the game. As a result of the A/B test, half of the game’s sales were digital.

The A/B test for Electronic Arts revealed important information about the game’s audience. Many people who play a popular game like SimCity don’t play any other games. They like this particular franchise. Consequently, the 20-percent-off offer didn’t resonate with them.

If you make assumptions about your target audience, you’ll eventually get it wrong. Human behavior is difficult to understand even without hard data, so you need A/B testing to generate data on which you can rely. If you were surprised by the result of this A/B test, you might want to check out others that we highlighted in a recent marketing conference recap.

Example 3: Humana

Humana, an insurance carrier, created a fairly straightforward A/B test with huge results. The company had a banner with a simple headline and CTA as well as an image. Through A/B testing, the company realized that it needed to change a couple of things to make the banner more effective. This is the second of two A/B testing examples showing that one test just isn’t enough.

According to Design For Founders, Humana wanted to increase its click-through rate on the above-described banner. It looked good as-is, but the company suspected it could improve CTR by making simple changes.

They were right.


The initial banner had a lot of text. There was a number in the headline, which often leads to better conversions, and a couple lines with a bulleted list. That’s pretty standard, but it wasn’t giving Humana the results they wanted.

The second variation reduced the copy significantly. Additionally, the CTA changed from “Shop Medicare Plans” to “Get Started Now.” A couple other changes, including the image and color scheme, rounded out the differences between the control and eventual winner.

Simply cleaning up the copy and changing the picture led to a 433 percent increase in CTR. After changing the CTA text, the company experienced a further 192 percent boost.

This is another example of incredible results from a fairly simple test. Humana wanted to increase CTR and did so admirably by slimming down the copy and changing a few aesthetic details.

Simplicity often rules when it comes to marketing. When you own a business, you want to talk about all its amazing features, benefits, and other qualities, but that’s not what consumers want.

They’re looking for the path of least resistance. This A/B testing example proves that people often respond to slimmed-down copy and more straightforward CTAs.

Start Your Own A/B Testing Now

It’s easy to A/B test your own website, email campaign, or other marketing efforts. The more effort you put into your tests, the better the results.

Start with pages on your website that get the most traffic or that contribute most to conversions. If an old blog post hardly gets any views anymore, you don’t want to waste your energy.

Focus on your home page, landing pages, sales pages, product pages, and similar parts of your site. Identify critical elements that might contribute to conversions.

As mentioned above, you can run A/B tests on multiple pages at the same time. You just want to make sure you’re only testing one element on a single page. After that test ends, you can test something else.

How Crazy Egg’s A/B Testing Tool Can Help You Boost Results

Crazy Egg offers several A/B testing benefits that other tools don’t. For one thing, it’s super fast and simple to set up. You can have your first A/B test up and running in just a few minutes. Other tools require hours of work.

You don’t need any coding skills to conduct tests like the A/B testing examples above. The software does the heavy lifting for you. Additionally, you don’t have to worry about checking every five minutes to know when to shut down the losing variant; Crazy Egg’s multi-armed bandit testing algorithm automatically funnels the majority of traffic to the winner for you.
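Crazy Egg doesn’t publish the details of its algorithm, so purely as an illustration of the general multi-armed bandit idea, here is a Thompson sampling sketch with hypothetical "true" conversion rates:

```python
# Illustrative Thompson-sampling bandit -- not Crazy Egg's actual
# algorithm. Traffic drifts toward the better variant as evidence
# accumulates. The true conversion rates below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
true_rates = [0.05, 0.065]
wins, losses = [0, 0], [0, 0]

for _ in range(10_000):                      # each iteration = one visitor
    samples = [rng.beta(1 + wins[i], 1 + losses[i]) for i in range(2)]
    arm = int(np.argmax(samples))            # show the most promising variant
    if rng.random() < true_rates[arm]:
        wins[arm] += 1
    else:
        losses[arm] += 1

print("Visitors per variant:", [wins[i] + losses[i] for i in range(2)])
```

Notice that a bandit never fully stops exploring the weaker variant; it just sends it progressively less traffic.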

Along with generating user behavior reports to help you decide what to test, you can test in a continuous loop so there’s no downtime in your quest for marketing success.

Start A/B testing now!


The A/B testing examples illustrated above give you an idea of the results you can achieve through testing. If you want to boost conversions and increase sales, you need to make data-driven decisions.

Guesswork puts you at a disadvantage, especially since your competitors are probably conducting their own tests. If you know a specific element on your page contributes heavily to conversions, you can trust it to keep working for you.

Plus, you get to see the results of your test in black-and-white. It’s comforting to know that you’re not just throwing headlines and CTAs at your audience, but that you know what works and how to communicate with your target customers.

