A/B testing is the practice of serving different versions of a web page, ad, or email to an audience to see which performs better. Marketers can spend hours debating which tagline is best or which image is most compelling.
With A/B testing, these arguments can be settled conclusively. More importantly, A/B testing allows marketing teams to proactively learn about their customers, their website, and their products in a way they simply couldn't otherwise. But successfully incorporating A/B testing into your marketing strategy requires understanding the principles behind the practice.
Read on to learn about A/B testing, including examples of tests you can run on your own ecommerce store.
What is A/B testing?
A/B testing, or split testing, is the most common testing method for online stores: two or more versions of an asset are compared to see which performs better against a chosen metric.
In a proper A/B test, the two (or more) versions are served over the same timeframe to randomly selected members of the target audience, as opposed to a before/after test in which the versions are shown sequentially.
In an A/B test, only one variable is tested, even if there are multiple versions. For example, testing three different button texts is still an A/B test.
But testing multiple variables at once, like button text and banner image, is considered a multivariate test. Large platforms like Netflix run sophisticated algorithmic experiments on their recommendation systems, but many ecommerce businesses get immediate improvements from simpler A/B tests.
A/B testing requires code to serve the different versions simultaneously to different users. In advertising A/B tests, Meta and Google do this automatically. Similarly, some email platforms like Klaviyo have built-in A/B testing functionality. Website A/B testing requires custom code or a third-party tool such as Optimizely or VWO (Google Optimize, once a popular free option, was sunset in 2023).
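For illustration, here's a minimal Python sketch of how that serving code might assign visitors to variants; the function name and visitor-ID scheme are hypothetical, and in practice your testing tool handles this for you.

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str, variants: list[str]) -> str:
    """Deterministically assign a visitor to a variant.

    Hashing a stable visitor ID (e.g., from a cookie) together with the
    experiment name yields an effectively random but repeatable bucket,
    so each visitor sees the same version on every visit.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Split visitors evenly between two button texts
print(assign_variant("visitor-4821", "cta-button-test", ["Buy now", "Add to cart"]))
```

The deterministic hash matters: if visitors were re-randomized on every page load, the same person could see both versions, contaminating the results.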
Benefits of A/B testing
All digital marketing efforts create data. Advertising campaigns provide data on click-through rates, websites on conversion rates, and so on. The main benefit of A/B testing is being able to gather the data you need to make specific decisions.
A/B tests always start with a specific hypothesis. For example: "Our sign-up form will convert better if we offer a 10% discount." The resulting data lets the business draw a conclusion about that hypothesis, and the answer usually reveals something broader about the business as a whole; in this example, how price-sensitive its customers are.
Although A/B testing is primarily a marketing activity, the insights it yields can be used for a variety of other business decisions, including UX, product development, branding, and sales.
7 A/B testing examples for ecommerce businesses
There are many types of A/B tests an online store can run. Below are some of the most common, impactful types of tests.
Header copy
Header copy is the text at the top of a page, typically a landing page. Since this is the first, largest text a visitor sees, testing it is a great way to optimize your site's most valuable first impression.
For example, Gymshark tests different versions of the line "Power. Made to fail in," below.
Subject line
In email marketing, your subject line is your most important lever. It drives an email's open rate, and if your audience doesn't open an email, nothing else in it can have any impact. Testing subject lines is a great way to learn what catches your existing audience's eye and improve email performance.
For example, DUER tests variants of the email subject line below, "Introducing: The Premium Dura Soft Midweight Tee."
Ad tagline
Ad platforms like Meta and Google allow for quick and easy A/B testing of many different copy variations. This creates a self-reinforcing loop: insights from one round of ad taglines inform the next round of taglines and versions to test.
For example, in this ad, BN3TH could test either the primary text ("Numb crotch? No thanks.") or the headline ("Ride Longer & Comfier, Save 25% 🚵‍♂️").
Call-to-action text
Call-to-action (CTA) text can be tested on a website, in an ad, or in an email. Great calls to action help your audience finish the sentence "I want to …," so testing your CTA can help you understand the intent of the users on your page and improve conversion rates at every touchpoint.
For example, this exit intent pop-up form from Vahdam could test different copy on the button to see what gets more form submissions.
Product image type
Testing your product image type can help you understand what drives your product's conversions. Some products are more practical and benefit from simple images that highlight features, whereas others are more lifestyle-driven and benefit from first showing the product in the context of use.
For example, Blender Bottle could experiment with a landing page that shows a lifestyle photo, such as a person in professional attire leaving the gym on their way to the office, before showing the straightforward bottle photos with features. Zalora increased checkout rates by 12.3% by optimizing their product image presentation strategy.
Pricing and discounts
Pricing can be difficult to test from both a technical and a customer perspective. Most website A/B testing tools don't offer the ability to test prices, and these tests run the risk of annoying customers who buy at the higher test price and then learn someone else paid less. However, Shopify apps like Intelligems do allow for price testing, and a carefully executed pricing test can yield significant revenue improvements.
Alternatively, testing discount codes can be an effective way to get similar learnings. For example, a brand could roll out two marketing campaigns targeting the same audience, with the same ads, but a different discount offer, such as 25% versus $25 off. They could see which performs better based on both click-through rate and conversion rate. Xerox improved conversion rates for returning visitors by 60% through strategic discount testing.
Element removal
Element removal testing uses subtraction to improve focus and conversion rates on key pages. If a website has lots of different options for shopping or navigation, marketers will sometimes test hiding one option and measure the effect on conversion.
For example, LOLA tests how removing links to its blog, The Spot, from its navigation affects its purchase conversion rate.
Key considerations before A/B testing
Before diving into A/B testing, take time for proper planning and setup; reliable results depend on it.
Goal definition: Clearly define what success looks like for your test. Whether it's increased conversion rates, higher click-through rates, or improved engagement, having a specific metric in mind guides your entire testing process.
Hypothesis formulation: Develop a clear, testable hypothesis that predicts not just what will happen, but why. This foundation helps you design meaningful tests and interpret results accurately.
Controlled environment setup: Ensure your testing environment eliminates external variables that could skew results. This includes consistent traffic sources, stable website performance, and avoiding major promotional periods.
Data analysis requirements: Plan how you'll measure success before launching your test. Determine sample sizes needed for statistical significance and establish clear criteria for declaring a winner.
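To make the sample-size part of that last step concrete, here's a minimal Python sketch using the standard two-proportion sample-size formula; the function name and example numbers are illustrative assumptions, and a dedicated sample-size calculator will give you the same answer.

```python
from statistics import NormalDist

def sample_size_per_variant(baseline: float, relative_lift: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed per variant to detect a relative lift in conversion rate."""
    p1 = baseline                          # current conversion rate
    p2 = baseline * (1 + relative_lift)    # conversion rate you hope to detect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # 80% power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return round(numerator / (p1 - p2) ** 2)

# Detecting a 10% relative lift on a 3% baseline conversion rate
print(sample_size_per_variant(0.03, 0.10))  # ≈ 53,000 visitors per variant
```

Small lifts on low baseline rates require surprisingly large samples, which is why traffic volume often determines what you can realistically test.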
How to conduct an A/B test
Running A/B tests is a systematic process. Each test follows these five steps:

- Form a hypothesis
- Create test variations
- Select an audience
- Run the test
- Analyze the results
1. Form a hypothesis
A good A/B test starts with a theory about how to improve performance. This theory can be based on existing data or simply on your own intuition. To turn your theory into a hypothesis, state it in the form "I believe (making X change) will lead to improved (performance in Y metric)."
For example, "I believe enlarging the main product image on our product landing pages will lead to improved conversion rate."
Hypotheses don't need to specify the amount of improvement—they only need to be stated directionally.
2. Create test variations
This can be done in an ads manager, email platform, or website A/B testing tool. Variations should be labeled descriptively for easy analysis later. For example, instead of titling a new ad variant "Variant B," title it "Variant B—Emotional CTA."
3. Select an audience
A/B tests can be served to an entire audience or to a subset of your audience. On a website, for example, serving to the entire audience would mean half your website visitors see the original site and half see the new version you're testing. However, you might choose to only serve the new version to 25% of your audience, or only target visitors from Canada (in which case, half of Canadian visitors would see the original and half would see the new variant).
The right audience for you depends on what group you believe your hypothesis applies to and how quickly you'd like to gather enough data to make a conclusion.
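As a hypothetical illustration of a partial rollout, this Python snippet extends the hashing idea from earlier to serve a new version to only 25% of visitors; real testing tools expose this as a traffic-allocation setting.

```python
import hashlib

def sees_new_version(visitor_id: str, share: float = 0.25) -> bool:
    """Return True if this visitor should see the new version.

    The hash maps each visitor to a stable point in [0, 1); visitors
    below the cutoff see the new version, everyone else the original.
    """
    digest = hashlib.sha256(f"homepage-test:{visitor_id}".encode()).hexdigest()
    return (int(digest, 16) % 10_000) / 10_000 < share
```

A smaller allocation limits risk if the new version underperforms, but it also means the test needs more time to gather enough data.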
4. Run the test
Typically, marketers run an A/B test for at least two weeks to ensure reliable results. This allows enough time to smooth out coincidences and fluctuations, such as customers behaving differently on weekends. Avoid running multiple A/B tests on the same webpage or audience at the same time, or you could cloud your test results.
5. Analyze the results
When analyzing an A/B test, you are looking for a statistically significant result: one where the difference in performance is large enough, given your sample size, that it's unlikely to be a coincidence. The intuition is that with a small audience, you need to see a big difference in performance to draw a conclusion, while with a large audience, even a small difference can be conclusive.
Most dedicated A/B testing tools will calculate statistical significance for you. For other results, you can use an online statistical significance calculator, or compute it yourself, as in the sketch below.
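If you'd like to see what such a calculator does under the hood, here's a minimal Python sketch of the two-proportion z-test behind most of them; the conversion numbers are made up for illustration.

```python
from statistics import NormalDist

def p_value(conversions_a: int, visitors_a: int,
            conversions_b: int, visitors_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = (pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (rate_b - rate_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 2.8% vs. 3.4% conversion, 5,000 visitors per variant
print(f"p = {p_value(140, 5000, 170, 5000):.3f}")  # p ≈ 0.083
```

A p-value below 0.05 corresponds to 95% confidence; here the 0.6-point lift isn't yet conclusive, so the test would need to keep running.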
Once you've analyzed the results and shared them with your team, you're ready to prepare your next test.
A/B testing examples FAQ
Why is A/B testing important for ecommerce businesses?
A/B testing replaces opinion with data. It settles debates about copy, images, pricing, and layout conclusively, and the insights it yields inform decisions well beyond marketing, including UX, product development, branding, and sales.
How long does an A/B test typically last?
At least two weeks, in most cases. This smooths out short-term fluctuations, such as different weekend behavior, and gives the test time to collect a large enough sample for a statistically significant result.
What are some best practices for conducting A/B testing?
- Create a clear hypothesis with a predicted outcome
- Test one element at a time to isolate variables
- Ensure 95% statistical significance before declaring a winner
- Run tests for a minimum of 1,000 conversions per variation
- Account for seasonality and external factors when timing your test