Chapter 6

A/B Testing | The ABCs of Conversion Optimization

Would you implement a financial plan without a goal in mind, or without the ability to measure its effectiveness over time? I hope not. So, why would you sell web UI/UX design services to your clients without understanding their Key Performance Indicators (KPIs) and how you’re going to measure the effect your design changes have on them?

A/B (or multivariate) testing is the best way to do just that. This empirical approach to design is a great way to differentiate yourself from the crowd of others pitching run-of-the-mill redesigns. Through A/B testing, you can forge better, more strategic relationships with clients as you work together to refine and test hypotheses that contribute compounding value over time.

In this chapter, we’re going to dig into the factors behind successful A/B testing—what we at Rocket Code call performance optimization design—and explain how to deploy it as a high-value strategic competency for your business.

The power of “why”

Approaching ecommerce design with performance optimization in mind first is a beautiful thing. Why? Because the impact of every interface change can be tested and measured. No change is too small or too big. Anything is possible.

Being successful with performance optimization design, however, may require an evolution in your thinking about design. The key is to be deliberate about the questions you ask your clients and yourself—and the way to foster this evolved thinking is to start with why:

  • Why do you (or I) want to make this design change? 
  • What am I hoping to achieve from this design change? 
  • What’s wrong with the current design? 
  • What can this new design paradigm do better than the current one?

When you take this approach with clients, you can help them evolve their thinking about design too—and that’s when the real magic starts to happen.

While there are plenty of metrics that can be measured through A/B testing, the gold standard in ecommerce is revenue per visitor (RPV). It’s not conversion rate, and it’s not average order value. And definitely don’t build your testing around “vanity” metrics like click-throughs or email signups. Cut through the noise, and focus on RPV.
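To make the distinction concrete, here’s a quick sketch (with made-up numbers) showing how RPV folds conversion rate and average order value into a single figure. Move either lever and RPV moves with it:

```python
# Illustrative numbers only; not from any real store.
visitors = 10_000          # unique visitors in the period
orders = 250               # completed orders
revenue = 18_750.00        # total revenue from those orders

conversion_rate = orders / visitors        # 0.025, i.e. 2.5%
average_order_value = revenue / orders     # $75.00
revenue_per_visitor = revenue / visitors   # $1.875

# RPV captures both levers at once: RPV = conversion rate x AOV.
assert abs(revenue_per_visitor - conversion_rate * average_order_value) < 1e-9

print(f"Conversion rate: {conversion_rate:.1%}")
print(f"AOV: ${average_order_value:.2f}")
print(f"RPV: ${revenue_per_visitor:.2f}")
```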

Loading up your toolkit

You’re changing the way your clients think about design, and positioning yourself as the expert—great. What’s next? Pick a tool.

There are lots of tools to consider. At Rocket Code, we prefer Optimizely because it allows us to serve two masters: one, we can dive in deep with our UX and engineering teams; and two, it has tools we can put in the hands of our clients as we help them build a culture of testing within their own organization. 

Once you’ve chosen a tool, it’s time to get to work! In our experience, there are several key steps in the process. Each one is important, so no shortcuts.

Step 1: Investigate the user journey

Start by looking at the key steps in the user journey. We advise evaluating the following aspects of the ecommerce checkout flow and looking for ways to make things easier:

  • Product detail page 
  • Collection template 
  • Cart 
  • Checkout 
  • Navigation

First, develop a comprehensive map of how a customer navigates the site, from initial page visit to successful checkout. From there, brainstorm ways to improve the journey and boost RPV. To stockpile ideas, you can draw on the client’s analytics, usability testing, surveys, and user feedback, or simply your own (or your client’s) instincts and insights.

Got a great map of the checkout flow, but short on testing ideas? Let’s talk about a powerful methodology you can apply to identify testing possibilities.

Step 2: Reduce the attention ratio

In each area of the user journey, look for ways to reduce the attention ratio. This is the ratio of actions you want the user to take—usually just one—to the total number of actions they can take. Research on attention suggests that the more elements competing for a visitor’s attention, the less likely they are to focus on any single one. Ask yourself and your client, “How many interactive elements are hampering our customers from taking the one action we want them to take?”
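Here’s a minimal sketch of the idea. The element counts are hypothetical; the point is simply to count desired actions against everything else a visitor can click:

```python
# Hypothetical element counts for a product detail page.
def attention_ratio(desired_actions: int, total_interactive_elements: int) -> str:
    """Express the attention ratio as 'desired:total', e.g. '1:42'."""
    return f"{desired_actions}:{total_interactive_elements}"

# Before: one "Add to cart" button competing with header navigation,
# footer links, social icons, a newsletter form, and so on.
before = attention_ratio(desired_actions=1, total_interactive_elements=42)

# After: the same page with distractions stripped back.
after = attention_ratio(desired_actions=1, total_interactive_elements=12)

print(f"Before: {before}")  # 1:42
print(f"After:  {after}")   # 1:12, closer to the ideal of 1:1
```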

By mapping out the checkout flow, then applying a lens of attention-ratio optimization, you should have a healthy slate of potential experiments to run. Once you’ve amassed this list, you might be excited to get testing. But you can’t run 20 tests at once—and it’s definitely not advisable to do so, even if you could.

You need to prioritize those opportunities, but how? Although it might seem daunting, next I’ll outline a framework to do just that.

Step 3: Prioritize optimization experiments

Here’s how we prioritize plausible performance optimization opportunities for our clients:

  • First: high potential impact, low effort 
  • Next: high potential impact, high effort 
  • Last: low potential impact, low effort

Don’t recommend any low-potential-impact, high-effort tests. Trust me — your clients will thank you for it.
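As a rough illustration, here’s how that ordering might look in code. The backlog items and the impact/effort labels are hypothetical:

```python
# Hypothetical backlog; impact and effort labels come from your own judgment
# and the client's analytics.
experiments = [
    {"name": "Sticky add-to-cart on product pages", "impact": "high", "effort": "low"},
    {"name": "Rebuilt navigation paradigm",          "impact": "high", "effort": "high"},
    {"name": "Reordered cart upsell module",         "impact": "low",  "effort": "low"},
    {"name": "Animated button hover states",         "impact": "low",  "effort": "high"},
]

# Priority buckets: high impact/low effort first, then high/high, then low/low.
# Low impact plus high effort never makes the queue.
priority = {("high", "low"): 0, ("high", "high"): 1, ("low", "low"): 2}

queue = sorted(
    (e for e in experiments if (e["impact"], e["effort"]) in priority),
    key=lambda e: priority[(e["impact"], e["effort"])],
)

for rank, exp in enumerate(queue, start=1):
    print(f"{rank}. {exp['name']} (impact: {exp['impact']}, effort: {exp['effort']})")
```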

Similarly, skip mundane testing options like button colors in isolation. Instead, advocate a testing program that considers whole ecommerce interactions—things like add-to-cart interactions, navigation paradigms, search treatments, and collection page filters.

We believe in data and suggest you start there to further inform your prioritization — namely, with your clients’ web traffic data. Here are the questions, answered with that data, that we see as necessary starting points for effective prioritization:

  • How many visitors see a given interface? 
  • How much better would that interface need to perform (in terms of RPV) to be valuable for the client? 
  • Do I think I can achieve that result through a new design paradigm?

By using analytics to inform your recommended design changes, you neutralize emotions and let math do the talking. If your logic makes sense and seems achievable, you’re in a good position to earn the client’s go-ahead.
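Here’s the back-of-the-envelope version of those three questions, with hypothetical inputs. Swap in the client’s real analytics to see what lift an experiment would actually need to deliver:

```python
# Hypothetical inputs; replace with the client's real analytics.
monthly_visitors_to_page = 40_000   # visitors who see this interface each month
current_rpv = 1.85                  # current revenue per visitor, in dollars
target_monthly_gain = 6_000.00      # extra revenue that would make the work worthwhile

# How much better would the new design need to perform?
required_rpv_lift = target_monthly_gain / monthly_visitors_to_page   # $0.15 per visitor
required_relative_lift = required_rpv_lift / current_rpv             # about 8.1%

print(f"Required RPV lift: ${required_rpv_lift:.2f} per visitor")
print(f"Required relative lift: {required_relative_lift:.1%}")
# If a lift of that size from this design change feels plausible, the experiment
# earns a spot near the top of the queue; if not, move on.
```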

Step 4: Design and run your experiment(s)

So, you’ve selected your first test based on the results of your prioritization exercise. The final step before actually running the thing is to establish appropriate parameters to ensure useful results. You know the old saying, “garbage in, garbage out”? Let’s make sure that doesn’t happen to you.

Good experimental design is paramount to successful performance optimization design. Here are some key considerations for your experiments:

  • Don’t change and test too many variables at once. This undermines your ability to achieve true statistical significance, without which you won’t truly know which variant outperformed the rest. 
  • Without an adequate sample, you won’t get statistically significant results, so know how long the experiment needs to run before you start. We suggest running each experiment between seven and 15 days, and no longer than 20. At least a week captures the variance between weekdays and the weekend; beyond that, the timeframe is determined largely by the site’s traffic. Aim to push about 10,000 visitors through each experiment variant (see the sketch after this list). 
  • Don’t be tempted to “peek” at the data before the test has achieved statistical validity. Biases or expectations may tempt you to end a test before it’s ready. Be strong. 
  • You also don’t want to run a test for too long. Be prepared to stop the test once you’ve obtained enough samples. This way, your data’s less likely to be affected by seasonal factors.
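As a rough sketch of the duration math, assuming the heuristics above (roughly 10,000 visitors per variant, at least seven days, no more than 20) and a hypothetical traffic figure:

```python
import math

# Hypothetical traffic; the heuristics mirror the guidelines above.
daily_visitors = 3_200        # daily traffic to the page under test
variants = 2                  # control plus one challenger
target_per_variant = 10_000   # rough sample target per variant

days_for_sample = math.ceil(variants * target_per_variant / daily_visitors)
planned_days = max(7, days_for_sample)   # always cover a full weekday/weekend cycle

if planned_days > 20:
    print("Traffic is too thin for this split; test fewer variants or a higher-traffic page.")
else:
    print(f"Plan to run the experiment for about {planned_days} days.")
```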

To simplify things, your testing tool of choice may be able to determine when statistical significance has been reached—let it do the heavy lifting.

Step 5: Measure and report your results

You’ve run your test, obtained enough data to get a statistically significant result, and found that your variant outperformed your control in terms of RPV. Great! But your analysis doesn’t need to end there. Although RPV is the most important measure, you may learn more by looking at your results holistically and seeing how other metrics have shifted.

Segmentation can also be useful. Although overall results may not show a huge difference between variants, one might more noticeably outperform among certain segments of site visitors. These kinds of insights can help you devise solutions that drive more revenue among those segments, and give you ideas for future tests. Keep in mind, though, that the data within a segment also needs to have reached statistical significance or you can’t trust the results.
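As an illustration of segment-level analysis, here’s a minimal sketch using synthetic per-visitor revenue and a simple Welch’s t-test. Your testing tool will usually run this kind of check for you, but the principle is the same: evaluate each segment on its own, and only trust segments that reach significance with enough data.

```python
import numpy as np
from scipy import stats  # assumes SciPy is available

rng = np.random.default_rng(0)

def fake_revenue(visitors, conversion_rate, aov):
    """Synthetic per-visitor revenue: most visitors spend $0, buyers spend around the AOV."""
    buys = rng.random(visitors) < conversion_rate
    return buys * rng.normal(aov, aov * 0.3, visitors).clip(min=1)

# Hypothetical segments; in practice, pull real per-visitor revenue by segment.
segments = {
    "mobile":  (fake_revenue(6_000, 0.020, 70), fake_revenue(6_000, 0.026, 70)),
    "desktop": (fake_revenue(4_000, 0.030, 90), fake_revenue(4_000, 0.031, 90)),
}

for name, (control, variant) in segments.items():
    # Welch's t-test on per-visitor revenue between control and variant.
    _, p_value = stats.ttest_ind(variant, control, equal_var=False)
    verdict = "looks significant" if p_value < 0.05 else "not significant yet"
    print(f"{name}: control RPV ${control.mean():.2f}, "
          f"variant RPV ${variant.mean():.2f}, p={p_value:.3f} ({verdict})")
```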

Building a culture of testing

Being successful with performance optimization design means creating a culture of testing with your clients. You can do this by demonstrating the value of testing—being crystal clear about what you’re going to test and why, and the specific potential gains from implementing a superior treatment. It’s also crucial to get buy-in from across the client organization—this will ensure that everyone’s on the same path.

The proposition is deceptively simple: Your clients will be more successful if they can make more money from each of their site visitors. By designing smart tests that identify opportunities to enhance revenue, and getting your clients on board with data-driven hypothesis testing as an alternative to gut-based or ego-driven solutions, you’ll better set your clients—and yourself—up for success.


About the author

Ray Sylvester is a content specialist at Rocket Code, where they obsess about performance-driven interfaces, rock-solid engineering, and complete user experiences.
