Anyone with A/B testing experience will tell you that producing winning tests is tricky to do consistently. Our friends at Qualaroo have plenty of experience helping companies scale up their marketing efforts with A/B testing and optimization.
Sean Ellis, CEO and Co-Founder of Qualaroo, and Ryan Lillis, a Strategic Optimization Consultant at Optimizely, have an excellent five-step framework for setting up better tests.
Let’s dig in!
Committing to A/B Testing
Like any marketing or growth activity, A/B testing takes an investment of time and resources from you and potentially other members of your team. So why invest?
This is why you need to spend time on A/B testing … "92$ spend on acquisition for every 1$ on optimization" #winningtests
— DeAnna Sposito (@deannacmj) June 25, 2014
In short, because A/B testing and optimization will improve the effectiveness of other campaigns you’re running. If you spend money to drive traffic to your website, mobile application, or landing pages, your time will be well-spent leveraging optimization to improve the performance of those customer interactions.
“By focusing efforts and dollars on improving your conversion rates, you’re going to find that a lot more channels open up … that was key from my experience at Dropbox, LogMeIn, and Eventbrite.” —Sean Ellis
Now that we’ve established the value of successful testing, let’s discuss how to properly run an optimization program on your website or mobile app.
So tell me about this Framework …
Sean recommends thinking about optimization as a cyclical process that incorporates both quantitative and qualitative insights about your audience.
The next five steps will walk through how to reach an understanding of what people are actually doing on a website, and then move into an assessment of why they’re doing it.
Many testers start their optimization process with a brainstorm of test ideas, such as competing opinions on how to improve the homepage. Although it’s valuable to start thinking about what could be tested, this is an uninformed process that doesn’t necessarily lead to strong results.
It’s important to note that if you brainstorm and test randomly, you may not necessarily know why a variation won or lost against your original. You should anticipate that many tests you run will be inconclusive, or even lose against your original. Conducting research prior to running tests (those quantitative and qualitative insights) will help to ensure that even inconclusive tests are a learning opportunity.
Step 1: Find Your Best Opportunities
You should begin your testing research with looking at where to test. Use your analytics to uncover the following:
- Top 5 highest bounce rate pages
- Top 5 abandonment points in your funnel
- Top 5 most valuable pages to your business
The bounce rate pages and abandonment points are quantitative methods of finding good opportunities for running experiments—your analytics software should have reports for each.
The pages with the highest bounce rate signify a page where visitors aren’t finding what they’re looking for, or are frustrated by not being able to take an action that they want to.
For abandonment points, look at places on your site where you lose the most traffic.
Try Ryan’s tip: “Take a look at user pathing reports in your analytics. Look at a pathing report that goes backwards from your final conversion goal to understand what the steps were that many users took prior to reaching that page. You can start to understand what the reasons were that they ended up at that final conversion point.”
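To make the abandonment-point idea concrete, here’s a minimal Python sketch that ranks the drop-off between funnel steps. The step names and visitor counts are hypothetical; in practice you’d pull these numbers from your analytics tool’s funnel report.

```python
# Sketch: ranking funnel abandonment points from per-step visitor counts.
# The step names and counts below are made-up examples, not real data.
funnel = [
    ("Home", 10000),
    ("Product page", 6200),
    ("Cart", 1800),
    ("Checkout", 900),
    ("Purchase", 650),
]

drop_offs = []
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    abandonment = 1 - next_count / count  # share of visitors lost at this transition
    drop_offs.append((step, next_step, abandonment))

# Sort so the worst abandonment point (your best test opportunity) comes first
drop_offs.sort(key=lambda t: t[2], reverse=True)
for step, next_step, rate in drop_offs:
    print(f"{step} -> {next_step}: {rate:.0%} abandoned")
```

In this made-up funnel, the product-page-to-cart transition loses the most visitors, so that’s where you’d focus your research first.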
A third, more qualitative method of finding a high-priority test area is to assess which pages are most valuable to your business.
For instance, Ryan walked through what this might mean for an e-commerce site: “It might be a checkout funnel, but it might also be product pages in e-commerce. They are higher traffic, and also more qualified traffic, since they’re interested in the product, but aren’t convinced yet … You’ll want to run tests that encourage visitors to take that next step and ultimately convert.”
After you’ve identified your top pages for improvement, the best next step is to conduct research on those pages to understand what could be improved. This comes in step 2…
Step 2: Understand Visitor Needs
According to Sean, there are four key questions that you should be asking your visitors to understand their needs and motivations:
- Why did they come to your website?
- What stopped them from converting?
- Did they find what they were looking for?
- If they did convert, what almost stopped them?
“Engage with your customers to understand the decisions they’re making, why they’re doing certain things, why they’re not doing certain things.” —Sean Ellis
According to Sean, the first question in this lineup targets visitor intent. “If you can understand what someone’s intent is, and you can help them realize that intent, you’re much more likely to drive a result than if you have to go out and build demand and drive interest.”
To understand intent, look at intent on both the visit level and the page level. On-site survey platforms like Qualaroo allow you to assess motivation (intent) from your visitors at key points in a website funnel.
Sean also recommended adding exit surveys to the page right before a final conversion, or to pages with high bounce rates, to understand what causes visitors to drop off.
There are a host of other tools and questions that we’d recommend for conducting research on areas you’d like to test in the Guide to Building your Data DNA. Qualitative data can also come from other sources, like interviewing a customer for feedback on your website flow and taking notes on their reactions, or talking with your customer support team to understand where visitors have issues completing actions.
Step 3: Use Data & Insights to Inform Testing
Now that you’ve successfully identified areas to test and conducted research to obtain some qualitative visitor data, it’s time to lay the groundwork for prioritizing and running your tests.
According to Ryan, there are a few key tips for kickstarting your optimization program that you should consider:
Start small to get a win and validation for process: Tests that you can set up in minutes are key. Try out something like a headline or image change. The goal of this experiment should be to set up and run a test that can drive business impact and help validate the process to your team.
Test for impact, avoid ‘Meek Tweaking’: You’ll need to make big, fearless changes in order to detect a measurable effect. Minor tweaks are rarely helpful unless they’re subtle changes backed by user research. Ryan’s reminder for these tests: “If you’ve run a few minor tests and you haven’t seen much of an effect, it’s not necessarily a failure, but it could be that the tests you’re running are too small to affect user behavior.”
A more significant change could be a redesign, or a change to your checkout flow, such as breaking a large form into a multi-step flow. Whether the change turns out to be good or bad, you’ll be able to detect its effect and act on your results that much more quickly.
Ultimately, Ryan recommends a mix of tests that are easy to set up. Smaller messaging tests can be helpful, but couple them with some more significant changes.
Your first ten experiments are all about building momentum for your team moving forward.
Step 4: Run Your First Tests
Now, let’s put your data and advice on getting started into action.
Sean compares running your first tests to a football coach scripting the team’s first plays: “You know you can’t script the first 100 plays, because ultimately, you don’t know what the other team is going to do to react. Similarly, [in optimization] you don’t necessarily know which tests are going to be successful or not, but you want to be very calculated in scripting your first ten tests.”
The First Ten Tests
- 4 Message Tests
- 4 “Aha Moment” Tests
- 2 Large Scale Design Tests
Message Tests: These are tests where you use messaging to address pain points or clarify areas of confusion. Sean shared an example where he and his team improved downloads of a piece of software after discovering that many users weren’t downloading because they didn’t believe the software was free. This was easily addressed with messaging, tripling the download conversion rate.
“Aha Moment” Tests: These tests are about key conversion milestones — where once the user takes that step, they are much more likely to become a valuable customer. For Qualaroo, Sean mentioned that visitors seeing their first set of survey results is very important for user retention and satisfaction. This could be watching an intro video, or something else entirely. Focus on getting your visitors to these moments faster.
Large Scale Design Tests: To avoid ‘Meek Tweaking,’ build a couple of larger scale experiments into your first ten tests. This could be rearranging the layout of a landing page to make it drastically different, or completely overhauling your sign up flow. These tests tend to be dramatically different, and can lead to larger breakthroughs in your testing than small changes.
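If you’re curious what happens under the hood when a testing tool splits traffic between an original and a variation, here’s a rough sketch of deterministic variant assignment. The function name and the 50/50 split are illustrative assumptions; platforms like Optimizely handle this for you.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("original", "variation")) -> str:
    """Deterministically bucket a user into a variant.

    Hashing user_id together with the experiment name keeps each user's
    assignment stable across visits, while different experiments get
    independent splits. Sketch only; real A/B platforms do this for you.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]
```

The key property is that the same visitor always sees the same variant for a given experiment, which keeps your results from being contaminated by users bouncing between experiences.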
Step 5: Measure Success
Now that you’ve started to run your experiments, it’s essential to carefully evaluate the outcome of each test and decide on the next steps. Ryan’s reminder is that the ultimate goal of an A/B test is to implement the winning variation, if there is one.
When examining your experiment results, you’re looking for a winning variation to show a lift with 95% statistical significance and 80% statistical power (these are the industry standards for evaluating whether your results are valid).
Learn more about how to compute the amount of traffic you’ll need for a test with the help of a Sample Size Calculator.
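As a rough illustration of what such a calculator does, here’s a sketch of the standard normal-approximation sample size formula for comparing two conversion rates. The function name is ours, and real calculators may apply additional corrections, so treat this as an estimate rather than a definitive answer.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline: float, mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Estimate visitors needed per variant for a two-proportion test.

    baseline: current conversion rate (e.g. 0.05 for 5%)
    mde: minimum detectable effect, absolute (e.g. 0.01 for +1 point)
    Standard normal-approximation formula; a sketch, not a stats engine.
    """
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% significance
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / mde ** 2)
```

Note that doubling the minimum detectable effect cuts the required sample size by roughly a factor of four, which is why bigger changes reach significance faster on the same traffic.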
Keep in mind the following tips from Ryan and Sean when evaluating your test results:
- Measure impact through the entire funnel, not just at the page level. Just because you’re running a test on the product page doesn’t mean that test only has an effect on that page. Make sure the lift you see carries through to your final conversion goal.
- Bigger changes yield faster results. The larger a change you make, the faster you’ll get to statistical significance with your results, because the minimum detectable effect is larger.
- Different levels of statistical significance will speed up or slow down decision making. 95% statistical confidence is the number you’re looking for (the industry standard for A/B testing).
In order to take action on the data coming in from your experiments, you’ll need to get the team’s buy-in. In Ryan’s experience, the higher level of statistical significance, the more comfortable your team will be with taking action on those results.
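For intuition about what “95% statistical significance” means in practice, here’s a sketch of a simple two-proportion z-test on conversion counts. The function name and sample numbers are hypothetical, and your testing platform’s statistics engine (which may use different methods) should be your source of truth.

```python
from math import sqrt
from statistics import NormalDist

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates
    (two-proportion z-test with pooled variance). Sketch only.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # rate under "no difference"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Declare a winner only when p < 0.05, i.e. 95% statistical significance
p = ab_test_p_value(conv_a=500, n_a=10000, conv_b=590, n_b=10000)
```

With these made-up numbers (a 5.0% original against a 5.9% variation over 10,000 visitors each), the p-value falls below 0.05; a much smaller lift on the same traffic would not.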
Remember, traffic is the currency for your A/B tests!
"The goal is to run as many high quality tests as your traffic allows." #winningtests
— Denise Chan (@denisechan26) June 25, 2014
Putting It All Together
Remember, optimization is an insights-driven process. If you’re optimizing successfully, you’ll be delivering continuous improvements on metrics that are core to your business’s success. Be sure to build both qualitative and quantitative insights into the process of planning your testing strategy.
Now, it’s time to get started:
- Plan and run your first five to ten tests.
- Get your first win.
- Share the results with your team—even if the only outcome was that you just learned something really interesting.
- Start the cycle of continuous improvement.