Optimizely Blog


Sometimes, testing isn't as simple as winning or losing. In fact, there are times when a flat or losing result can lead to big insights. In these cases, the difference between learning something new and a failed test lies in the analysis, in refusing to take a result at face value.

A great example of this principle emerged while working with one of our clients.

The goal of testing was to increase conversions, as measured by form submissions. Before testing, the quote request form sat several clicks beyond the homepage. Placing the form on the homepage, we reasoned, would increase these conversions. This is a sound idea, grounded in best practices: reducing the path to conversion should increase the number of form submissions. Less friction, shorter funnel, clear win.

Unfortunately, this seemingly obvious win actually produced a confounding loss: a more than 20 percent reduction in overall form submissions. Though initially disappointing, further analysis revealed an important insight: the majority of visitors to the homepage and test experience were new, as opposed to returning, visitors. New visitors are most likely in the research phase. Returning visitors, however, have already done some research and are closer to the moment of conversion. That distinction, we decided, was worth testing.

Using custom segments and cookie targeting in Optimizely, we were able to run the test again, this time isolating it to returning visitors. The reasoning here, besides removing an overpowering segment from the test, was that returning visitors were more motivated to submit their information and would thus be more receptive to seeing a form on the homepage.
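To make the idea concrete, here is a minimal sketch of how returning-visitor detection via cookies can work. The cookie name `returning_visitor` and the parsing helper are illustrative assumptions, not Optimizely's actual targeting mechanism, which is configured in the product rather than hand-coded.

```javascript
// Sketch: cookie-based returning-visitor detection.
// The cookie name "returning_visitor" is hypothetical, for illustration only.

// Parse a raw cookie string ("a=1; b=2") into a name->value map.
function parseCookies(cookieHeader) {
  const jar = {};
  for (const pair of cookieHeader.split(";")) {
    const [name, ...rest] = pair.trim().split("=");
    if (name) jar[name] = rest.join("=");
  }
  return jar;
}

// A visitor qualifies for the returning-visitor segment only if a
// prior visit already set the flag cookie.
function isReturningVisitor(cookieHeader) {
  return parseCookies(cookieHeader)["returning_visitor"] === "true";
}
```

In a browser, the raw string would come from `document.cookie`, and a first visit would set the flag so that subsequent visits fall into the returning segment.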

This time, the test won decisively. Showing a form on the homepage to returning visitors produced a 48 percent lift among this segment—the most valuable in terms of conversions.
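For readers who want to see how a figure like that 48 percent is computed, here is a small sketch of relative lift from raw counts. The visitor and conversion counts below are hypothetical; the article reports only the final lift percentage.

```javascript
// Conversion rate from raw counts.
function conversionRate(conversions, visitors) {
  return conversions / visitors;
}

// Relative lift of a variation over the control, as a fraction
// (0.48 means a 48% improvement).
function relativeLift(controlRate, variationRate) {
  return (variationRate - controlRate) / controlRate;
}

// Hypothetical counts for a returning-visitor segment:
const control = conversionRate(50, 1000);   // form on inner page
const variation = conversionRate(74, 1000); // form on homepage
console.log(relativeLift(control, variation)); // ≈ 0.48, i.e. a 48% lift
```

A split-testing tool also checks that such a difference is statistically significant before declaring a winner; the arithmetic above only describes the size of the effect, not its reliability.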

There is a lot to learn about testing in this case:

1. The Importance of Analysis

First, it clearly illustrates the important relationship between learning and winning in testing. We all want to win, and it's deep insight that produces the best, most reliable wins: not just once, by chance, but over and over again.

2. The Value of the Iterative Approach

It also highlights the importance of an iterative approach to testing. Big, dramatic changes can produce huge, surprising wins—but they can also produce confusing losses. By taking several small steps towards a big change—and testing each one—it’s possible to gain a better understanding of why a variation is an improvement or failure.

3. The Power of a Tool That Encourages Experimentation

Having a testing tool that makes building test variations quick and easy is critical to this iterative approach. Unless dedicated developer support is available, an intuitive WYSIWYG interface for building tests is essential. As the case above illustrates, the flexibility to perform detailed analysis and advanced segmentation is also important.

When choosing a split testing tool, there are many questions to ask to ensure it’s right for your program (here’s a list of 10 of them). But regardless of the tool you choose, it’s important to develop a well-reasoned hypothesis, collect enough data for in-depth analysis, and keep iterating on past results.

Learn how you can keep improving your testing process and culture by downloading our white paper, The 5 Stages of Testing Ideation.
