Optimizely Blog



“Negative test results are good. In fact, they’re the reason we test at all.”

– Isaac Gerber, Senior Manager of Analytics and Optimization, Wiley

Seems counterintuitive, especially when we typically associate ROI, revenue, and increased conversions with positive test results. But negative results can be just as impactful. In fact, Isaac recently uncovered a strategy for turning negative results into big wins that generate a tremendous amount of executive buy-in for his testing program – more on that below the fold.

If you like bold strategies and, like Isaac, aren’t afraid to try something new, you’re in the right place. We recently asked experts from top companies around the globe to share one unexpected tactic that has propelled their testing program to success. Here are four fearless pieces of advice from testers at Visa, Wiley, TripAdvisor, and Move, Inc.

1. “Don’t stress too much about the first test. Instead, use it to calibrate.” – Vinod Kartha, Director of Digital Products, Visa

What if your first test wasn’t actually your first test at all?

Vinod Kartha is a leader passionate about driving change at Visa, and one of his primary focus areas is optimization. Vinod’s advice is something he’s learned first-hand in building out the company’s optimization program: the first test is to calibrate; the second test is the real first experiment. This advice rings true for many types of “firsts” – the first test in a series, the first test on a key page or element, or even your first test overall. When you’re diving into uncharted waters, building in the flexibility to make changes to your initial plan can be hugely beneficial.

This flexibility helps Vinod’s team build stronger experiments by answering important questions upfront, and then quickly altering course when necessary. Some of these questions include:

- Is our hypothesis wrong or incomplete?

- Is the test geared toward the right audience?

- Is there an issue with the team or process in place to support the test?

Armed with these answers, the team is able to quickly recalibrate and come back with a stronger second test each time.

Often, teams want to wait for the “perfect conditions” to run the first experiment, says Vinod – higher page traffic, more supporting data, or more input from outside teams. In reality, the perfect conditions may never occur. What’s more important is to dive in and get started. The process may be a bit messy; it definitely won’t be perfect. But the learnings you gain simply by getting something out the door will help you better calibrate and get the most out of each successive experiment.

2. “Celebrate optimization in a big, creative way.” – Suma Warrier, Group Manager of Optimization and Experimentation, Move, Inc.


A screenshot of Suma’s optimization newsletter.

What if you could generate more enthusiasm and buy-in from important stakeholders and peers across your company, while also providing transparency into your team’s work?

Suma Warrier distributes a “Growth and Optimization” newsletter to teams across Move, Inc. on a quarterly basis. The newsletter shares a brief overview of the tests her team ran, as well as results and other learnings. Readers and stakeholders can also click through links provided in the newsletter to learn more about each specific test. The newsletter also provides clear visibility into annual testing goals and tracks monthly progress toward those goals.

According to Suma, this format has been a highly effective way to evangelize her team’s work across the organization, and get her peers excited and invested in the testing process.

3. “Set goals around consistency and stick to them.” – Brion Hickey, Former Director of E-Commerce, TripAdvisor

What if you could uncover more optimized pathways by simply testing more often?

Brion Hickey has managed optimization programs at companies large and small, most recently at TripAdvisor. His advice for building out a strong optimization muscle at any organization is to set goals – not only around ROI but also around consistency.

First, Brion recommends setting a goal that makes sense for your optimization program’s maturity level – for example, one test per week on key pages. Then, the most important step is to constantly hold yourself accountable to that goal. It sounds simple enough, but Brion says regular testing is one of the most surefire (and surprisingly under-utilized) ways to uncover more optimized pathways.

Rather than running a few tests on a page and calling it “optimized,” constantly challenging yourself and your team to question the status quo will help uncover big opportunities for improvement that weren’t even on your testing roadmap in the first place. Plus, Brion says, setting clear goals around consistency will help bake optimization into your team’s day-to-day workflow.

4. “Quick pivots can turn losing tests into wins.” – Isaac Gerber, Senior Manager of Analytics and Optimization, Wiley

Rather than throw a losing or inconclusive experiment out the door, what if you could quickly pivot on the learnings to get one of your biggest wins yet?

Isaac Gerber manages testing across Wiley’s products, including WileyPLUS, an online platform for teachers and students. The primary sales season for WileyPLUS is late summer, right before school programs start. Because of this limited window, Isaac has a very tight timeframe during which to optimize in order to maximize purchases.

This tight window has helped him master the skill of turning tests with negative results into potential winners. Last year, for instance, one of Isaac’s biggest experiments of the season totally tanked – resulting in a significant decrease in purchases at a critical time in Wiley’s e-commerce cycle. Rather than throw in the towel on the experiment and stick with the original experience, Isaac used the test data to inform a new hypothesis. This next test performed significantly better than both the original experience and the first variation.
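If you want to verify that a pivoted variation really did perform “significantly better” before declaring a win, a two-proportion z-test on conversion counts is one common check. The sketch below is illustrative only – the function name and the traffic and conversion figures are made-up assumptions, not Wiley’s actual data or method:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for H0: the two conversion
    rates are equal. Returns the z statistic and its p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # pooled conversion rate under the null hypothesis
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: 2.0% control vs. 2.6% pivoted variation
z, p = two_proportion_z(200, 10_000, 260, 10_000)
# a |z| above 1.96 (p < 0.05) is significant at the 95% level
print(z, p)
```

With numbers like these, the lift clears the conventional 95% significance bar; with smaller samples or a smaller lift, the same observed difference might not, which is exactly why a quick pivot on inconclusive data needs its own follow-up test.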

According to Isaac, this quick pivot is responsible for the team’s success in capturing most of the sales season and maximizing revenue from the fall campaign. Not only that, Isaac says that sharing “the story of the failed experiment” has helped him better illustrate the value of quick, continuous optimization cycles to internal stakeholders and peers. In fact, Isaac found that sharing negative results was actually an easier way to show people the value of testing – and has gotten more executive buy-in using this strategy than ever before.

What bold optimization strategies do you plan to put into action this year?

Share your plans or what’s already worked for your team in the comments. For more inspiration and an in-depth presentation from each expert featured here, be sure to check out the session recordings and slides from The Optimizely Virtual Experience.
