What’s Next? Steps For Prioritizing Your Experiment Backlog



Prioritization is a critical skill to master when building out a testing program. It’s about making smart choices and applying discipline to the decision-making process.

In my experience helping companies build their test programs from scratch, as well as optimizing more mature programs, I’ve seen the benefits of adopting a rigorous prioritization scheme time and time again. Based on my experience working with Optimizely customers, here are three key steps to follow when bringing prioritization into your optimization strategy.

User Testing: A Pillar of Great Experiments


If this temple in Agrigento, Sicily, were an experiment, user testing would be one of its pillars.

If you’re working on optimizing your conversion rate, chances are, you’ve already done some testing on your website. Maybe it’s A/B testing or multivariate testing, or maybe you’ve run a heat map or two. These are all tools available to you to deliver the best experience possible to your visitors. In this article we’re going to talk about another one, user testing: what it is, and how you can use it to come up with really good hypotheses for your experiments.

Stats with Cats: 21 Terms Experimenters Need to Know



Statistics are the underpinning of our experiment results—they help us make an educated decision on a test result with incomplete data. In order to run statistically sound A/B tests, it’s essential to invest in an understanding of these key concepts.

Use this index of terms as a primer for future reading on statistics, and keep this glossary handy for your next deep dive into experiment results with your team. No prior knowledge of statistics needed.
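For readers who want a concrete anchor for terms like p-value and statistical significance before diving into the glossary, here is a minimal sketch of a classical two-proportion z-test in Python. The conversion numbers are made up for illustration, and this is the textbook fixed-horizon test, not any particular product’s statistics:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test.

    conv_a/conv_b: conversion counts; n_a/n_b: visitors per variation.
    Returns (z_score, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 20% vs. 26% conversion at 1,000 visitors each
z, p = two_proportion_z_test(200, 1000, 260, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the p-value falls below the conventional 0.05 significance level, which is what “statistically significant” means in most glossary entries you’ll encounter.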

Why An Experiment Without A Hypothesis is Dead On Arrival



Imagine you’re on a road trip. You packed the car, made a playlist, and set out to drive 600 miles, but you don’t actually know where you’re headed. When you arrive at a destination, it’s not at all what you imagined it would be.

Running an experiment without a hypothesis is like starting a road trip just for the sake of driving, without thinking about where you’re headed and why. You’ll inevitably end up somewhere, but there’s a chance you might not have gained anything from the experience.

In this post, we’ll show you how to craft great hypotheses, how they fit into your experiment planning, and what differentiates a strong hypothesis from a weak one.

How to Prioritize Your Test Ideas and Other Critical Questions


Kyle Rush

When I’m not running experiments on Optimizely’s conversion funnels, I love to interact with the optimization community. GrowthHackers has one of the best communities out there, and last week I hosted an Ask Me Anything (AMA). The questions were of very high quality and covered topics like running multiple tests at the same time, how to overcome technical hurdles, how multi-armed bandits can be helpful, what to do with inconclusive tests, and more.

If this piques your interest, have a read through the questions and, of course, continue to ask me anything.

Statistics for the Internet Age: The Story Behind Optimizely’s New Stats Engine



Classical statistical techniques, like the t-test, are the bedrock of the optimization industry, helping companies make data-driven decisions. As online experimentation has exploded, it’s now clear that these traditional statistical methods are not the right fit for digital data: Applying classical statistics to A/B testing can lead to error rates that are much higher than most experimenters expect. We’ve concluded that it’s time statistics, not customers, change.

Working with a team of Stanford statisticians, we developed Stats Engine, a new statistical framework for A/B testing. We’re excited to announce that starting January 21st, 2015, it powers results for all Optimizely customers.

This blog post is a long one, because we want to be fully transparent about why we’re making these changes, what the changes actually are, and what this means for A/B testing at large.
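The error-rate inflation mentioned above comes largely from “peeking”: re-running a fixed-horizon test as data arrives and stopping at the first significant result. A small simulation sketches the effect; this is an illustrative A/A test with made-up parameters, not a description of Stats Engine itself:

```python
import random
from math import erf, sqrt

def p_value(successes_a, successes_b, n):
    """Two-sided p-value for a two-proportion z-test with equal sample sizes."""
    p_a, p_b = successes_a / n, successes_b / n
    pool = (successes_a + successes_b) / (2 * n)
    se = sqrt(2 * pool * (1 - pool) / n)
    if se == 0:
        return 1.0
    z = abs(p_b - p_a) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

def run_aa_test(peek_every=100, max_n=2000, base_rate=0.1, rng=random):
    """Simulate an A/A test (both variations identical), peeking repeatedly.

    Returns True if any peek declares significance at p < 0.05 --
    by construction, that is always a false positive.
    """
    a = b = 0
    for i in range(1, max_n + 1):
        a += rng.random() < base_rate
        b += rng.random() < base_rate
        if i % peek_every == 0 and p_value(a, b, i) < 0.05:
            return True
    return False

random.seed(0)
trials = 500
false_positives = sum(run_aa_test() for _ in range(trials))
print(f"False positive rate with peeking: {false_positives / trials:.1%}")
```

Even though both variations are identical, checking significance at every 100 visitors pushes the realized false positive rate well above the nominal 5%, which is the mismatch between classical statistics and continuously monitored online data that the post describes.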

French Girls Loves Optimization



How many times have you seen Titanic? Enough to remember the moment Rose tells Jack to “draw me like one of your French girls”? Well, a group of iOS developers from Scranton, PA, remember… and they created an app inspired by it.

The app has risen in popularity over the last year, surpassing 1 million downloads in July 2014. With A/B testing, French Girls’ lean team is turning the majority of those downloads into actively engaged, activated users. Here’s how they’re doing it, lessons they’re learning along the way, and why they named the app French Girls.

7 Tips to Improve Mobile App Onboarding



Twelve. That’s the number of apps currently installed on my mobile phone that I haven’t used more than once. At one point, they caught my interest enough to install, but now they’re just gathering dust and taking up screen real estate.

Chances are, you have at least a few apps on your phone that fit the bill too. Today, 80-90% of downloaded apps are used once and then deleted. That’s why everything that happens after someone launches your app for the first time matters so much. Here are some ideas product managers can test in their app onboarding flows…

Optimizing Content: How Kevy Writes More Without Writing Worse


Workspace station

Brooke Beach has a challenge common to many marketers: producing a lot of content with limited resources, without sacrificing quality.

Sound familiar?

Her marketing team has come up with a system that combines data from website analytics, marketing automation, and live chat to help create the right content for the right audiences. Intrigued by how live chat contributes to this optimization equation, I talked to Brooke about how they go about it and the impact it’s had on the business.