Stats with Cats: 21 Terms Experimenters Need to Know


Statistics underpin our experiment results: they help us make educated decisions about a test even when the data is incomplete. To run statistically sound A/B tests, it's essential to invest in an understanding of these key concepts.

Use this index of terms as a primer for future reading on statistics, and keep this glossary handy for your next deep dive into experiment results with your team. No prior knowledge of statistics needed.

Why An Experiment Without A Hypothesis is Dead On Arrival


Imagine you set out on a road trip. You packed the car, made a playlist, and planned to drive 600 miles, but you don't actually know where you're headed. When you finally arrive somewhere, it's not at all what you imagined it would be.

Running an experiment without a hypothesis is like starting a road trip just for the sake of driving, without thinking about where you’re headed and why. You’ll inevitably end up somewhere, but there’s a chance you might not have gained anything from the experience.

In this post, we’ll show you how to craft great hypotheses, how they fit into your experiment planning, and what differentiates a strong hypothesis from a weak one.

How to Prioritize Your Test Ideas and Other Critical Questions

Kyle Rush

When I’m not running experiments on Optimizely’s conversion funnels, I love to interact with the optimization community. GrowthHackers has one of the best communities out there, and last week I hosted an Ask Me Anything (AMA). The questions were high quality and covered topics like running multiple tests at the same time, overcoming technical hurdles, how multi-armed bandits can be helpful, what to do with inconclusive tests, and more.

If this piques your interest, have a read through the questions and, of course, continue to ask me anything.

Statistics for the Internet Age: The Story Behind Optimizely’s New Stats Engine


Classical statistical techniques, like the t-test, are the bedrock of the optimization industry, helping companies make data-driven decisions. As online experimentation has exploded, it has become clear that these traditional statistical methods are not the right fit for digital data: applying classical statistics to A/B testing can lead to error rates that are much higher than most experimenters expect. We’ve concluded that it’s time for statistics, not customers, to change.
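To see one way those error rates inflate, consider the common habit of checking a running test repeatedly and stopping as soon as it looks significant. The sketch below is a plain A/A simulation in Python, not Optimizely's Stats Engine, and every parameter value is illustrative; it shows how "peeking" at a fixed-significance t-test can push the false positive rate well above the nominal 5% even when the two variations are identical.

# Minimal A/A simulation (illustrative only, not Optimizely's method):
# peeking at a classical t-test as data arrives inflates false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2000      # simulated A/A tests (no real difference exists)
n_visitors = 5000         # visitors per variation
check_every = 500         # how often the experimenter "peeks" at results

false_positives = 0
for _ in range(n_experiments):
    a = rng.normal(0, 1, n_visitors)   # variation A
    b = rng.normal(0, 1, n_visitors)   # variation B, drawn from the same distribution
    for n in range(check_every, n_visitors + 1, check_every):
        _, p = stats.ttest_ind(a[:n], b[:n])
        if p < 0.05:                   # stop as soon as the test looks significant
            false_positives += 1
            break

print(f"False positive rate with peeking: {false_positives / n_experiments:.1%}")
# Prints a rate well above the 5% a fixed-horizon t-test promises.

A fixed-horizon t-test only controls its error rate if you decide the sample size in advance and look once; checking continuously, as dashboards encourage, breaks that assumption, which is the gap a sequential approach like Stats Engine is designed to close.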

Working with a team of Stanford statisticians, we developed Stats Engine, a new statistical framework for A/B testing. We’re excited to announce that starting January 21st, 2015, it powers results for all Optimizely customers.

This blog post is a long one, because we want to be fully transparent about why we’re making these changes, what the changes actually are, and what this means for A/B testing at large.

French Girls Loves Optimization

Scotch Mornington

How many times have you seen Titanic? Enough to remember the moment Rose tells Jack to “draw me like one of your French girls”? Well, a group of iOS developers from Scranton, PA remember… and they created an app inspired by it.

The app has risen in popularity over the last year, surpassing 1 million downloads in July 2014. With A/B testing, French Girls’ lean team is turning the majority of those downloads into actively engaged, activated users. Here’s how they’re doing it, lessons they’re learning along the way, and why they named the app French Girls.