Showing posts in “Using Optimizely”

Stats with Cats: 21 Terms Experimenters Need to Know


Statistics are the underpinning of our experiment results: they help us make informed decisions from a test even when the data are incomplete. To run statistically sound A/B tests, it’s essential to build an understanding of these key concepts.

Use this index of terms as a primer for future reading on statistics, and keep this glossary handy for your next deep dive into experiment results with your team. No prior knowledge of statistics needed.

Statistics for the Internet Age: The Story Behind Optimizely’s New Stats Engine


Classical statistical techniques, like the t-test, are the bedrock of the optimization industry, helping companies make data-driven decisions. As online experimentation has exploded, it has become clear that these traditional methods are not the right fit for digital data: applying classical statistics to A/B testing can lead to error rates far higher than most experimenters expect. We’ve concluded that it’s time for statistics, not customers, to change.
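The post doesn’t spell out the failure mode here, but a common illustration of how fixed-sample methods like the t-test break down online is continuous monitoring: checking results as visitors arrive and stopping the moment p < 0.05. The sketch below (illustrative parameters, not taken from the post) simulates A/A tests in Python to show how much this kind of peeking can inflate the false positive rate above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_experiments = 2000   # simulated A/A tests (no true difference between arms)
batch = 100            # new visitors per arm between each peek at the results
n_peeks = 20           # how many times the experimenter checks significance
alpha = 0.05           # nominal false positive rate of the t-test

false_positives = 0
for _ in range(n_experiments):
    a = np.empty(0)
    b = np.empty(0)
    for _ in range(n_peeks):
        # Both arms draw from the same distribution, so any
        # "significant" result is a false positive.
        a = np.concatenate([a, rng.normal(size=batch)])
        b = np.concatenate([b, rng.normal(size=batch)])
        _, p_value = stats.ttest_ind(a, b)
        if p_value < alpha:
            false_positives += 1
            break

print(f"False positive rate with peeking: {false_positives / n_experiments:.1%}")
# Typically well above the nominal 5% -- often 20% or more with this many peeks.
```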

Working with a team of Stanford statisticians, we developed Stats Engine, a new statistical framework for A/B testing. We’re excited to announce that starting January 21st, 2015, it powers results for all Optimizely customers.

This blog post is a long one because we want to be fully transparent about why we’re making these changes, what the changes actually are, and what they mean for A/B testing at large.