In the long term, your A/B testing process is one of the most important factors driving the performance of your conversion optimization efforts.
I want to share three tips that will make your A/B testing process more impactful. We use these techniques on a daily basis to convince companies to invest in CRO, spot real customer problems, and run long-term optimization programs more efficiently.
As the following case studies reveal, there are huge wins to be had from thinking big and being open to questioning the status quo. There are also important revelations lurking in smaller tests that can point the way to a major redesign. And sometimes testing is the only way to find true north amidst the chaos and confusion of major changes.
E-commerce is a challenging industry; a saturated market and the mass availability of goods mean that successfully capturing customers has become less about the products you offer and more about the experience you deliver to each user.
In this post, we’ll share tips sourced directly from our community of optimizers about how to best optimize your e-commerce experience for every funnel stage, across every device, taking into account both conversion best practices and design guidelines.
Slack is on fire. The company has record-setting growth metrics: it is adding $1 million of ARR every 11 days, and as of April 2015, over 750,000 people use the team messaging app daily. The entire story looks quite impressive on charts. And it looks equally impressive through the eyes of Bill Macaitis, Slack’s Chief Marketing Officer.
Marketers can learn a lot from Slack’s approach to marketing, especially their laser focus on creating the highest quality user experience possible — and measuring it. I’ll share a few lessons marketers can take away from Bill’s AMA.
The story of Phineas Gage is one you may have heard in a college neuroscience class. While setting a powder charge in a rock outcropping, Phineas’ tamping iron unexpectedly sparked. The explosion propelled the 3 ½-foot iron bar into Phineas’ head—in through his left cheek, through his frontal lobe behind his left eye, then out through the top of his skull. Somehow, he survived.
So why would I write about this topic on a blog about optimization? Well, today’s marketing and product professionals would be wise to incorporate the “Phineas Gage” persona into their optimization strategies, because his post-accident behavior is shockingly similar to that of the average web visitor.
Andy Nelson manages the Growth Marketing team at Moz. In this interview, Andy shares great tips for starting out with A/B testing, why he’s excited about recent strides in statistics, and what he hopes people will learn from his session at Opticon 2015.
South by Southwest Interactive was good to the growth marketer this year. Sessions at this mega-conference/party are a gamble — there are so many to pick from that some inevitably turn out to be fluff, while others exceed your wildest expectations.
In this post, I’ll share insights from a few of the growth marketer sessions I attended.
If you’re working on optimizing your conversion rate, chances are you’ve already done some testing on your website. Maybe it’s A/B testing or multivariate testing, or maybe you’ve run a heat map or two. These are all tools available to you to deliver the best experience possible to your visitors. In this article, we’re going to talk about another one, user testing: what it is, and how you can use it to come up with strong hypotheses for your experiments.
Imagine you set out on a road trip. You packed the car, made a playlist, and started driving 600 miles — but you don’t actually know where you’re headed. When you arrive at a destination, it’s not at all what you imagined it would be.
Running an experiment without a hypothesis is like starting a road trip just for the sake of driving, without thinking about where you’re headed and why. You’ll inevitably end up somewhere, but there’s a chance you might not have gained anything from the experience.
In this post, we’ll show you how to craft great hypotheses, how they fit into your experiment planning, and what differentiates a strong hypothesis from a weak one.