
I’m Becca Bruggman, Optimizely’s Experimentation Program Manager. My job is to make sure we are “drinking our own champagne” and running a best-in-class experimentation program.

This is the fourth installment of a six-part series designed to help you run a best-in-class experimentation program. In the series we cover everything you need to build your program, develop it into a well-oiled machine that runs efficiently, and make it visible across your organization.

In a previous blog post, I shared ideation tactics that should have you swimming in experiment ideas from all parts of your organization! Now, we need to figure out how to prioritize all those experiments and ensure the entire organization is creating high-impact experiments. 

A prioritization framework is helpful when you have a high velocity of experiments. It is also helpful when many experiments target the same area of the site, because you want the trade-offs being made to be transparent across the organization.

At Optimizely, we don’t have a hard prioritization framework, given the overall number and types of experiments we launch, the dedicated resources we have in place, and our program visibility (more on that in a later post!).

To support our lightweight prioritization needs, Optimizely runs a weekly meeting called Experiment Review to ensure experiments are ready to be launched from a hypothesis, metrics and set-up perspective.

For me, there isn’t a hard and fast rule for when you need a prioritization framework. However, if you’re running your program in an iterative manner and you start to see a higher velocity of experimentation, especially in key places, you will start to run into potential collisions. When this occurs, you will need a way to share with your team how you determined which ideas are prioritized.

Arriving at this point is a good thing! It means your team has a ton of ideas and enthusiasm for experimentation. You will often reach this point sooner if experiments have a long lead time, i.e. a lot of time to build, QA and launch. To make the move to a stricter prioritization framework easier, I recommend getting your team into the habit of scoring experiments early, so that when you do need to prioritize certain experiments over others, you will already have that institutional muscle memory.

When you do need to prioritize experiments, a scoring and voting structure can help democratize the process. It allows everyone to voice their opinion on what they think would have the biggest impact. If you’re using Optimizely’s Program Management, scoring is built directly into the system:

Scoring in Optimizely’s Program Management

This can also be included in an experiment roadmap within a spreadsheet by adding scoring columns next to each experiment idea. This is a great option if you’re just getting set up, and it is included in the experiment roadmap template from the last blog post.


Scoring in the Experiment Roadmap Template
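If you’re building that spreadsheet yourself, here is a minimal sketch of how scoring columns can be turned into a ranked list. It assumes an ICE-style rubric (Impact, Confidence, Ease, each scored 1–10), which is a common scoring approach but not necessarily the exact columns used in the template:

```python
# Minimal sketch: ICE-style scoring for experiment ideas in a roadmap.
# The field names and the 1-10 scale are illustrative assumptions,
# not the exact columns in the roadmap template.
from dataclasses import dataclass

@dataclass
class ExperimentIdea:
    name: str
    impact: int      # expected lift if the hypothesis is true (1-10)
    confidence: int  # how sure we are the hypothesis holds (1-10)
    ease: int        # how cheap it is to build, QA and launch (1-10)

    @property
    def score(self) -> float:
        # Simple average of the three factors.
        return (self.impact + self.confidence + self.ease) / 3

ideas = [
    ExperimentIdea("Shorter sign-up form", impact=8, confidence=6, ease=7),
    ExperimentIdea("New homepage hero", impact=5, confidence=4, ease=3),
]

# Rank ideas from highest to lowest score.
for idea in sorted(ideas, key=lambda i: i.score, reverse=True):
    print(f"{idea.name}: {idea.score:.1f}")
```

The averaging here is deliberately simple; some teams multiply the factors instead so that a single low score pushes an idea further down the list.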

 

Having a prioritization and scoring framework is especially helpful when you have an individual team model (i.e. you get ideas from a lot of places and a single group prioritizes and builds the experiments). Because you’ll be getting a high number of ideas from across the organization, having a way to communicate back how the core team that builds and QAs prioritized certain experiments over others, and what’s coming next, will aid visibility, trust and transparency. If people feel like their ideas are going into a black hole, they will be less likely to submit more in the future. I’d recommend that whatever prioritization framework you decide on, and the experimentation roadmap you create, be socialized across your organization and visible to everyone on an ongoing basis.

For a Center of Excellence, and other more distributed program models, a framework that teams can use to balance experiments against other work in flight can also be helpful. Individual teams within a distributed model will often have their own frameworks for prioritizing other tasks against experimentation, but having guidance to offer them as a starting point is still valuable.

As I mentioned above, a method we use at Optimizely for both prioritization and up-leveling overall experimentation best practices across the organization is a weekly meeting we call Experiment Review. This meeting has evolved over time: it started as an optional review for Product team experiments only, then became required for all Product experiments, and is now required for all Product and Marketing experiments.

In this meeting, anyone looking to run an experiment can come to share their hypothesis, metrics and experiment set-up to ensure they are approaching the experiment in the best way possible. I have the person who submitted the idea or who is looking to get it prioritized and launched present the idea to the group. This often looks like ensuring the hypothesis is well-written, using something similar to this framework:

A hypothesis is a prediction you create prior to running an experiment. The common format is: 

If [cause], then [effect], because [rationale].
In the world of experience optimization, strong hypotheses consist of three distinct parts: a definition of the problem, a proposed solution, and a result. (Source)

It’s also important to ensure the metrics that are being tracked align to the hypothesis and that the type of campaign is the best approach for proving/disproving the hypothesis. Depending on the experiment, we will sometimes review the design and give feedback. However, for Product experiments we have mostly moved away from this to allow time to focus on the experiment set-up itself and the Designers to focus on ensuring the look and feel align with our design patterns.

Once someone has presented their experiment at Experiment Review, they can receive the green light to move forward with launching it, or they may have specific feedback they need to incorporate before launching.

Experiment Review serves multiple functions:

  • Lightweight gating and prioritization for experiments 
  • Ensuring a high quality bar is met with experiments being launched
  • Up-leveling hypothesis thinking and experiment set-up across the organization
  • Visibility for what is about to be launched

Any chance I get, I will encourage people across the organization to attend Experiment Review. I use it as my “call to action” in communications and presentations. At Optimizely, Experiment Review is especially impactful because it gives our Go-To-Market (Sales, Success, Marketing) teams insight into what an experimentation program looks like in practice. Beyond that, it gives everyone the opportunity to learn more about the program and voice their opinion about how the product can be improved for customers via experimentation.

I also have a running agenda and notes document that is visible to everyone at Optimizely, which I’ve included in my Experimentation Program Toolkit. In addition to plugging Experiment Review whenever I get the chance, I share the agenda ahead of the meeting in a dedicated Slack channel, using a consistent format, so everyone knows what’s coming up this week:

The weekly Experiment Review agenda shared in Slack

As you can see above, this channel can also be used as another place to share and celebrate wins!
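If you want to automate that weekly post, here is a minimal sketch using a Slack incoming webhook. The webhook URL and agenda items are placeholders, and the message format is an assumption rather than the exact one we use:

```python
# Minimal sketch: posting the weekly Experiment Review agenda to a Slack
# channel via an incoming webhook. The URL and agenda items below are
# placeholders; the message layout is an illustrative assumption.
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

agenda = [
    "New pricing page CTA: hypothesis and metrics review",
    "Onboarding checklist experiment: set-up review",
]

message = "*Experiment Review: this week's agenda*\n" + "\n".join(
    f"• {item}" for item in agenda
)

# Slack incoming webhooks accept a simple JSON payload with a "text" field.
response = requests.post(WEBHOOK_URL, json={"text": message}, timeout=10)
response.raise_for_status()
```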

You can find all templates noted above [here]. 

How does your team decide what experiments to run next? Comment below or tweet me @bexcitement.

See you in the next post on making all your awesome experimentation work visible!