Last week, we hosted a developer meetup at Optimizely NYC with engineers, product managers, and analysts from the New York area. Folks braved the rain to hear John Cline, Engineering Lead – Growth at Blue Apron, speak about Blue Apron's experimentation program.
John’s talk covered how Blue Apron overcame the challenge of setting up online experiments for a business that has a lot of offline operations. Below are some of the key takeaways from the talk along with a recording. The slides are also available on Slideshare.
For businesses that have offline operations, client-side testing may not be sufficient
Due to the nature of Blue Apron's business, they run several backend jobs to handle things such as charging recurring subscriptions, turning a digital order into a physical package, and sending recipe-reminder emails. With client-side testing alone, Blue Apron can't run experiments on these flows. Optimizely Full Stack, with its server-side SDKs, gives Blue Apron the ability to run experiments that touch their backend jobs. For instance, they tested changing the number of recipes available per week and saw an increase in order completions as a result.
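To make the server-side idea concrete, here is a minimal sketch of how a backend job can branch on an experiment assignment. The experiment key, variation names, and recipe counts below are illustrative assumptions, not Blue Apron's actual setup, and the hash-based `bucket` helper is a stand-in for what an SDK would do for you:

```python
import hashlib

# Hypothetical sketch: deterministic server-side bucketing so that every
# backend job sees the same assignment for a given user.
def bucket(user_id: str, experiment_key: str, variations: list) -> str:
    digest = hashlib.sha256(f"{experiment_key}:{user_id}".encode()).hexdigest()
    return variations[int(digest, 16) % len(variations)]

# A backend job (e.g. weekly menu generation) branches on the assignment.
# "recipe_count_test" and the counts 6/8 are made-up example values.
def recipes_per_week(user_id: str) -> int:
    variation = bucket(user_id, "recipe_count_test", ["control", "more_recipes"])
    return 8 if variation == "more_recipes" else 6
```

Because the assignment is a pure function of user and experiment, the subscription-billing job, the packaging job, and the email job all agree on which variation a user is in without coordinating with each other.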
Integrating Optimizely with your data warehouse enables better debugging and deeper analysis
Blue Apron sends their experimentation and product analytics data from Amplitude into their data warehouse, which helped them debug much faster when experiments weren't being activated correctly. Additionally, since lifetime value measurement is so critical to Blue Apron's subscription model, their analysts can use the warehouse data to look at the long-term impact of experiments.
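Once exposure events and order data live in the same warehouse, the long-term analysis is essentially a join and aggregate. This is a toy sketch with made-up rows, not Blue Apron's schema, just to show the shape of the computation:

```python
from collections import defaultdict

# Illustrative rows only: (user_id, variation) exposure events and
# (user_id, revenue) orders that may arrive months after the experiment.
exposures = [("u1", "control"), ("u2", "treatment"), ("u3", "treatment")]
orders = [("u1", 60.0), ("u2", 60.0), ("u2", 60.0), ("u3", 120.0)]

variation_by_user = dict(exposures)
revenue = defaultdict(float)
users = defaultdict(set)
for user_id, amount in orders:
    if user_id in variation_by_user:  # attribute revenue to the exposed variation
        v = variation_by_user[user_id]
        revenue[v] += amount
        users[v].add(user_id)

# Average long-term revenue per exposed user, by variation
ltv = {v: revenue[v] / len(users[v]) for v in revenue}
print(ltv)  # {'control': 60.0, 'treatment': 120.0}
```

In practice this would be a SQL query over warehouse tables, but the logic is the same: join exposures to downstream revenue, then compare per-variation averages over whatever time horizon matters to the subscription model.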
Measuring your experiments in real-time can help avoid running harmful experiments for too long
Using Optimizely Stats Engine, which displays experiment results in real time, Blue Apron quickly discovered that a redesign was hurting performance. Because they could view the data instantly, they were able to shut down the experiment much faster.
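The shut-it-down decision can be thought of as a guardrail check. The sketch below is an assumption-laden simplification (a fixed relative-lift threshold, not how Stats Engine actually computes significance) meant only to illustrate the pattern of polling results and stopping a harmful experiment early:

```python
# Hypothetical guardrail: stop the experiment if the treatment's conversion
# rate falls more than 10% (relative) below control. The threshold and the
# metric are illustrative; real sequential testing is more sophisticated.
def should_stop(control_rate: float, treatment_rate: float,
                min_relative_lift: float = -0.10) -> bool:
    if control_rate == 0:
        return False  # no baseline to compare against yet
    relative_lift = (treatment_rate - control_rate) / control_rate
    return relative_lift < min_relative_lift

# A redesign converting at 4.2% vs control at 5.0% is a -16% relative lift,
# so the check fires; a -2% dip does not.
assert should_stop(0.050, 0.042) is True
assert should_stop(0.050, 0.049) is False
```

The value of real-time results is that a check like this can run every time fresh data lands, instead of waiting for a scheduled batch report while a harmful experiment keeps costing orders.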
Having a plan for managing technical debt is key
With Optimizely Full Stack, experiments are typically set up by adding conditional logic to application code, which can lead to technical debt if not managed carefully. John recommended setting up a regular cadence (e.g., quarterly) to remove code for inactive experiments and replace it with the winning variation's code path.
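A before-and-after sketch of that cleanup, with hypothetical names (the variation string and render messages are made up, and the `variation` argument stands in for whatever an SDK call would return):

```python
# While the experiment is running, the application branches on the variation:
def checkout_flow(variation: str) -> str:
    if variation == "new_checkout":
        return "render new checkout"
    return "render old checkout"

# After the experiment concludes and "new_checkout" wins, the quarterly
# cleanup pass deletes the conditional and keeps only the winning path:
def checkout_flow_cleaned() -> str:
    return "render new checkout"
```

Each conditional like this is harmless on its own; the debt accumulates when dozens of concluded experiments leave dead branches behind, which is why a scheduled cleanup cadence works better than ad hoc deletion.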