For online retailers, product images are paramount. Imagery must help make up for the fact that the buyer is sitting behind (or holding) a flat screen and cannot factor tactile experience into the purchase decision. The imagery possibilities are also endless: Do you go with models or no models? 360-degree shots or static? Product videos? Which thumbnail is the best default? Experimenting with how products are displayed on your website is thus both a valuable and a potentially lucrative thing to A/B test.
While all of these questions make for excellent test ideas, the hours and dollar signs attached to exploring each type of imagery are nothing to scoff at. So when testing product imagery, it's a good idea to experiment on a small subset of products before going hog-wild with a new display across your entire inventory. Chrome Industries, a San Francisco-based retailer of cycling bags and apparel, took this approach while exploring new ways to display their products. Their test ended in a wash, yet it was actually a lucrative win.
The Chrome team picked one product to experiment with: their Truk shoe. The imagery in question was a product lifestyle video of the shoe versus a static image of a model wearing the shoe. Measuring which image type led more visitors to successfully purchase a pair of Truks would help them determine whether to commit more budget toward video development.
Here are the variations they tested:
They ran the test on the “Shoes” category page instead of the “Truk Shoe” product page itself since the category page has higher traffic.
Using Optimizely, they measured the percentage of visitors who clicked through from the category page to the Truk product page, the percentage of visitors who continued to checkout, and the percentage of visitors who successfully ordered. After letting the test run for just under three months, the results were something of a wash, which was actually good news in this case. Visitors reached the Truk product page 0.5 percent more with the image, continued to checkout 0.3 percent more with the video, and successfully ordered 0.2 percent more with the video.
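To see why differences this small read as a wash, you can run a standard two-proportion z-test on the order counts. The sketch below uses purely hypothetical visitor and conversion numbers (the article doesn't report Chrome's raw counts); it only illustrates that a fraction-of-a-percent lift at realistic traffic volumes typically isn't statistically significant.

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test.

    Returns (z, p_value) for H0: the two variations convert at the same rate.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 10,000 visitors per variation, a 0.2-point lift for video
z, p = two_proportion_z(500, 10_000, 520, 10_000)
print(f"z = {z:.2f}, p = {p:.2f}")
```

With these made-up numbers the p-value lands well above the conventional 0.05 threshold, so the sensible read is "no detectable difference," which is exactly the verdict the Chrome team reached.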
If anything, the video slightly edged out the static image, but because producing video requires a much higher investment from Chrome than shooting stills, the verdict is actually a clear vote against the added production cost. Now knowing that video assets are not necessarily a silver bullet, the product team can investigate further rather than ramping up a full-blown video initiative that isn't guaranteed to return on its investment.
If the team does choose to test video down the line (lifestyle-oriented video against product-oriented video perhaps), they can at least be confident that there’s little risk in running the follow-up test, since they’ve proven that video won’t hurt conversion. They may also decide simply to allocate their energies elsewhere and experiment with optimizing different portions of the site entirely, where there may be bigger unrealized gains awaiting.
Have any of your tests turned out to be washes and therefore wins?