After a decade in Product Development at Shutterstock and Beatport, I’ve run a ton of A/B tests. Some were great. Others weren’t so great. But each one taught me something valuable, and each one made me better at what I do. I’m surrounded by some really smart people, but we’ve still had to stumble our way through some of these hard lessons, because what seems intuitive is not always what wins. With that in mind, here’s something I wish I had started with years ago: a pragmatic list of lessons learned the hard way.
No two products are the same, so the specific tests we’ve run won’t necessarily work for your business. Instead of detailing specific tests, this list focuses on core principles that will help you succeed in testing. Have fun!
1. It’s a hypothesis. Don’t fall in love. Most tests don’t turn out the way you planned them. There’s a roughly 70% chance that you are wrong. Try lots of ideas quickly and cheaply. Consider excluding difficult browsers like IE, as well as customer segments that introduce lots of edge cases. Do what it takes to get a test out fast.
2. Test small changes. Don’t spend months building a test just to throw it away when it doesn’t work. If you have to spend a long time creating it, you’re doing it wrong. Find the smallest amount of development that will let you test your hypothesis. One variable at a time is best.
3. Tell the story of the customer with qualitative research. It’s difficult to narrate the customer psychology surrounding your test without some qualitative research to back up your metrics. It’s really not as hard as it sounds. Tools like Qualvu.com are great for gathering qualitative feedback and turning it into something actionable. The key to great testing is understanding when to use quantitative and when to use qualitative methods. Qualitative analysis is great for generating test ideas or for telling the story of a winning test.
4. UX details matter. It’s the details that lead to big wins. Many of our winning tests were based on a small detail that none of us were able to articulate prior to the test. Pre-selecting a checkbox, using a button instead of a link, slightly altering the visual priority on a page or making a minor copy change — these are the things that drive winning tests and revenue gains.
5. You can’t test strategy. Know what to use A/B testing for and what not to. If you have a new product release that is meant to fit a new customer segment, don’t test your way into it. That path only leads to frustration. Release the product and follow up with A/B tests later to optimize it. If you are concerned with how customers might react to a new feature, consider a “smoke test”: release the feature to a small customer segment for a short period to see whether there are strong negative outcomes that would prevent a full release, and to give yourself a chance to improve it first (a rough rollout sketch follows this list).
6. Don’t neglect site performance. Performance testing should be considered part of your design and optimization. Even a small increase in page load time can foul what would otherwise be a winning design. Page weight and load time should be measured alongside your other tests. I’ve seen winning tests lose on performance issues alone.
7. Leave your visual design principles at the door. Don’t confuse beautiful design with what converts. Ultimately, the two concepts don’t have much to do with one another. Designers with conversion experience know that it’s function over form. Over the years, we’ve designed some beautiful pages that have lost a lot of money. This effect is amplified when you’re testing internationally, because markets and cultures respond differently to the same presentation.
8. Price framing is crucial. Pricing tests are won and lost on framing. How customers see a price point on the page in relation to the other price points often matters more than the price point itself. If you are selling something on the page for $9.99, it can be highly effective to offer a red-herring option at $29.99 that frames the lower-priced item you want the customer to purchase.
9. View many different health metrics. The metric you tried to move probably won’t tell the whole story. Plenty of very smart people have been puzzled by A/B test results. You’ll need to look at lots of different metrics to figure out what really changed, and more often than not your first explanation will be wrong. When a test fails, don’t give up. Instead, learn what happened (figure out, for example, which metric DID move) and use that to inform future iterations.
10. Instead of creating one control cell, create two. This simple testing principle helps you read significance (when the controls flatten out) and also helps identify errors in data collection or methodology. If your identical controls are giving you very different reads, something is wrong with the test (a sanity-check sketch follows below).
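To make the smoke-test idea in point 5 concrete, here’s a minimal sketch of a percentage-based rollout. The `in_smoke_test` helper, the hashing scheme and the 5% exposure are illustrative assumptions, not a description of any particular feature-flag tool:

```python
import hashlib

SMOKE_TEST_PERCENT = 5  # expose the new feature to roughly 5% of customers (illustrative value)

def in_smoke_test(user_id: str, feature: str, percent: int = SMOKE_TEST_PERCENT) -> bool:
    """Deterministically bucket a user into a small rollout segment.

    Hashing the user id together with the feature name keeps the assignment
    stable across visits and independent across features.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Example: only a small slice of users sees the hypothetical new checkout flow.
show_new_checkout = in_smoke_test("user-12345", "new-checkout")
if show_new_checkout:
    pass  # serve the new feature and watch for strong negative signals
```

Because the assignment is deterministic, the same customers stay in the smoke-test segment for the whole exposure window, which makes any strong negative signal easier to attribute.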
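And to illustrate point 10, here is a rough sketch of using two identical control cells as a sanity check before trusting the variant read. The visitor and conversion counts are made up, and the pooled two-proportion z-test is just one reasonable choice; use whatever statistics your testing platform already provides:

```python
from math import sqrt

def conversion_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-statistic for conversion rates, using a pooled standard error."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    return (p_a - p_b) / se

# Illustrative numbers: two identical control cells and one variant.
control_1 = (510, 10_000)   # (conversions, visitors)
control_2 = (495, 10_000)
variant   = (570, 10_000)

# Step 1: the two controls should read as noise (|z| well under ~1.96).
z_controls = conversion_z(*control_1, *control_2)
if abs(z_controls) > 1.96:
    print("Controls disagree -- suspect instrumentation or assignment bugs.")

# Step 2: only then compare the variant against the pooled controls.
pooled = (control_1[0] + control_2[0], control_1[1] + control_2[1])
z_variant = conversion_z(*variant, *pooled)
print(f"control-vs-control z = {z_controls:.2f}, variant-vs-control z = {z_variant:.2f}")
```

If the two controls disagree, fix the instrumentation or assignment logic before drawing any conclusions from the variant.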