A/B Testing Can Make a Huge Difference in 2019
Your car is blue, your favorite shirt is blue, your dog’s name is Blue. But is that the right color for your call-to-action button?
This may seem like a tiny detail...but small things can make a huge difference in how successful your marketing is. That’s why A/B testing is so important.
A/B testing pits different versions of one element (like your website layout, mobile ad, design, email subject line, or copy) against each other to see which gives you the best results.
Basically, your marketing goes mano-a-mano against itself. For example, you could create 2 (or more) versions of the same landing page with different layouts, randomly show the different versions to visitors, and see which one performs better.
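In practice, "randomly show the different versions" usually means hashing each visitor's ID so the split is even across visitors but stable for each one (a returning visitor keeps seeing the same version). A minimal Python sketch, with hypothetical variant names:

```python
import hashlib

# Hypothetical names for your two landing-page versions.
VARIANTS = ["layout_a", "layout_b"]

def assign_variant(visitor_id: str) -> str:
    # Hash the visitor ID so the same visitor always lands in the
    # same group, while IDs overall split roughly 50/50.
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

print(assign_variant("visitor-42"))
```

Real testing tools handle this bucketing for you, but the idea is the same: the assignment is random across your audience, not random on every page load.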
What things can you test? Almost anything that might affect performance. You could switch out one word in your call to action, move a button from the right side to the left, or change the background color of your design.
You could also test whether beard length affects click-through rate, like the clothing company Betabrand did.
They created 5 versions of the ad, changing just the beard style. Then they ran those new versions and the original ad at the same time, to the same audience.
They compared the results and #6 won by a huge, um, beard. It had a 79% higher click-through rate than the other ads’ combined average.
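That "79% higher" figure is just a ratio of click-through rates. Here's the arithmetic with made-up numbers (Betabrand's raw counts aren't public):

```python
# Hypothetical counts -- chosen only to illustrate the calculation.
winner_clicks, winner_impressions = 358, 10_000
others_clicks, others_impressions = 1_000, 50_000  # the other ads combined

winner_ctr = winner_clicks / winner_impressions      # 0.0358
others_avg_ctr = others_clicks / others_impressions  # 0.02
lift = winner_ctr / others_avg_ctr - 1               # 0.79, i.e. "79% higher"
print(f"Lift: {lift:.0%}")
```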
What did Betabrand learn? If their ads feature men, they should be unconventionally beardy to boost click-through rates.
Great, let’s slap a beard on a call-to-action button. But wait...it takes time to create a well-designed test that will generate reliable results. You need to home in on your objective, come up with a hypothesis, create your “variants,” and start testing and calculating the results.
First, we’ll look at homing in on your objective, or the outcome you’re trying to improve – like getting more signups from your email marketing.
Betabrand’s objective, for example, was to improve the click-through rate (aka outcome) of their ad.
The beards helped them reach this objective, and they kept the ad design the same during testing.
Next, you should come up with a hypothesis, or theory behind what will help you reach your objective.
Look at your original ad or layout or button or whatever you’re planning on testing. You probably already have a nagging feeling that something in there could change for the better. Follow that hunch and turn it into a hypothesis.
For example, Betabrand hypothesized that beard length would affect click-through rates. But they could have focused on how the order of the copy would affect click-through rates. Or they could have looked at what color shirt to use.
After your hypothesis comes creating your variants, which is just an awkward word for different versions of one thing.
Your original version would be called the “control variant.” Take the one thing in your control variant that inspired your hypothesis, and come up with different ways to tweak or change it. Then turn those tweaks into new variants/versions to test.
To get the best, unbiased A/B testing results, make sure you change only one thing from your original version.
Why? Let’s say Betabrand tried out different beards, copy lines, and logos in one A/B test. How would they know which of these factors triggered the better click-through rate?
By only changing the facial hair (and sticking to their hypothesis), they could confidently make the right improvement in future ads.
If they really wanted to test two or more things, they would run two or more separate tests, making sure each test changes only one element.
Finally, you’ll test and calculate your results to find out which variant best helps you reach your objective.
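"Calculating your results" usually means checking that the winner's edge is statistically significant rather than random noise. A minimal sketch using a standard two-proportion z-test, with hypothetical click counts:

```python
from math import sqrt, erf

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for a difference in click-through rates.

    Returns (z, p_value). A small p-value (conventionally < 0.05)
    suggests the gap between the variants isn't just noise.
    """
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)  # pooled rate under "no difference"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal-distribution tail probability via the error function.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: control vs. challenger, 10,000 impressions each.
z, p = two_proportion_z(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Most A/B testing platforms run a check like this for you; the takeaway is that you need enough traffic, not just a higher number, before declaring a winner.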