...

Why and When A/B Testing Doesn’t Make Sense

Posted in Blog on October 24, 2016 by Marc

It seems like everyone wants A/B testing these days. Unfortunately, it doesn’t always make sense. It might even be a waste of your time (and visitors). Don’t get me wrong, we love A/B testing. We do it all the time. But only when it makes sense for our clients.

Conversion Optimisation ≠ A/B Testing

We see a lot of people mixing up the terms ‘conversion optimisation’ and ‘A/B testing’. It’s a common and understandable mistake. But conversion optimisation is so much more than just A/B testing. Testing is an end point of optimisation, never the start.

To make it clear, we like to use this equation:

Conversion optimisation = research + implementation (+ testing)

Yes, the testing part of this equation is in parentheses. Testing is not always a necessary part of conversion optimisation.

You Need Enough Traffic and Conversions

I often talk to people who have heard that they don’t have enough traffic for conversion optimisation. Usually what they mean is that they don’t have enough traffic for A/B testing. But that doesn’t mean they can’t do conversion optimisation. Even if you only have a handful of visitors, you can optimise your site. You just stick to the research + implementation part of the CRO equation.

How Much Traffic Do You Need to Test?

I find it easier to think in terms of the number of conversions you have, rather than your traffic numbers.

As a rule of thumb, you need at least 1000 conversions per month to consider testing. The reason is simple: with fewer than 1000, your tests will need to run far too long before they reach statistical significance.

To draw valid conclusions from your tests, you need at least 200-250 conversions for each variation (i.e. 200-250 conversions for your control, or A version, and 200-250 conversions for your variation 1, or B version). If you only have 200 conversions per month, they get split across both versions, so it would take at least 2 months to conclude the test. That’s a waste of time and visitors.
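If you want to see where a rule of thumb like this comes from, you can run the standard sample-size calculation for a two-proportion test yourself. Here’s a minimal Python sketch; the 2% baseline conversion rate and 30% relative lift are made-up inputs, not numbers from this post:

    from math import sqrt, ceil
    from statistics import NormalDist

    def visitors_per_variation(base_rate, rel_lift, alpha=0.05, power=0.80):
        """Visitors needed in each variation for a two-sided two-proportion z-test."""
        p1 = base_rate
        p2 = base_rate * (1 + rel_lift)
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
        z_beta = NormalDist().inv_cdf(power)
        p_bar = (p1 + p2) / 2
        n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
              + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
        return ceil(n)

    # Made-up example: 2% baseline conversion rate, hoping to detect a 30% lift.
    n = visitors_per_variation(0.02, 0.30)
    print(n, "visitors per variation")                  # roughly 9,800
    print(ceil(n * 0.02), "conversions per variation")  # roughly 200

Notice that with these (hypothetical) inputs the answer lands right around 200 conversions per variation. Chase a smaller lift and the required numbers climb very quickly.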

You Need Enough Conversions To Analyse The Segments

If you want to analyse the test per segment, you need 200 to 250 conversions for both the A and B versions within each segment. Analysing your segments is often interesting and will teach you a lot. Often, you won’t see an overall significant difference between your A and your B versions. But when you look at the segments, you might find a winner. You can then easily apply the winner for that specific segment and increase your revenue.

 

Let’s look at an example. This was an A/B/C/D test for a Belgian site that also had a lot of visitors from the Netherlands.

 

[Figure: results of the A/B/C/D test, segmented by visitor country]

 

You can see that one version outperformed all the others for visitors from the Netherlands. Based on the IP address, we can easily show that particular version to visitors from the Netherlands. This way, our client maximises his revenue. This is an opportunity we would’ve missed if we hadn’t analysed the geographical segments. And to be able to do so, you just need enough conversions.
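If you want to check whether a segment-level difference is real rather than eyeballed, a per-segment significance test is straightforward. Below is a minimal Python sketch for a simple A/B split with made-up per-country numbers (the actual test above was A/B/C/D):

    from math import sqrt
    from statistics import NormalDist

    def p_value(conv_a, n_a, conv_b, n_b):
        """Two-sided two-proportion z-test on conversions vs. visitors."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    # Hypothetical per-segment results: (conversions, visitors) for A and for B.
    segments = {
        "BE": ((310, 15000), (325, 15000)),  # no clear difference
        "NL": ((140, 8000), (196, 8000)),    # B looks like a winner here
    }
    for country, ((ca, na), (cb, nb)) in segments.items():
        print(country, round(p_value(ca, na, cb, nb), 4))

One caveat: the more segments you slice, the more likely one of them will look significant by pure chance. Treat a segment winner as a hypothesis to confirm, not a guaranteed lift.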

How (in some cases) You Can Test With Fewer Conversions

What if you don’t have 1000+ conversions per month? You can make it work with fewer (e.g. 500-1000 per month), but then you need to optimise your site for page goals rather than for site goals.

Say you want to test the product pages on your e-commerce site. The goal of those pages is to make people add the product to their cart. There’ll be a lot more visitors adding an item to the cart than visitors finishing the entire checkout. Typically, you’ll lose about 50% of them in the next steps (cart and checkout). So the conversion you measure here is the ‘add to cart’, rather than the final conversion or transaction. You simply have a lot more conversions to analyse.
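The arithmetic behind this is simple. A rough sketch with made-up numbers, assuming the ~50% cart-to-checkout drop-off mentioned above:

    TARGET_PER_VARIATION = 250  # conversions needed per variation (rule of thumb)
    VARIATIONS = 2              # A and B

    def months_to_conclude(conversions_per_month):
        """How long until each variation has collected enough conversions."""
        return TARGET_PER_VARIATION * VARIATIONS / conversions_per_month

    # Hypothetical funnel: 400 transactions/month; with ~50% lost between
    # cart and checkout, that means roughly 800 'adds to cart' per month.
    transactions = 400
    adds_to_cart = transactions * 2

    print(months_to_conclude(transactions))   # 1.25 months on the site goal
    print(months_to_conclude(adds_to_cart))   # ~0.6 months on the page goal

Same page, same test; measuring one step earlier in the funnel roughly halves the time you need to run it.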

The only downside of testing for page goals is that there might not be a causal relationship between your page goals and your revenue. Meaning that yes, version B might generate more ‘adds to cart’, but no, it won’t always generate extra revenue.

Barely Enough Traffic To Test? Go For Bolder Changes

If you barely have enough traffic and conversions to test, it doesn’t make sense to test every tiny change on your page. It will simply take too long before you’ll have tested them all.

There’s only one solution: test bigger, bolder changes. They’ll move the needle more quickly.

What You Can Do If You Don’t Have Enough Traffic To Test

The beauty of the equation

Conversion optimisation = research + implementation (+ testing)

is that it also shows you what you CAN do if you don’t have enough traffic to test.

Optimise your site by doing conversion research and just implement the stuff that you discover in the research. If you had the traffic to test, you’d probably find (thanks to testing) things that don’t improve, or even hurt, your conversions. But you don’t have the traffic, so you just have to have the balls to implement all the findings from your research. Implementing all that stuff at the same time is going to move the needle, that’s for sure.

 

Don’t Test If You Haven’t Done The Research

Let’s look at that equation again:

Conversion optimisation = research + implementation (+ testing)

It’s not a coincidence that the words are in this order. If you want to do conversion optimisation, you need to do the research first. If you don’t have enough traffic to test, you definitely need to do conversion research. How else would you know what to implement? By guessing? That’s not going to work. You can’t just guess, you need to KNOW.

About two years ago, we started working for a client who was already changing his product pages to try and increase his conversion rate. When we did the conversion research, we quickly found out they were losing more than $2,750,000 per year on one page in the checkout. And they weren’t even aware of it.

Even if you have enough traffic and conversions to test, you need to start with the research. Research has found that only one out of every seven tests will generate a statistically significant result. That’s not much. When you do the research first, you can expect one out of every three tests to be statistically significant. That means that you’ll quickly make up for the time you invest and will grow your revenue faster if you test based on research.

Don’t Test Everything

One last time (I promise), the equation:

Conversion optimisation = research + implementation (+ testing)

After research comes implementation. In the research, you’ll discover things that just don’t need to be tested – even if you have the traffic. Some things are just no-brainers that can be implemented right away.

To help you decide what is a no-brainer and what should be tested, you can use this hierarchy of conversions chart.

[Figure: the hierarchy of conversions pyramid]

When you do the research, there might be several possible areas of improvement:

  • Functional – e.g. you have technical problems on a certain browser
  • Accessible – e.g. your checkout doesn’t display properly on mobile devices
  • Usable – e.g. your site is too slow
  • Intuitive – e.g. the checkout flow is not logical
  • Persuasive – e.g. you should improve your copy to persuade your visitor, add scarcity, etc.

The rule of thumb here is that the higher up the pyramid, the more sense it makes to test. The lower on the pyramid, the less sense it makes to test. If your site doesn’t work on a certain browser, for instance, you don’t have to test that. You just fix it.

By the way, you need to fix the bottom layers first before you can even start to think about playing around with persuasive elements. Creating scarcity can definitely increase your conversions, but it’s not going to help you that much if your checkout doesn’t work on Firefox.

Don’t Test Button Colours

Ugh, button colours. It’s such a cliché in conversion optimisation that I’m dedicating a separate paragraph to it. There seems to be a myth that there’s one button colour that’s best for every site. That myth comes from meaningless case studies: some site tested its button colour, found that green was the winner, and suddenly everyone is implementing green buttons.

This is nonsense. The best button colour is different for everyone. It’s mainly determined by the dominant colour on your site. Is red your main colour? Then a green button might work better because it will stand out more. Is green your main colour? Then red might work better. Again, because it will stand out more. It’s as simple as that.

We don’t test button colours. Ever. It’s a waste of time and traffic. Pick a colour that contrasts with the rest of the site so your buttons will stand out. Done. No A/B test needed.
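If you want an objective sanity check on “does this colour stand out enough”, one option (not something the post relies on, but a common accessibility yardstick) is the WCAG 2.x contrast ratio. A minimal Python sketch, with a made-up palette:

    def _linear(channel):
        """sRGB channel (0-255) to linear light, per the WCAG 2.x definition."""
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def luminance(rgb):
        r, g, b = (_linear(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(rgb1, rgb2):
        """WCAG contrast ratio, from 1:1 (identical) up to 21:1 (black on white)."""
        lighter, darker = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
        return (lighter + 0.05) / (darker + 0.05)

    # Made-up palette: white page background vs. a sea-green button.
    print(round(contrast_ratio((255, 255, 255), (46, 139, 87)), 2))  # ~4.25

Anything hovering near 1:1 will blend into the page; the higher the ratio against the surrounding colours, the more the button stands out.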

 


 

To Know Whether Or Not You Should Be Testing, Ask Yourself This:

  • Do we have enough traffic and conversions to test?
  • Have we done the research first?
  • Is it a no-brainer (low in the hierarchy of conversions) that we can just implement rather than test?
  • Is it a button colour? 😉
