Conversion optimisation and A/B testing go hand in hand. That means you are regularly, or even daily, setting up experiments. And if you are setting up many different tests at the same time, you will soon lose sight of the bigger picture. But fortunately, there is a solution…
The forest and the trees
When you run one test every month, you have a good overview of its progress from start to finish. When you run three tests every month, it starts getting complicated, but it is still manageable. If you run more than three tests – or even worse: a number of tests across multiple sites – you quickly can’t see the forest for the trees. The following steps always have to be taken:
- Briefing: the developer must be briefed on the B version.
- Development: the developer builds the B version.
- QA: when the B version is ready, Quality Assurance must be done to ensure that it works on any browser and operating system.
- Live: all settings are checked and the test can go live.
I hear you thinking, ‘Four steps, that’s not too bad, right?’
Well, actually, that’s a train wreck.
If your briefing is so accurate that the developer runs no risk of misinterpreting anything, then there is sure to be a functionality or scenario that you did not think about. During the development phase, there will be regular discussion between you and the developer about such (often small) things.
When you finally end up in the QA phase, it only gets worse: ‘When I refresh the page, the text changes back’, ‘Now that I see it, I think the header needs to be smaller’, ‘The button doesn’t work anymore’. A test can easily stay in QA for several days, with one issue following the other.
Now imagine that you have six accounts, each with four tests in different stages. Good luck with your overview.
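To make the overview problem concrete, here is a minimal sketch (not Effective Experiments itself, and with invented account and test names) of the bookkeeping you end up doing by hand once several accounts each have tests in different stages:

```python
from collections import defaultdict

# The four stages from the article: Briefing -> Development -> QA -> Live
STAGES = ["Briefing", "Development", "QA", "Live"]

# Hypothetical example data: (account, test name, current stage)
tests = [
    ("Account A", "New CTA copy", "QA"),
    ("Account A", "Shorter checkout", "Development"),
    ("Account B", "Sticky header", "Briefing"),
    ("Account B", "Trust badges", "Live"),
]

def status_board(tests):
    """Group tests per stage so the whole pipeline is visible at once."""
    board = defaultdict(list)
    for account, name, stage in tests:
        board[stage].append(f"{account}: {name}")
    return {stage: board[stage] for stage in STAGES}

for stage, items in status_board(tests).items():
    print(f"{stage}: {', '.join(items) or '-'}")
```

With four tests this is trivial; with six accounts times four tests, maintaining such a board by hand in spreadsheets and email is exactly the chaos described above.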
The solution: collaboration tools for A/B testing
Forget listing ideas in a Word document or spreadsheet, emailing back and forth about briefings and issues, or juggling Slack and Trello to communicate and set priorities. That may work for a while, but it soon descends into unimaginable chaos.
Fortunately, some tools have recently entered the market to solve this problem. They have been developed for companies and agencies that work on A/B testing daily and want to keep a broad overview of all their tests. The most well-known tools are Effective Experiments, Iridion and Liftmap. We tried out the different tools, but soon came to favour Effective Experiments. In our opinion, it’s the most complete tool on the market.
There are lots of possibilities in EE, but I will mainly focus on the functionalities that help you organise your tests.
How does it work?
First of all, you can add different accounts, allowing you to switch between one account and another with two simple clicks. In this example, I masked the names of our customers and gave them the creative names Account A through G 😉
By creating accounts, you can focus on the tests of that particular account. In the left-hand menu, you will see three categories that you will use most:
- Ideas & Hypothesis
- Experiments
- Reports
1. Ideas & Hypothesis
On this page, you can drop all your testing ideas (from your research). You only need to click on ‘+ Add New Idea’ and you can then add the idea and indicate how you came up with this idea (user testing, heuristic analysis, etc.).
When you drag an idea to the ‘Hypothesis’ column, you can work it out further. As the column name indicates, you can formulate your hypothesis here, but also give the idea a PIE score. With a PIE score, you set priorities: the test with the highest score can not only be set up quickly, it also has the best chance of winning. With your PIE score, you estimate the following:
- How much Potential does the idea have for winning?
- How big will the Impact be on revenue?
- And how Easy is it to set up the test?
By doing this with all your testing ideas, you can sort the column and run the tests with the highest PIE score first.
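In practice, PIE prioritisation boils down to a simple calculation. Here is a hedged sketch: score each idea on Potential, Impact and Ease (a 1–10 scale is assumed here; the ideas and scores are made up for illustration), average them, and run the highest-scoring test first:

```python
# Hypothetical testing ideas with PIE inputs (scale 1-10 assumed)
ideas = [
    {"idea": "Bigger add-to-cart button", "potential": 8, "impact": 6, "ease": 9},
    {"idea": "Redesign checkout flow",    "potential": 9, "impact": 9, "ease": 3},
    {"idea": "Shorter product titles",    "potential": 4, "impact": 3, "ease": 10},
]

def pie_score(idea):
    """PIE score = average of Potential, Impact and Ease."""
    return (idea["potential"] + idea["impact"] + idea["ease"]) / 3

# Sort descending: the top of the list is the test to set up first.
for idea in sorted(ideas, key=pie_score, reverse=True):
    print(f"{pie_score(idea):.1f}  {idea['idea']}")
```

Note how the checkout redesign scores high on Potential and Impact but is dragged down by Ease – which is exactly what the framework is for.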
When you decide to test a hypothesis, drag it to the ‘Testing’ column.
Here you can then add additional details:
- How many versions your test contains and what you change in each of them
- Which devices it runs on
- Which page it runs on
2. Experiments
Once you’ve filled everything in, you can find the experiment under ‘Experiments’ in the left-hand menu. Your test will appear by default in the ‘Planned’ column.
Didn’t I mention that? EE is excellent as a collaboration tool! You can add multiple users with different permissions (from admin to view-only), allowing you to collaborate with colleagues and, more importantly, with your developer. In addition, you can also add customers so that they have insight into the tests and their progress.
In the project settings, you simply add notifications: when a test appears in the ‘Planned’ column, the developer receives an email and knows he can start building. He drags the test to the next column (In Development) when he gets started on it, so that you are informed of the status.
As soon as the developer has built the test, he drags the test to QA: now you get a notification. You can set this separately for each stage, and you can add (or delete) extra stages. This gives you a clear overview of the tests and their corresponding stage.
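Conceptually, this stage-based notification setup is straightforward. The sketch below is an illustrative model, not EE’s real API: every stage can have an owner (the addresses and stage-to-owner mapping are invented), and moving a test to a new column notifies that stage’s owner, if any:

```python
# Hypothetical per-stage owners; in EE this is configured per project.
STAGE_OWNERS = {
    "Planned": "developer@example.com",    # developer knows to start building
    "In Development": None,                # no notification needed
    "QA": "optimiser@example.com",         # you check the built variation
    "Live": "customer@example.com",        # customer sees the test go live
}

notifications = []  # stand-in for outgoing emails

def move_test(test_name, new_stage):
    """Move a test to a new stage and notify that stage's owner, if any."""
    owner = STAGE_OWNERS.get(new_stage)
    if owner:
        notifications.append((owner, f"'{test_name}' is now in {new_stage}"))
    return new_stage

move_test("New CTA copy", "QA")
print(notifications)
```

The point of the design is that status updates travel with the card itself: nobody has to remember to send an email, because dragging the test is the email.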
And if issues come up?
EE has also thought about that. When you click on a test in one of the columns, a screen opens with details about the experiment and where you can leave comments. Have you encountered an issue? Leave it in the comments and tag your developer (@Colleague), and he will be informed immediately!
This allows you to see in one program:
- Which testing ideas and hypotheses you have
- Which test has priority (PIE score)
- Which stage each test is in
- Whether and what kind of issues you have encountered and what their status is
3. Oh, one more thing: Reports
Did you see the column ‘Complete/Pending Report’ in the first illustration for 2. Experiments? When the test is complete, drag it to this column. A screen opens in which you can note the test results, what the test taught you, whether the test has a winner and what your recommendation is.
Once you have completed this, you save it. A report is automatically generated and can be found under ‘Reports.’ That saves a ton of time! You can still edit the report by adding, for example, a logo, additional information and/or images, and, if desired, by translating it from English into Dutch.
After the tests have been developed, the data collected and the reports drawn up, the winners are implemented, and the analyses often vanish into a dusty folder somewhere on your hard disk or in the cloud. Until a time comes when you want to check on how useful your testing is. The search through the dust begins, and when you finally find them, the next question is: which test was the winner again? The alert reader already knows that you can find the analyses in the ‘Reports’ tab, but how can you quickly find the one report you’re looking for? And how can you quickly get insights without having to go through all of them?
Effective Experiments came up with something for that: the Query Engine. With this functionality, you can retrieve your reports based on certain criteria. You want to see all your reports on winning tests on the product page? You can do that in just a few clicks! Or do you want to see whether your user testing produces good test ideas? Check how many tests that were based on your user testing were successful. That can lead to very valuable insights. In just 5 seconds!
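Underneath, such a query amounts to filtering your completed reports on a set of criteria. A minimal sketch, with invented report data (the field names `page`, `source` and `winner` are assumptions for illustration, not EE’s actual schema):

```python
# Hypothetical completed test reports
reports = [
    {"test": "Bigger button", "page": "product",  "source": "user testing", "winner": True},
    {"test": "Trust badges",  "page": "checkout", "source": "heuristics",   "winner": False},
    {"test": "New photos",    "page": "product",  "source": "user testing", "winner": False},
]

def query(reports, **criteria):
    """Return the reports matching every given criterion."""
    return [r for r in reports
            if all(r.get(k) == v for k, v in criteria.items())]

# All winning tests on the product page:
print(query(reports, page="product", winner=True))

# Did user testing produce good test ideas?
from_user_testing = query(reports, source="user testing")
wins = query(reports, source="user testing", winner=True)
print(f"{len(wins)} of {len(from_user_testing)} user-testing ideas won")
```

The value is not the filter itself but that your reports are structured data at all, instead of documents in a dusty folder.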
An easier life (well …)
As you can see, a lot is possible in EE, and we can no longer do without this tool. It’s an all-in-one program for your experiments that instantly gives you a good overview of all your tests. In addition, it is also useful as a reference or for evaluations.
Really, if you’re seriously working on A/B testing, I recommend you implement EE (or a similar tool like Iridion or Liftmap), because in the end it delivers what we all want: it makes your life a good deal easier (well, your work life at least) ;-)