A/B Testing Plan Template

Template for planning effective A/B tests to improve conversions

A/B testing (also called split testing) is an important part of conversion optimization. While it can be tempting to trust your intuition when creating landing pages, email copy, or call-to-action banners, making decisions based purely on “feelings” means you might be losing conversions that you could otherwise capture.

By running A/B tests, you can test your hypotheses and use real data to guide your actions. This template helps you plan an experiment in a more structured way. It ensures the experiment is well thought out, and it also helps you communicate it more effectively with the designers, developers and others who will be involved in implementing the test.

How to use this A/B testing plan template

1. Hypothesis

The key part of an A/B test is formulating your hypothesis, as it guides the whole A/B test plan.

  • What problem are we trying to solve?
  • What is its impact? (e.g. how big is this problem for our customers?)

Define the problem

In formulating the hypothesis, first you need to define the problem you want to solve. For example, you run a SaaS product that offers a free trial and you want to improve the traffic-to-lead conversion ratio (i.e. get more website visitors to actually sign up for the free trial). But that problem might be too broad for an A/B test, because an effective A/B test should change only one variable at a time (otherwise you won’t know which variable is causing the change).

So to narrow down the problem you want to solve, you need to find the bottleneck in the conversion funnel – where do people drop off the most? Is there key information, or are there call-to-action buttons, that you expect people to read or click but they don’t? Heatmap and session recording tools like Hotjar and Fullstory can help you identify these bottlenecks more easily.

Formulating the hypothesis

After narrowing down the problem you want to solve, you then need to form a hypothesis about what causes those bottlenecks and what you can do to improve them.

For example, you notice that most visitors view your “Features” page, but very few of them scroll past even half of it, so many features you consider important are never actually seen. One hypothesis for improving this might be to use a tab or toggle-list design to make the page shorter, letting visitors expand the content they are interested in and dig deeper.

Remember, when formulating your hypothesis, change only one variable so that you know it is really that variable causing the change in conversion.

Result and success metrics

Now that you have your hypothesis, the next step is to plan how you are going to measure your results. Defining your success metrics carefully beforehand is important; if not enough tracking is in place during the experiment, it can be hard to draw conclusions and decide next steps once it ends.

2. Experiment setup

To communicate clearly with the implementation team, detail the experiment setup you will use to test your hypothesis. This includes:

  • Location – where will this experiment be set up? Provide the URL of the page you are going to run the test on, with annotated screenshots of the parts you would like to change
  • Audiences – will all visitors or users see the experiment, or will you only allocate x% of traffic to it? Lay out the details along with the rationale behind them (see the assignment sketch after this list)
  • Tracking – given the success metrics you have defined, what tracking needs to be set up? Explain how you will use these metrics to analyse the results so the implementation team sets up the tracking the right way.
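
If you allocate only part of your traffic, the assignment of visitors to variations should be deterministic so the same visitor always sees the same version. Below is a minimal sketch of such an assignment in Python, assuming each visitor can be identified by a stable user or cookie ID; the function name, the experiment key and the 50/50 split are illustrative, not part of the template.

    import hashlib

    def assign_variant(user_id: str, experiment: str, traffic_share: float = 0.5) -> str:
        """Deterministically assign a visitor to variation A or B.

        Hashing the user ID together with the experiment name keeps the
        assignment stable across visits and independent between experiments.
        `traffic_share` is the fraction of traffic that sees variation B.
        """
        digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 10_000 / 10_000  # uniform value in [0, 1)
        return "B" if bucket < traffic_share else "A"

    # Example: send 50% of visitors to the shorter, tabbed "Features" page
    print(assign_variant("visitor-1234", "features-page-tabs"))

Whatever mechanism your testing tool uses, documenting the split and the rationale here keeps everyone aligned on who will see which variation.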

3. Variations design

In this section, describe what variations you would like to test.

Lay out the related design work and add the diagrams, mockups and designs for the confirmed variation you’d like to test. Gathering all of these in one place helps your development team understand the context much better.

4. Results and learnings

So at the end of the planned experiment period you have all the stats, but does a higher conversion rate for one variation really mean that variation is better? You need to run a test of statistical significance to see whether your results are actually statistically significant. You can use this A/B Testing Calculator by Neil Patel to check the results easily by inputting the sample size and conversion numbers of the variations.
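
If you prefer to check significance in code rather than an online calculator, the sketch below runs a standard two-proportion z-test on the visitor and conversion counts of two variations; the numbers are made up for illustration, and the 5% threshold is simply the conventional cut-off.

    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Return the z statistic and two-sided p-value for two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))   # standard error
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))             # two-sided test
        return z, p_value

    # Illustrative numbers: 5,000 visitors per variation
    z, p = two_proportion_z_test(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
    print(f"z = {z:.2f}, p = {p:.4f}")  # significant at the 5% level if p < 0.05

A small p-value (commonly below 0.05) suggests the difference in conversion rate is unlikely to be due to chance alone.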

If one variation is statistically better than the other, you have a winner and can complete the test by disabling the losing variation.

But if neither variation is statistically better or the original version is still better, then you might have to run another test.

Document any learnings from this experiment so they can help you plan future ones better.

5. Next actions to take from this experiment

From the results and learnings section, list out the action items you need to take after the experiment. Do you need to disable the losing variation? Are there more elements on that page you want to test to further improve the conversion rate?

A/B testing is a continuous process. We hope this template helps guide you in executing better split tests.
