Best Practices for Running A/B and Multivariate Testing

Getting traffic to your B2B tech website is only one step in successful digital marketing. Your webpages are valuable tools that can be used to generate leads and sales, which is why it’s important to optimise them for conversion rate as well as search. A/B tests and multivariate tests are an essential part of this process.

What is conversion rate optimisation (CRO)?

Conversion rate optimisation, or CRO, is the process of strategically updating your website to improve conversions. 

For many B2B tech companies, the ideal conversion is turning a website visitor into a paying customer. This is known as a macro-conversion, and is typically the end goal. This isn’t the only conversion you can track and optimise for, though. There are a number of other micro-conversions, such as downloading a free whitepaper, that naturally occur throughout the customer journey. 

Your B2B CRO strategy should involve testing and tracking both micro- and macro-conversions.

How to test your CRO strategy

A/B tests and multivariate tests are two types of split tests. These tests allow you to trial the performance of a website change before officially launching it. 

What is the difference between A/B tests and multivariate tests?

Split tests involve creating two or more very similar webpages, releasing them to portions of your target audience, and monitoring performance. Correctly executing these tests as part of the CRO process can improve your conversion rate.
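In practice, a split testing tool will handle the traffic split for you, but the underlying idea is simple: each visitor is assigned to one version of the page and keeps seeing that version. As a rough illustration only (the visitor ID format, variant names and 50/50 split are made-up assumptions), a deterministic assignment might look like this:

  # Minimal sketch of splitting traffic between page versions by hashing a
  # visitor ID, so a returning visitor always sees the same version.
  # The visitor ID format and the even split are illustrative assumptions.
  import hashlib

  def assign_variant(visitor_id: str, variants=("A", "B")) -> str:
      digest = hashlib.md5(visitor_id.encode()).hexdigest()
      return variants[int(digest, 16) % len(variants)]

  print(assign_variant("visitor-12345"))  # same visitor, same version every time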

A/B testing

A/B tests involve making one change to a webpage and tracking the results. For example, you may decide to:

  • Move the “contact us” button from the bottom of your homepage to the top
  • Change the wording of a call to action (CTA)
  • Rewrite your H1 copy
  • Add or remove a pop-up

To conduct this test, you’ll need to compare your original page (called the “A” page, or the control) against the new version (the “B” page, or the challenger).

It’s important to remember that an A/B test must only involve a single change. If you need to make more than one change to the page, you should run a multivariate test instead.

Multivariate testing

Multivariate tests involve changing more than one variable. Where an A/B test might only move a button, a multivariate test might move its placement, change its text and add an image all at once.

While an A/B test compares “A” and “B” versions of a page, a multivariate test can compare A, B, C and D versions all at the same time. This means there will be two or more challenger pages for every control.

When to use A/B tests and multivariate tests

You must run each test until you’ve gathered enough data to reach statistical significance. It is at this point that you can be confident that the results are directly related to your changes, and aren’t a matter of chance. 

When you test a higher number of variables, you’ll need a larger number of website visitors to get the right sample size for statistical significance. If you have a lot of website traffic and are experienced in running CRO tests, then a multivariate test may help you get results faster.

If your site does not have a high amount of traffic, or you’re new to conversion rate optimisation, then an A/B test is the place to start.
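To get a feel for the traffic involved, the sketch below estimates how many visitors each version needs before a difference in conversion rate can reliably be detected. The baseline rate, expected lift, confidence level and power are illustrative assumptions, and a dedicated CRO tool or calculator will normally work this out for you.

  # Rough per-variant sample size estimate for comparing two conversion rates.
  # All figures (3% baseline, 4% target, 95% confidence, 80% power) are
  # illustrative assumptions, not recommendations.
  from math import ceil, sqrt
  from statistics import NormalDist

  def sample_size_per_variant(p1, p2, confidence=0.95, power=0.8):
      z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided test
      z_beta = NormalDist().inv_cdf(power)
      p_bar = (p1 + p2) / 2
      numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                   + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
      return ceil(numerator / (p1 - p2) ** 2)

  # Detecting a lift from a 3% to a 4% conversion rate needs roughly
  # 5,300 visitors per version - and every extra challenger adds the same again.
  print(sample_size_per_variant(0.03, 0.04))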

Steps for running a successful A/B test or multivariate test

Running effective A/B tests and multivariate tests requires careful planning and the right conversion rate optimisation testing software. If licensing the necessary CRO tools and handling the preliminary steps isn’t feasible in-house, you can always work with a conversion rate optimisation agency.

1. Define your conversion goal

Before beginning any conversion optimisation test, it’s important to decide on a desired action and outcome. Would you like to increase the number of form submissions, whitepaper downloads, or clicks on a contact button? Would you like to increase this action by 10 per cent, 50 per cent, or more?

2. Create a hypothesis

As with scientific testing, you need to create a hypothesis for what you think will happen during the split test. For instance, you might hypothesise that you will increase whitepaper downloads by 10 per cent after moving the CTA button from the bottom of the page to the middle.

This hypothesis will guide the rest of your actions when setting up the test.

3. Create control and challenger testing pages

In order to test your hypothesis, you will need to create multiple versions of the same page. Using our whitepaper example from above, it might look something like this:

A/B test:

  • Version A (control): the CTA button remains in its current place at the bottom of the page
  • Version B (challenger): the button gets moved to the middle of the page

Multivariate test:

  • Version A (control): the CTA button remains in its current place at the bottom of the page
  • Version B (challenger): the button gets moved to the middle of the page
  • Version C (challenger): the button remains at the bottom, but increases in size
  • Version D (challenger): the button gets moved to the middle, and increases in size
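Put another way, the challenger versions in a multivariate test are simply the combinations of each variable’s options. A minimal sketch, using the button placement and size example above:

  # Two variables with two options each produce the four versions listed above:
  # one control (the unchanged combination) plus three challengers.
  from itertools import product

  placements = ["bottom of page", "middle of page"]
  sizes = ["current size", "larger size"]

  for placement, size in product(placements, sizes):
      print(f"CTA button: {placement}, {size}")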

4. Set up conversion rate tracking

Tracking the results of your split test is very important for effective conversion rate optimisation. Before your pages can go live, you’ll need to make sure you have the right tracking codes and tools in place. Metrics to track include:

  • Total number of page visits
  • Total number of conversions 
  • Website conversion rate

You may also choose to track movement through conversion funnels and scrolling or clicking actions. This data can provide you with insights that may be useful for future A/B tests.
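As a simple illustration of how those metrics fit together, the conversion rate for each version is just conversions divided by visits. The counts below are invented for the example:

  # Turning raw visit and conversion counts into a conversion rate per version.
  # The counts are invented for illustration.
  results = {
      "A (control)": {"visits": 4800, "conversions": 132},
      "B (challenger)": {"visits": 4750, "conversions": 158},
  }

  for version, data in results.items():
      rate = data["conversions"] / data["visits"]
      print(f"{version}: {rate:.2%} conversion rate")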

To accurately test your web pages without running into search engine optimisation (SEO) issues, it’s a good idea to utilise a split testing tool like Google Optimize. These tools will help you release your page to segments of your audience and monitor the results.

5. Calculate the statistical significance threshold

Before launching your test, you’ll need to identify the statistical significance threshold. 

Ideally, you want to have at least 90 per cent confidence that your test results are not due to random chance. If you have a large audience to pull from and can increase this threshold above 95 per cent, that’s even better. 

Statistical significance calculators, such as the free one from SurveyMonkey, can show you how many total visitors and conversions you need from each version of your webpage in order to reach significance. Some split testing tools will automatically calculate these values for you and stop the test when you reach the threshold.
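If you do want to sanity-check the maths yourself, a two-proportion z-test is one common approach (not the only one). The sketch below reuses the invented counts from the tracking example above; in practice, your split testing tool will usually report this for you.

  # Rough manual significance check using a two-proportion z-test.
  # The visit and conversion counts are invented for illustration.
  from math import sqrt
  from statistics import NormalDist

  def p_value(conv_a, visits_a, conv_b, visits_b):
      p_a, p_b = conv_a / visits_a, conv_b / visits_b
      p_pool = (conv_a + conv_b) / (visits_a + visits_b)
      se = sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
      z = (p_b - p_a) / se
      return 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

  p = p_value(conv_a=132, visits_a=4800, conv_b=158, visits_b=4750)
  print(f"p-value: {p:.3f} (significant at 95% confidence only if below 0.05)")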

6. Launch your test and monitor the results

Finally, you can launch your test and monitor the results. If you’re manually calculating statistical significance, keep an eye on the numbers and check your percentages regularly.

Once you hit the point of statistical significance, stop the test and consider your results. Was your hypothesis correct, or were you surprised by the outcome? Do you need to run another test?

Other ways to utilise multivariate and A/B tests

Changing elements on a page isn’t the only way to utilise an A/B test or a multivariate test for conversion rate optimisation. You can also use these testing methods for things like:

  • Trialling the impact of changes to a sales funnel
  • Comparing the user experience between two landing page layouts
  • Improving website navigation and usability
  • Optimising pay-per-click (PPC) ad campaigns
  • Evaluating the effectiveness of adding social proof and testimonials to a product page

Similar split testing principles may be applied to paid advertising and email marketing, as well.

Possible problems with CRO tests

When not executed properly, split tests can have a negative impact on your online marketing strategy. Possible problems with conversion rate optimisation testing include:

  • Not utilising a large enough sample size
  • Testing too many variables at once
  • Using the wrong type of split test
  • Incorrectly segmenting the target audience
  • Not utilising CRO tools
  • Miscalculating statistical significance
  • Employing a statistical threshold that is too low
  • Not clearly defining your testing hypothesis

Best practices for A/B and multivariate tests

In order to mitigate some of the problems listed above, it’s important to:

  • Be clear about your goal and hypothesis.
  • Use the right tools. Don’t simply post multiple near-duplicate pages on your website and try to track them in Google Analytics. This can lead to duplicate content issues and skewed data, among other problems.
  • Run the test until you hit statistical significance. Without doing so, you may base future decisions upon results that were due to chance, not your changes.

Setting up your first CRO testing campaign

Working with a conversion rate optimisation agency can help you reach a higher conversion rate much faster than running the CRO process yourself. When choosing an agency, it’s important to look for specialists who understand the differences between B2B and B2C conversion rate optimisation.

At Clarity Performance, our team of London-based CRO specialists is experienced in optimising campaigns for the B2B tech industry. We understand the nuances of your industry and clients, and can leverage our years of experience to deliver strong conversion results for your brand. If you’re looking for a specialist conversion rate optimisation agency for your B2B tech company, check out our services.
