Category Archives: Multivariate Testing

How To Get Started With Google Optimize

Google’s Free A/B Testing Solution:

Do you want to conduct A/B tests using a simple-to-use visual editor, but don’t have the budget for the established tools? Well, you can now conduct A/B and multivariate tests for free using Google’s new solution, Optimize. Register for free and start conducting online experiments. Find out what does and what doesn’t work on your site and stop having to rely on best practice!

What is Google Optimize?

Google Optimize is the free version of Google Optimize 360, an A/B testing and personalisation platform. Optimize allows marketers to run up to three A/B tests (or multivariate tests) at a time and provides a simple-to-use visual editor that enables a non-technical marketer to set up experiments in a matter of minutes. It also integrates fully with Google Analytics.

 

Google Optimize (free) and  Google Optimize 360

So, what are the main differences between Google Optimize 360 and Google Optimize?

Limit on number of tests. Optimize only allows you to run up to three concurrent experiments. For many small and medium-sized sites with no one dedicated to conversion rate optimisation this may not prove to be a major restriction. I know even large websites that struggle to run more than a couple of tests at once and would save tens of thousands of pounds if they switched to Google Optimize.

Limited multivariate tests. Optimize allows you to conduct multivariate tests, but it limits you to 16 variations. Again, for many sites this may not be a problem because the more variations you have, the more traffic you need to complete the test within a reasonable timescale.

No audiences. The free version of Google Optimize does not allow the use of Google Analytics audiences to target which visitors to include in tests. However, there are other targeting features available to use.

Objectives have to be pre-set. Unlike Optimize 360, there is no ability to analyse additional goals after the test has been set up. Although post-test goal analysis is a useful feature, it can encourage lazy thinking: you should have a single success metric linked to a strong hypothesis.

How to get started with Google Optimize?

Assuming that you already have Google Analytics, implementing Google Optimize is a simple process that looks like this:

  • Create an account and container
  • Link Optimize to Google Analytics
  • Paste snippet into Google Analytics script
  • Add snippet of code to eliminate flicker from A/B tests

When you register for Optimize you will be asked to create an account for your business and then a container for each website. You should then link your container (the individual website) to your Google Analytics account as this allows the two tools to share data. I recommend you do this as it allows for analysis of your tests within Google Analytics.

Optimize then prompts you to add the Optimize snippet to your site. This is only a single line of code that is inserted before the last line of your Google Analytics JavaScript.

Image of snippet of code for implementing Google Optimize
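A minimal sketch of what that single line looks like inside the analytics.js tag. The property ID 'UA-XXXXXXX-Y' and container ID 'GTM-XXXXXXX' are placeholders, and the real `ga` command queue is created by Google's analytics.js loader; it is stubbed here only so the sketch runs standalone:

```javascript
// Stub of the `ga` command queue (normally provided by analytics.js)
// so this sketch is self-contained and testable.
var commands = [];
function ga() { commands.push(Array.prototype.slice.call(arguments)); }

ga('create', 'UA-XXXXXXX-Y', 'auto'); // placeholder GA property ID
ga('require', 'GTM-XXXXXXX');         // the single Optimize snippet line
ga('send', 'pageview');               // the existing last line of the GA tag
```

Note that the `require` call sits immediately before the final `send` command, matching the "before the last line" placement described above.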

To minimise page flickering from A/B tests Optimize also recommends that you add some additional code to each page immediately before the Google Analytics code.

Create an experiment:

You are now ready to set up an experiment and Optimize gives you three types of tests to run.

  • A/B tests: Two or more variants of a single page
  • Multivariate test: Two or more different sections of a page to be tested
  • Redirect test: Sometimes called a split test, where you test one or more whole new pages or paths on separate URLs.

 

Image of types of experiments in Google Optimize

My example here is an A/B test where I have changed the heading to make it shorter and snappier which also brings more content above the fold.

Image of heading A/B test from Conversion-uplift.co.uk

Visual editor:

Optimize has a simple-to-use what-you-see-is-what-you-get (WYSIWYG) visual editor which allows you to add, remove or change content. To access the visual editor you will need to download the Chrome extension for Optimize or use a browser that supports CSS3 selectors.

You can now create the variant you want to test using the visual editor, or specify the URLs you want to test if you plan a redirect test. To make changes using the visual editor, click on the heading or container you wish to amend. This opens a menu with quick tools for making simple changes to text, typography and orientation. If you select the Edit Element button you will see more advanced options, including Remove, Edit text, Edit HTML and Insert HTML.

Image of edit and advanced edit options

Make sure you save your changes and confirm you are “Done” to create your variant.

Setting Objectives & Targets:

Before you publish your experiment you must set your objectives and decide what audience you want to target. If you have linked Google Analytics to your account you can use any goals that you have set up in GA as an objective. Optimize also has Pageviews, Bounces and Session Duration as default objective options.

Optimize allows you to select up to three objectives for each experiment. For my A/B test I selected Bounces and Pageviews. You should then decide which users you want to target as this needs to be set before the experiment begins. Click on the “CREATE RULE” button to open the side menu.

Image of how to create a rule in Google Optimize

For many tests you may want to target only new visitors to your site. This ensures that visitors won’t have previously seen the default experience, which could otherwise skew your test results. Google Analytics sets a cookie on the user’s first visit to your site, which means you can target an experiment at first-time users by specifying a short value for Time since first arrival. To set this up, create a behaviour targeting rule like this:

Targeting new visitors – Example 1

Variable: Time since first arrival
Match type: Less than
Number: 10
Value: seconds

 

To target a test to any page that a new user visits in the first hour since they first landed on your site, create this behaviour targeting rule:

Targeting new visitors – Example 2

Variable: Time since first arrival
Match type: Less than
Number: 60
Value: minutes

 

The current targeting options are as follows:

URLs. Target individual pages and sets of pages. URL targeting enables you to pick the page where your experiment is to run. This allows you to target a single page, a narrow subset of pages, or Hosts and Paths.

Behaviour. Target visitors arriving on a site from a specific channel or source. It allows you to target first time users and visitors from a specific referrer.

Geo. Target users from a specific city, region or country. When you type in the Values field, you will see suggestions from the AdWords Geographical Targeting API to speed up rule creation.

Technology. Target visitors using a specific browser, operating system or device. Optimize tracks the browser’s user agent string to identify which browser a visitor is on, what version and on which operating system.

JavaScript Variable. Target pages using JavaScript variable values. This allows you to target according to a value in the source code of the page in the form of a JavaScript variable.

First-party cookie. Target the value of a first-party cookie in the user’s browser. This allows you to target returning visitors who will already have a first-party cookie from your site.

Custom JavaScript. Target pages using a value returned by custom JavaScript. This allows you to inject JavaScript onto a page, then target your test based upon the value the JavaScript returns. For example, if you wanted to target users visiting your site during the morning you could write a JavaScript function that returns the current hour, then set a targeting condition that looks for a returned value of less than 12.
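A sketch of that morning-hours example. The function names and the injected `Date` argument are illustrative; in Optimize you would supply just the function body and configure the "less than 12" condition in the rule builder:

```javascript
// Returns the visitor's current hour (0-23); Optimize would compare the
// returned value against the rule's threshold. The optional `now`
// argument exists only to make the sketch testable.
function currentHour(now) {
  return (now || new Date()).getHours();
}

// Equivalent of a targeting condition "returned value less than 12".
function isMorningVisitor(now) {
  return currentHour(now) < 12;
}
```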

Query Parameter. Target specific pages and sets of pages. Query parameter targeting explicitly targets values that occur in the query string of a URL. These are found between the question mark and the hash mark in the URL query string.

Data Layer Variable. Rather than referencing JavaScript variables in your targeting rules, you can reference key-value pairs that are contained in the data layer. You may want to create a targeting rule that uses shopping cart data or other information available on the page. For example, you might want to target users who have just completed a purchase of more than £100. This information could be stored in the data layer and so Optimize could retrieve it from there.
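A sketch of the data-layer pattern such a rule would target. It assumes the page pushes transaction details into `window.dataLayer`; the key name `transactionTotal` is illustrative, not a reserved name:

```javascript
// The page's tagging would normally populate this on purchase confirmation.
var dataLayer = [];
dataLayer.push({ event: 'purchase', transactionTotal: 125.5 });

// Equivalent of a Data Layer Variable rule "transactionTotal greater than 100".
function isHighValuePurchase(layer) {
  return layer.some(function (entry) {
    return typeof entry.transactionTotal === 'number' &&
           entry.transactionTotal > 100;
  });
}
```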

Personalisation:

These targeting options allow you to easily use Google Optimize for personalisation as well as for testing. For example, you could use Optimize to display a different image or heading for new visitors compared to returning visitors. Alternatively you could change the heading or message for visitors arriving from a specific source of traffic or customize text according to the user’s location.

Reporting test results:

To view the performance of your test variants simply go to your experiment and select the Reporting tab in the top left-hand menu. Alternatively you can view results in Google Analytics by selecting Behaviour>Experiments. This provides a simple improvement overview which compares your variant with the original experience.

Image of reporting from Google Optimize

Here we can see that in my headline test, variant 1 had a 69% chance of being the best performing experience. However, the test had only been running a few days, so it was far too early to draw any definite conclusions.

Length of tests:

Google Optimize recommends that all tests are run for at least two weeks. This allows for the weekend effect as people often behave differently during the week when they are at work compared to when they are at home for the weekend. It is also important to consider how long your business cycle is so that you don’t end a test before a full cycle has ended.

After the test has been running for a reasonable length of time and you have a sufficiently large sample of users, Optimize will display a definitive recommendation about the test. This is very useful if you are new to testing.

Conclusion:

For a free tool, Google Optimize is a powerful and easy-to-use A/B testing engine that will meet the needs of most small and medium-sized websites. It is by far the best free testing solution currently on the market and it has most of the functions and capabilities of paid-for solutions.

It allows companies with small or even non-existent budgets to conduct tests and begin to personalise their user experience. Google Optimize may be a game-changer as far as A/B testing is concerned. Expect to see more organisations begin to run tests and experiment with personalisation. Given the cost of some paid-for solutions, I would expect some organisations to consider switching to Optimize. If their current testing solution is not being fully utilised they could potentially save thousands of pounds a year by switching.

Related posts:

Optimisation process – 8 steps guaranteed to boost your conversion rate. 

Importance of web analytics – 18 Free & Paid Web Analytics Solutions.

Types of A/B tests – How to use A/B testing to optimize your website.

Strategy – How should you prioritise your A/B test ideas?

Thank you for reading my post and I hope it has inspired you to create a Google Optimize account and start running experiments and test personalisation on your site.  If you found it useful please share using the social media icons below.

You can view my full Digital Marketing and Optimization Toolbox here.

To browse links to all my posts on one page please click here.

  • About the author: Neal provides digital marketing optimisation consultancy services and has worked for brands such as Deezer.com, Foxybingo.com, Very.co.uk, partypoker.com and Bgo.com. He uses a variety of techniques, including web analytics, personas, customer journey analysis and customer feedback to improve a website’s conversion rate.
  • Neal has had articles published on website optimisation on Usabilla.com and, as an ex-research and insight manager, on the GreenBook Blog research website. If you wish to contact Neal please send an email to neal.cole@conversion-uplift.co.uk. You can follow Neal on Twitter @northresearch, see his LinkedIn profile or connect on Facebook.

Don’t Let This Bias Destroy Your Optimisation Strategy!

Avoiding Logical Errors in Website Optimisation:

During World War II, researchers at the US Center for Naval Analyses were given a difficult problem to solve.  They were asked to recommend where to reinforce US bombers to reduce aircraft losses over enemy territory.  They decided to conduct a review of the damage inflicted on US bombers that returned from combat missions.

What we see is all there is:

Naturally they recommended that armour should be added to those areas that showed the most damage on the planes they assessed. But the statistician Abraham Wald pointed out that the study was suffering from survivorship bias. They had only considered aircraft that survived their missions as the bombers not included in the analysis had been shot down.

Wald argued that the holes in the returning aircraft represented areas where a plane could take damage and still return home safely. Using his insight, he recommended reinforcing the areas where returning planes showed no damage. These, he reasoned, were likely to be the areas that, if damaged, would prevent a plane from returning safely home.

Example of survivorship bias from US airplane in 2nd World War

What is survivorship bias?

Survivorship bias is one of the most common logical errors that optimisers make as it plays on our desire to deconstruct success and cherry pick data that confirms our existing beliefs (see confirmation bias). People are prone to comparing survivors with the overall average despite evidence that survivors have unusual properties, namely that they have been successful.

By only examining successful outcomes we tend to become over-optimistic and may set unrealistic expectations about what optimisation can deliver in the short-term. We have a tendency to ignore the many more tests that have failed to deliver an uplift and only focus on our successes. As a result we tend to overestimate the importance of skill and underestimate the role of luck in the process.

To manage expectations appropriately consider:

  • Huge uplifts from tests don’t happen very often.
  • Testing the low-hanging fruit will not give you a competitive advantage.
  • A majority of tests don’t achieve an uplift. However, negative or neutral tests still provide valuable insights, so don’t ignore them.
  • Conversion rate optimisation is a long-term strategy and not a tactical sprint.
  • Tests that work for one site may not work on a different site. Each site is unique and has its own customer base.

Survivorship bias can also lead to misleading conclusions and false beliefs that successful members of a group (e.g. VIP customers) share common innate characteristics, rather than being the product of a process they have completed. For example, very few, if any, customers are born as VIPs. Optimisers need to be careful to avoid the following traps resulting from survivorship bias:

Understand visitor types:

Visitors are influenced by the process they complete online. Be careful about including  returning visitors or existing customers  in your A/B tests.

Returning visitors are problematic not only because they may have already been exposed to the default design, but also because most visitors don’t return to a site. Returning visitors are survivors because they didn’t abandon your site and decide never to come back due to negative aspects of the user experience. They weren’t put-off by your value proposition, the auto-slider, long form registration or other aspects of your site that may have caused some new visitors to bounce. They are also likely to have higher levels of intent than most new visitors.

Existing users are potentially even more biased as they have managed to jump through all the hoops and navigate around all the barriers that many other users may have fallen at. They have also worked out how to use your site and are getting sufficient value to want to continue with using it. This means they are likely to respond very differently to changes in content than might a new visitor.

This does not mean you cannot conduct A/B tests with returning visitors or existing customers. You can if the objective is appropriate and you don’t assume the test result will apply to other visitor types. Just be careful about what you read into the results.

Examine user personas:

Similarly, each user persona may have different intent levels due to the source of traffic or other factors influencing behaviour. For instance, be careful about including Direct traffic in your A/B tests: you have to question why a genuinely new visitor would type your URL directly into the browser. Perhaps some of these visitors have cleared their cookies and are in fact returning visitors.

Why do uplifts sometimes decay?

Survivorship bias can also result in management questioning the sustainability of uplifts. When you first launch a tactical change to your website, such as a promotional pop-up, it is something new that none of your visitors will have seen before.

Example of how to ask a question to get commitment for improving blog sign-ups

This may result in a significant uplift in your overall conversion rate for both new and returning visitors. However, as a proportion of visitors seeing the prompt for the first time will have signed up, these users will no longer be part of your target audience as they have created an account.

As a consequence, your overall conversion rate will automatically decline over time: visitors who are going to be influenced by the pop-up sign up (and leave the target audience), while those who are not influenced don’t. Further, as more visitors come back to the site after experiencing the new pop-up, the proportion of non-customers who have never seen it before will shrink towards just new visitors. As returning visitors become acclimatised to the promotional pop-up, its effectiveness among this type of visitor is likely to decline.

This can make it appear the uplift was not sustainable. However, if you analyse new visitor conversion you are likely to see that the uplift has largely been maintained. But even here there may be a notable decay in the uplift over time as a proportion of returning visitors regularly clear their cookies and so are tracked as new visitors by your web analytics.

This needs to be explained to stakeholders to manage their expectations for the overall conversion rate. If it is not understood, this effect is sometimes used to challenge the sustainability of uplifts from conversion rate optimisation. To respond to this phenomenon, it is worth revisiting changes on a regular basis to review conversion rates and to test new variants if necessary.

Frequency of email and push notification campaigns:

A common question that digital marketers have is what is the optimum frequency of email and push notification campaigns. Often people assess this by analysing existing user engagement. However, relying on existing users is a heavily biased approach because these customers have self-selected themselves on the basis that they are happy with your current frequency of engagement. Those who are not happy with the level of contact will have already unsubscribed.

Instead you should test email and push notification contact frequency using an unbiased list of new users who have recently signed up and have not received any campaigns so far. Provided the sample size is large enough and they have never been included in CRM campaigns you should test contact frequency using this clean list of new users.

Pre-screening traffic:

Be cautious about rolling out changes that generate uplifts for pre-qualified visitors. Just because a landing page produces an uplift from a highly engaged email list you cannot assume it will help convert unqualified traffic.

Different types of CTAs:

Why is it that web designers are the only kind of designers who think that all calls to action (CTAs) should look identical? The reason aircraft cockpits have different types, sizes and colours of switches and buttons is to clearly differentiate between their different uses. A newsletter sign-up CTA is very different from an add-to-basket button or a buy CTA. The nature of the user’s decision needs to be reflected in the design of the CTA, so it is dangerous to prescribe in your brand guidelines that all CTAs look the same.
Types of CTAs

Instead, you should optimise a page for the specific CTA required at that stage in the user journey. As a user proceeds through the conversion journey their intent and needs change, and this should be reflected in the design of the CTA. Just because a CTA works on a landing page does not mean it will be optimal for a product page or checkout.

Law of small numbers:

Be careful not to rely on small sample sizes when analysing web analytics or test results. The law of small numbers refers to our tendency to underestimate the variability that comes with small samples. Essentially, we often get extreme results simply because so few survivors are left in the sample.
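A quick illustration of the point (the 10% true conversion rate and the seeded generator are arbitrary assumptions): simulate visitors converting at a fixed rate and compare what small and large samples report.

```javascript
// Simple seeded linear congruential generator so the simulation is repeatable.
function lcg(seed) {
  return function () {
    seed = (seed * 1664525 + 1013904223) % 4294967296;
    return seed / 4294967296;
  };
}

// Observed conversion rate from n simulated visitors with true rate p.
function observedRate(n, p, rand) {
  var conversions = 0;
  for (var i = 0; i < n; i++) {
    if (rand() < p) conversions++;
  }
  return conversions / n;
}

// With n = 20 the observed rate can easily come out at 0% or 25%;
// with n = 20,000 it settles close to the true 10%.
```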

Take care with multivariate tests:

Avoid having too many recipes (i.e. variables being changed) in your MVTs, as otherwise you will end up with small sample sizes. It may be better to concentrate on testing one area at a time with a well-designed A/B test. Often a slower optimisation process that stays within your traffic capabilities is more reliable than trying to overdo multivariate testing.

Don’t take users literally:

Qualitative research and usability testing can provide useful insights for understanding user needs and for developing hypotheses. However, most users don’t reply to surveys, on or offline. Further, neuroscience research indicates that a majority of our decisions are made by our non-conscious brain. This means that we are not fully aware of why we make many of the small decisions involved in navigating a website. Always base decisions on users’ actions, not on what they say.

Conclusion:

People are prone to survivorship bias because they lack a good understanding of statistics. Training in this area of optimisation will make your team stronger and less likely to fall into the trap of neglecting users who don’t survive a process.


How is AI Disrupting Conversion Rate Optimisation?

Using Evolutionary Pressures To Optimise:

Digital marketing is a zero-sum game – it’s survival of the fittest. Brands have to respond to changing customer needs and pressure from competitors or they go out of business.

What if you could use these evolutionary pressures to automatically adapt and adjust your site according to what has the highest conversion rate? And if your audience changed, perhaps due to a TV campaign, wouldn’t it be great if your site responded by optimising your user experience for the new audience profile? But rather than only improving a single page, what if it could simultaneously optimise multiple pages in the user journey?

Well, with the advent of AI and evolutionary algorithms this time has arrived! Sentient, a company born out of the minds that developed the technology behind Apple’s Siri, has come to market with Ascend. Sentient has combined evolutionary computation (a form of AI that uses mechanisms inspired by biological evolution) and deep learning to create a market-leading optimisation solution. What is unique and exciting about Ascend is that it is capable of autonomous decision-making to help businesses improve their bottom line and enhance the customer experience at the same time.

 

What are the benefits of Ascend?

Sentient Ascend is the first testing and optimisation solution developed by integrating AI, evolutionary algorithms and deep learning technology. As a result it has the capability to revolutionise how testing and optimisation is carried out. The main benefits of Ascend are:

  • Massively complex multivariate tests that have over 1 million possible combinations can be completed with Ascend that would be impossible with traditional MVT solutions. Below is an example of the kind of test that is now feasible with Ascend.

 

Image of multivariate test with over 1 million possible combinations
Source: Sentient Ascend

 

  • Ascend requires lower traffic levels than traditional optimisation solutions because it uses what it discovers about the performance of a particular combination of elements to predict how that combination will influence the conversion rate in the future.
  • Testing is completed with greater speed and double digit uplifts in conversion rates are normally achieved within the first 2 months of employing Ascend. Recently completed tests have achieved between a 12% and 48% uplift in conversions.
  • It can optimise multiple pages simultaneously to improve conversion rates throughout a user journey.
  • Indeed, for underwear brand Cosabella, Ascend tested 15 different changes to the homepage header, category page, product page and shopping cart design. Standard multivariate testing would have required 160 separate tests, whereas Ascend automates the whole process. This improved conversions by 35% compared to the control experience.
  • Automates the testing program so that once all your ideas have been input into Ascend it employs all the power of AI to adapt and respond to user interactions to identify the best performing combination of changes to your site or web app.
  • It allows for tests to be paused and new ideas to be input into the testing program as and when required.
  • Automatically adapts to a change in the visitor audience profile without the need for any manual intervention.

How does evolutionary computation work?

To give the evolutionary algorithm a purpose it is first necessary to define a fitness measure. With conversion rate optimisation (CRO) the fitness measure should be the conversion metric that you wish to optimise for such as sales, revenues, average basket value, first time deposit or sales leads. It is important to take care in selecting your fitness metric because it needs to be a characteristic that makes one experience (or algorithm) better than another.

With an evolutionary algorithm each page (i.e. a selected combination of elements) is classed as a genome and it uses genetic operators (i.e. selection, mutation and crossover) to create and maintain genetic diversity. In the example below two high performing pages (see column on the left) have been identified through selection (i.e. survival of the fittest).

Image of how Sentient Ascend uses evolutionary algorithms to optimise designs
Source: Sentient Ascend

However, a further generation of solutions can then be created through crossover (i.e. recombining elements from the two high-performing genomes) to create children; the middle solutions above. Mutation (i.e. randomly altering one element in the child’s chromosome) encourages diversity amongst solutions and seeks to prevent the algorithm converging on a local optimum by avoiding solutions becoming too similar to each other. This is shown in the right column above.

Although each operator individually seeks to improve the solutions generated by the algorithm, the operators work together to create an optimal solution that would not be possible if they were used in isolation. In the first instance the algorithm simply evaluates each page (i.e. genome) to identify whether it performs well enough to be a parent for the next generation; otherwise it is rejected.
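The selection, crossover and mutation steps described above can be sketched as a toy evolutionary loop. This is an illustrative genetic algorithm, not Sentient's actual implementation; a genome here is assumed to be an array of element-choice indices such as [headline, image, CTA]:

```javascript
// One generation: select the fitter half as parents, then refill the
// population with children made by crossover plus occasional mutation.
function nextGeneration(population, fitness, rand) {
  rand = rand || Math.random;

  // Selection: rank by fitness (e.g. measured conversion rate), keep the top half.
  var ranked = population.slice().sort(function (a, b) {
    return fitness(b) - fitness(a);
  });
  var parents = ranked.slice(0, Math.ceil(ranked.length / 2));

  var children = [];
  while (parents.length + children.length < population.length) {
    var p1 = parents[Math.floor(rand() * parents.length)];
    var p2 = parents[Math.floor(rand() * parents.length)];

    // Crossover: each element of the child comes from one of the two parents.
    var child = p1.map(function (gene, i) {
      return rand() < 0.5 ? gene : p2[i];
    });

    // Mutation: occasionally randomise one element to maintain diversity.
    if (rand() < 0.1) {
      child[Math.floor(rand() * child.length)] = Math.floor(rand() * 4);
    }
    children.push(child);
  }
  return parents.concat(children);
}
```

Running this repeatedly, with fitness supplied by live conversion data, is the essence of how such a system converges on high-performing combinations.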

Image of illustration of how an evolutionary algorithm works

This allows literally thousands (out of millions) of experiences to be tested in a short space of time. As Ascend learns which combinations of elements create the best performing designs, it automatically adjusts experiences according to how visitors respond. Below is an example of changes that Ascend can evaluate as part of a single multivariate test.

The advantage of this technology is that it can create page designs that convert better than those designed by people, because it automatically searches for unexpected interactions between elements. It also doesn’t suffer from human misconceptions or biases, which means it can generate surprising ideas that we might never have thought of ourselves.

What’s the catch?

Like any optimisation software Sentient Ascend relies on the quality of ideas and designs to generate uplifts in conversion. It is therefore essential to invest in the people who will be using Ascend to ensure they have the required skills and support to get the most out of this amazing solution.

To generate a sufficient quantity of ideas and designs for testing will take some time and resource as you are essentially compressing twelve months or more of testing into a single month or two. This is an analytical and creative process and so it will require the input and approval from various stakeholders if it is to be a success.

To keep Ascend fed with additional ideas after the initial test will also require further planning and support to ensure you get value for money from the solution. There is certainly a danger that rather than focusing on quality hypotheses, users might be tempted to throw every idea into the mix without proper evaluation and prioritisation. This would be a recipe for a sub-optimal result: as with any model, if you put garbage in you will get garbage out.

As with any multivariate test, it is advisable to run an A/B test to validate the winning experience. Ascend can manage this for you, or you can use your existing A/B testing solution to conduct the experiment.

Conclusion:

Sentient Ascend makes most existing testing software obsolete because it offers an automated platform for massively multivariate conversion optimisation. This allows you to test an enormous number of ideas in a shorter time period than is possible with existing solutions. It is also more efficient at discovering new combinations of elements that result in uplifts in conversion due to the evolutionary nature of the algorithms.

Further, as you can add new ideas as you test you don’t need to wait for the test to end to respond to changes in campaign execution or strategy. You can just keep testing continuously if you have the ideas.

Note: Conversion Uplift is now an accredited partner for Sentient Ascend.


 

How To Optimize Your Website’s Performance Using A/B Testing

6 Experiments For Higher Conversions:

6 types of tests to optimise a website page

When people talk about A/B testing they often refer to call to action button changes and landing page tests. They also sometimes advise changing only one element on a page at a time so that you can tell exactly what generated the difference between the two experiences. This last piece of advice can be quite misleading and could hold back your optimization progress.

You need to base your testing programme on a systematic, evidence-based process of discovery and prioritisation. But once you have that in place you also need to consider how to build a test plan for each of your key pages or journeys.

This brings us to the question of which types of tests you should include in your testing roadmap. I've outlined six testing approaches below, and you should employ all of them to optimize your site and improve conversions.

1. Innovation Tests:

Unless you happen to work for Google or some other mega website, you have to change more than one element at a time if you are to make quick progress on your optimization journey. An innovation or redirect test allows you to experiment with something completely different. This gives you the opportunity to ensure the new page is more aligned with your business goals. The idea is that you can leave all the baggage of the existing page behind and design a radical new experience that allows you to leapfrog to a much higher conversion rate.

Find an important web page, one with lots of traffic and a conversion rate that you believe can be significantly improved upon. You can then use a heuristic evaluation of the page to identify areas for improvement and use the other stages of the optimization process to gather further insights to help you construct your new innovative design.

As the design is radically different from your existing page you may want to manage the risk that it actually reduces conversion by starting it on a relatively low proportion of traffic. However, after a week or so, if you are not seeing a big drop in conversion you can increase the proportion of traffic that sees the new page to reduce the time it will take for the test to complete.

 

2. Optimise With Multivariate Tests:

Once you have found a new innovative design that performs better than your existing page you should look to dissect it to understand how you can further enhance its effectiveness. Provided you have sufficient traffic, multivariate testing (MVT) can be used. Unlike A/B testing, MVT allows you to change content within multiple sections of the same page and compare all the possible combinations against each other. For example, if you wanted to test changes to three sections on a page, with two variants for each section, that would generate 8 combinations.

2 x 2 x 2 = 8

However, adding just one more variant in a single section increases the test combinations from 8 to 12.

2 x 2 x 3 = 12
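The combination counts above can be checked by enumerating the variants programmatically. A minimal Python sketch (the section and variant names are purely illustrative, not from any real test):

```python
from itertools import product

# Three page sections, each with its candidate variants.
# Section and variant names are illustrative only.
sections = {
    "headline": ["Original", "Benefit-led"],
    "hero_image": ["Product shot", "Lifestyle shot"],
    "cta_button": ["Buy now", "Add to basket"],
}

# Every MVT combination is one element of the Cartesian product.
combinations = list(product(*sections.values()))
print(len(combinations))  # 2 x 2 x 2 = 8

# Adding one more variant to a single section raises the count to 12.
sections["cta_button"].append("Start free trial")
print(len(list(product(*sections.values()))))  # 2 x 2 x 3 = 12
```

Listing the combinations like this is also a quick way to sanity-check that every variant pairing makes sense together before the test is built.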

MVTs have the advantage that they allow you to isolate many small page elements to understand their individual impact on conversion. You can also evaluate interaction effects between multiple independent elements to find compound effects. This can save you time as you don't have to create and test many different variations for a page element that might not even have much impact upon your conversion rate.

On the downside, MVTs require more traffic to achieve statistical confidence than an A/B test. If you don't have the traffic to support a complex MVT, limit the number of combinations or conduct a series of A/B tests instead. With MVTs you also need to ensure that all variations within each section make sense together. Once the MVT has identified which page elements contribute most to conversion, you should validate the winning combination with an A/B test to check that it delivers the promised uplift.
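You can estimate the traffic an experiment needs before launching it. The sketch below uses the standard two-proportion sample-size approximation with Python's standard library; the baseline conversion rate and target uplift are illustrative assumptions, not figures from this post:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline, uplift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a 10% relative uplift on a 3% baseline conversion rate
# needs tens of thousands of visitors per variant.
n = sample_size_per_variant(0.03, 0.10)
print(n)

# An 8-combination MVT needs roughly 8x this traffic in total,
# whereas a simple A/B test needs only 2x.
```

Running this kind of estimate first makes it obvious when a complex MVT should be trimmed down or replaced with a series of A/B tests.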

 

Image of two web pages with different button contrast

Source: NickKolenda

3. Real Estate Tests:

Although you may now have a high performing page, how do you know that all the elements on the page are in the best location? Some elements could be underperforming from a conversion perspective simply because they are in a sub-optimal location. Perhaps your main call to action is too far down the page, or testimonials are taking prime real estate at the top of the page when they would be equally effective further down, just above the fold.

Image of two webpage click heatmaps

Source: ClickTale.com

Never assume that elements are in the best locations. Your analytical tools, such as click and mouse movement heatmaps, should provide evidence when certain elements are not getting the attention you might expect, but to confirm this you will need to work with your web designers to develop tests that challenge the existing location of key assets on the page. Try moving elements to different locations on the page, but ensure that the page flow still works, as otherwise this could skew the test result.

4. Inclusion/Exclusion Test:

Is that auto-rotating carousel really improving conversion? This is the stage in your page optimization process where you start turning off elements on your page to identify what influences conversion. If you remove the carousel from your homepage and see a positive impact on conversion, this tells you that you either have a poorly designed carousel or that you could use that prime real estate for other types of assets that might increase conversion.

This type of test is ideal for pages like your homepage that have many different elements on them and could benefit from being de-cluttered. Having unnecessary assets on a page can be distracting and reduce engagement at an important stage in the user journey. If removing an element has no impact on conversion, it can be considered for permanent removal, or it could be moved to a less important page or location.

When removing an asset has a negative impact on conversion you know to retain it, as showing it clearly improves conversion. You should then run follow-up A/B tests on this element to determine the best design for that type of asset. Conversely, be cautious about removing assets whose removal shows a positive impact on conversion if the element relates to specific use cases or conversion goals. Maybe the element has been poorly designed or is difficult to understand. If you have any evidence that this might be the case, A/B test different variants before deciding to remove it from the page.

5. Segment and Target Your Tests:

 


If you treat all your visitors the same you can only expect to have an average conversion rate. By definition some of your test variations will better meet the needs of certain visitor segments, and as a result they may convert significantly higher for that group but less well for other types of visitors. To further improve your conversion rate you should evaluate how you can segment and target your tests to create experiences designed to better satisfy the needs of individual customer groups.

This approach will also boost your conversion rate because it leads to a much more dynamic website that responds to the needs of different user segments. Set up in your analytics the key visitor segments (e.g. new and returning customers) that you want to analyse and target with different content. This allows you to analyse your test results to identify customer segments that performed significantly better than your average conversion uplift. You can then serve your winning test experience to all those visitor types that are more responsive to your new content.
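The segmented read-out described above can be sketched in a few lines of Python; the segments, visitor counts, and conversion numbers below are entirely hypothetical:

```python
# Hypothetical test results broken down by visitor segment.
# (segment, variant) -> (visitors, conversions)
results = {
    ("new",       "control"): (5000, 150),
    ("new",       "variant"): (5000, 155),
    ("returning", "control"): (5000, 200),
    ("returning", "variant"): (5000, 260),
}

def uplift(segment):
    """Relative uplift of the variant over control for one segment."""
    control_visitors, control_convs = results[(segment, "control")]
    variant_visitors, variant_convs = results[(segment, "variant")]
    control_rate = control_convs / control_visitors
    variant_rate = variant_convs / variant_visitors
    return variant_rate / control_rate - 1

for segment in ("new", "returning"):
    print(f"{segment}: {uplift(segment):+.1%}")
```

In this made-up data returning visitors respond far more strongly than new ones, so the winning experience might be targeted at that segment only rather than rolled out to everyone.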

Content automation is increasingly encroaching into this space and, although it is a great tool, it is not a silver bullet. You can only automate the content you have, and if that content is not optimal and engaging, automating it will be of limited value. You should use A/B testing first to help create relevant content and understand how individual visitor segments respond to different user experiences. This will improve your chances of producing content that benefits from automation and is responsive to customer needs.

6. Test Iteration:

To avoid random and ad-hoc testing you should always base your tests on insights gleaned from previous tests, or test additional assets following on from an initial test. Testing is a continuous process that enables your website to evolve gradually to better satisfy customer needs and provide new insights to enhance your content marketing. A test and learn process is a much more scientific approach to website improvement than completely redesigning your website from scratch.

Image of Test and Learn Process of A/B testing

 

In Conclusion:

By using these strategies to create a systematic plan for optimising key pages on your site you are more likely to deliver substantial and sustainable uplifts in conversion. Each type of test is designed to provide specific insights and allow you to further enhance your conversion rate.

Never assume you have come to the end of your journey, as your competitors will look to respond to your optimization strategy and disruptive technologies may change customer behaviour. You will need to continue the optimization process if you want to respond to changing visitor needs.

Thank you for reading my post. If you found it of interest please share this post by clicking on the social media icons below.


  • About the author:  Neal provides digital optimisation consultancy services and has worked for  brands such as Foxybingo.com, Very.co.uk and partypoker.com.  He identifies areas for improvement using a combination of approaches including web analytics, heuristic analysis, customer journey mapping, usability testing, and Voice of Customer feedback.  By  aligning each stage of the customer journey  with the organisation’s business goals this helps to improve conversion rates and revenues significantly as almost all websites benefit from a review of customer touch points and user journeys.