
Successful Website Optimisation Part 2.

By Chris Rowett, Director of Performance

Welcome to Part Two of Successful Website Optimisation. Following on from last month's blog, which you can read here, here are some more tools to help you choose the right pages and tests to run:


In-Page Analytics

It is also important to understand how visitors use your chosen test page, so you can be confident that you are testing changes to elements that matter. We use ClickTale for in-page analytics before and during testing. It tracks mouse movement and clicks, and reports this information for a large sample of visitors, giving very valuable insight into which elements of a page are important and which are not. You can also see how far the average visitor scrolls down the page, which is very useful to know if there is important information below the fold. The biggest benefit of ClickTale is simply its ability to add meaning to the results that you see: pages that fail can be analysed, and you can learn a great deal about why they failed, which is invaluable.

User Testing

ClickTale shows us where visitors are moving their mouse cursor and clicking, but it doesn't tell us why they act that way. We can guess, but it is much better to ask real visitors. We simulate real visits by asking paid participants to complete tasks on the website typical of an average user. They are also asked to talk us through their rationale for using the site in the way that they are, and to highlight any thoughts about the site in general. Whilst we understand that this is never going to be 100% true to life, the sheer volume of ideas generated from the feedback is perfect for our needs. The feedback simply acts as a guide for things to test, starting with the elements that caused the most friction. You can also use a survey tool such as KissInsights to generate a small amount of very valuable feedback from real visitors.

Constructing Tests

With all the analysis concluded, you should be in a position to choose a page and a test to run on it. It is always a good idea to create a hypothesis stating your expected outcome for the test, for example:

"A more visible call to action will simplify the buying process and increase the likelihood of a visitor buying a product."

You will then run variations of the page which aim to prove this hypothesis. We ran a test similar to this, using a range of sizes and colours in our variations to establish which combination led to the highest conversion rate (large and red, incidentally).

Here are some key areas to focus on:

- Friction - is it easy to perform the actions necessary to complete the goal?
- Anxiety - are there trust elements (such as secure payment) that will remove the fear of purchasing?
- Focus - is it clear what the goal is and how to achieve it (are your goals the same as your visitor's)?
- Incentive - is there a reason why I should buy from this company?
- Value - what more can I expect?
- Motive - why should I buy now?

Often you will need pages (or changes) designed and developed for the test, which requires the analyst to produce clear wireframes. It is important to be very clear about your requirements, as a misunderstanding will lead to a test page that does not prove your hypothesis.
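To make the comparison concrete, here is a minimal sketch of how you might tally results from a call-to-action test like the one described above. The variation names and all the numbers are hypothetical, purely for illustration; a real testing tool would collect these figures for you.

```python
# Hypothetical results for a call-to-action size/colour test.
# Visitor and conversion counts are invented for illustration only.
results = {
    "control":    {"visitors": 5000, "conversions": 100},
    "large_red":  {"visitors": 5000, "conversions": 140},
    "large_blue": {"visitors": 5000, "conversions": 118},
    "small_red":  {"visitors": 5000, "conversions": 105},
}

def conversion_rate(stats):
    """Conversions as a fraction of visitors for one variation."""
    return stats["conversions"] / stats["visitors"]

# Report each variation's rate and pick the best performer.
for name, stats in results.items():
    print(f"{name}: {conversion_rate(stats):.2%}")

best = max(results, key=lambda name: conversion_rate(results[name]))
print("winner:", best)
```

The raw winner is only a starting point; as the next section explains, you still need enough conversions and a significance check before trusting the result.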

Running Tests

There are a number of testing tools in the marketplace which allow you to monitor the difference in performance between test pages. It is important to choose a tool which allows you to track the important metrics for your objective. Google Website Optimiser is great for simple tests involving a single objective such as conversion rate, whilst other tools like Omniture allow you to see revenue impacts and segment across different audiences.

It is good practice to clear your cookies and enter the test a few times to check that it works as expected. You should then be prepared to stand back until a reasonable number of conversions have gone through the test. The worst mistake is to make assumptions, or to panic early on based on only a few conversions. Conversion rates will always fluctuate dramatically at first, and only over time will you see the true picture of which page is performing best.

We often aim for 99% significance after at least 100 conversions per variation (so we can stop the test early), or 95% significance after 200 conversions per variation. You should also wait a minimum of two weeks in case there are weekend effects on the website.
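The significance thresholds above can be checked by hand with a standard two-proportion z-test, which is one common way testing tools decide whether a variation genuinely beats the control. This is a minimal sketch with invented visitor and conversion counts, not the exact calculation any particular tool uses:

```python
import math

def z_score(conv_a, vis_a, conv_b, vis_b):
    """Two-proportion z-test: control (A) vs variation (B).

    Returns the z statistic; |z| >= 1.96 roughly corresponds to
    95% confidence, |z| >= 2.58 to 99% (two-tailed).
    """
    p_a = conv_a / vis_a
    p_b = conv_b / vis_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (vis_a + vis_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / vis_a + 1 / vis_b))
    return (p_b - p_a) / se

# Hypothetical figures: both variations past the 200-conversion mark.
z = z_score(conv_a=200, vis_a=10000, conv_b=245, vis_b=10000)
significant_95 = abs(z) >= 1.96
print(f"z = {z:.2f}, significant at 95%: {significant_95}")
```

Even when a result clears the threshold, the advice above still applies: wait for the minimum run time so weekday/weekend patterns are represented before calling a winner.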