If you have ever run a Google Website Optimizer test, you have probably heard a similar question at the end of the test, when a winner is declared:
“How do I know it was really the winner and not just chance that visitors converted from that page?”
And there are probably a hundred “excuses” someone can give as to why the winning page isn’t really the winner. Here are a few:
- Obviously more qualified traffic went to the winning page than the other pages.
- Conversions happened on every page, it is just chance that more converted on the one page.
- There should always be a difference in conversion rate no matter what page is presented; it depends on the visitor not the page.
There is probably some truth to all of these statements, but unless you interview every visitor to each page, you cannot prove or disprove any of them. One simple test, though, can take at least some of the guesswork out of the results: null testing. In other words, test “nothing”.
Prior to the “real” test, make an exact duplicate of the landing page (in an A/B test) or of the variations (in a multivariate test) and run the duplicate against the original. With all variables the same, any difference in conversion rate between the pages can be treated as the baseline difference due to chance. Apply this baseline to the results of the “real” test.
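To see why a null test matters, here is a minimal sketch in Python. The visitor counts and the 10% conversion rate are hypothetical numbers chosen for illustration: two identical pages are simulated, and purely by chance their observed conversion rates still differ.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

def simulate_page(visitors, true_rate):
    """Count conversions for a page with a fixed underlying conversion rate."""
    return sum(1 for _ in range(visitors) if random.random() < true_rate)

# Hypothetical figures: both pages are exact duplicates with a true 10% rate.
visitors = 1000
true_rate = 0.10

conv_original = simulate_page(visitors, true_rate)
conv_duplicate = simulate_page(visitors, true_rate)

rate_original = conv_original / visitors
rate_duplicate = conv_duplicate / visitors

# Even though the pages are identical, the observed rates will rarely match.
print(f"Original:  {rate_original:.1%}")
print(f"Duplicate: {rate_duplicate:.1%}")
print(f"Null-test difference: {abs(rate_original - rate_duplicate):.1%}")
```

That observed gap between two identical pages is the “natural” noise level you would then subtract from any real test result.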
If the difference in conversion rate in the null test is 2.5%, we should expect at least that much difference in the actual test from chance alone, so any result within 2.5% should not be considered conclusive. Only an increase above 2.5% should be treated as a true increase in conversion.
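Applying that rule is simple arithmetic. The figures below are hypothetical (a 2.5% null-test baseline and a 4.1% observed lift); only the lift beyond the baseline counts.

```python
# Hypothetical figures: baseline difference measured in the null test,
# and the lift observed by the winning page in the real test.
null_test_difference = 0.025   # 2.5% gap between identical pages
observed_lift = 0.041          # 4.1% lift for the variation

# Only the portion of the lift beyond natural variation is meaningful.
true_lift = observed_lift - null_test_difference

if observed_lift > null_test_difference:
    print(f"Conclusive: about {true_lift:.1%} true lift beyond chance")
else:
    print("Inconclusive: the lift is within the pages' natural variation")
```

With these numbers the test is conclusive, but only about a 1.6% improvement, not the full 4.1%, can be credited to the winning page.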
By taking natural differences in traffic and visitor engagement into account, we can take some of the “chance” out of reading the test results.