So you've decided to check your new UI/UX solution with A/B split testing? That's a very good idea! Yet the typical sequence of A/B test actions goes like this: you create two versions of whatever you want to test (e.g., a landing page or a CTA banner) and expect one version to show a higher conversion rate. The tester, software developer, designer, and project manager grab some popcorn and wait for the skyrocketing results. And this is the number one killer of your A/B split testing campaign! Let's see why.
1. Your focus group isn't large enough!
It's not rocket science to estimate the required size of your test sample. There's a formula to help determine the right one:

N = 16 × p(1 − p) / c²

where N is the number of people in each focus group, p is the baseline conversion rate, c is the minimum detectable effect, and 16 is a constant coefficient that ensures roughly 95% confidence in the test result.
For instance, with a current conversion rate of 15% and a minimum detectable effect of 5% (either up or down), we get:

N = 16 × 0.15 × (1 − 0.15) / 0.05² = 816
This means you need 816 participants in each variation of your A/B test. That lets you detect a statistically significant change beyond the 10%–20% range around your baseline conversion. If you want to detect a 1% change (from 14% to 16%), you'll need 20,400 users in each variation. You can try this calculator to estimate your A/B sample size.
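The rule-of-thumb formula above is easy to turn into a helper. A minimal sketch in Python (the function name is mine, not from any library):

```python
import math

def ab_sample_size(baseline_rate, min_detectable_effect):
    """Participants needed per variation, using the rule of thumb
    N = 16 * p * (1 - p) / c^2 (the factor 16 gives ~95% confidence)."""
    p = baseline_rate
    c = min_detectable_effect
    return math.ceil(16 * p * (1 - p) / c ** 2)

# The article's example: 15% baseline, 5-point detectable effect
print(ab_sample_size(0.15, 0.05))  # 816
# Detecting a 1-point change instead
print(ab_sample_size(0.15, 0.01))  # 20400
```

Note how the required sample grows with the square of the detectable effect: shrinking c from 5% to 1% multiplies the sample by 25.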
2. You don't analyze your traffic sources!
Pay close attention to traffic sources for your A/B split test. Different traffic sources have inherently different motivations. For instance, if one focus group gets more users from one traffic source, and another focus group gets more users from a different one, the difference in test results will be attributed to the initial user motivation rather than your new UI solution. You should either limit your test to just one traffic source (e.g., new direct traffic) or analyze results within each traffic source at the end of the A/B split test.
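Breaking results down per traffic source, as suggested above, can be done with a simple aggregation. A sketch under assumed inputs (the visit records and source names are made up for illustration):

```python
from collections import defaultdict

# Hypothetical raw results: (traffic_source, variant, converted)
visits = [
    ("direct", "A", True), ("direct", "A", False),
    ("direct", "B", True), ("direct", "B", True),
    ("ads",    "A", False), ("ads",   "B", False),
]

def conversion_by_source(visits):
    """Conversion rate per (traffic_source, variant) cell, so variants
    are compared within each source rather than across the mixed total."""
    counts = defaultdict(lambda: [0, 0])  # (source, variant) -> [conversions, total]
    for source, variant, converted in visits:
        cell = counts[(source, variant)]
        cell[0] += int(converted)
        cell[1] += 1
    return {key: conv / total for key, (conv, total) in counts.items()}

print(conversion_by_source(visits))
```

If variant B wins within every source but the sources are unevenly split between groups, the blended totals can still be misleading; the per-source view guards against that.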
3. Your A/B test duration is too short!
Run your A/B split test for as long as it takes to cover all the user types you have. The minimum is one week: over a full week, it's very likely that every user type will see your test variations.
4. You show different data to different users!
All of your focus group participants should see the same content and data during your campaign. In eCommerce, for example, that content includes prices, delivery terms, and stock availability. Of course, these data can vary from location to location due to different vendors, logistics, and other factors. Say you're selling products online across a huge territory and therefore have different prices and delivery terms: it's recommended that you choose just one location for your A/B split test, so all focus group participants see the same information and make unbiased decisions.
5. You don't know exactly what to improve!
You can only run an effective A/B split test if you have a clear understanding of what needs to be improved in your UI to increase traffic and sales. If you aren't sure which UI features need improvement and want to run your A/B split test just as a box-ticking exercise, you'd better do something else! In the best-case scenario you'll just waste your time. In the worst-case scenario you'll draw the wrong conclusions and harm your online performance.
Also, stay away from A/B testing if you only want to check a single parameter. It works better when you have a whole set of parameters to test.
In any case, set your goals properly before launching your A/B split test campaign. Don't expect that adding more text to a product description will lead to higher conversion; do expect an increase in the average time users spend on your website. An FAQ page may not increase sales, but it will most likely take some workload off your help desk.
And what other A/B test killers do you know?
Sources: https://siliconrus.com/2015/01/ab-errors/; image: Shutterstock