When running 2 or more ads in parallel (which you should strive to always do), it’s often tempting to just “eyeball” the CTR results and pick an ad as The Winner.
The specific trap I’m thinking of is when one ad has a noticeably higher CTR and a “decent” number of clicks. (You certainly should not declare a winner when each ad has only a handful of clicks.)
The trap is when there are more clicks than that … but still not enough for the difference to be statistically meaningful.
Here’s an example, from an adgroup I was looking at today:
Each ad has gotten some clicks. One ad has a 9.91% CTR, the other a 7.79% CTR.
Looks like ad 1 is the winner, right?
Well, the key is the actual number of clicks behind those CTRs. Ad 1 has 21 clicks; Ad 2 has 18 clicks.
Still a good “gap” in the number of clicks, right?
Well, after doing the analysis – you can use Splittester.com to do this – it actually comes in as NOT a significant difference.
The thing is that it’s really not that big of a difference in the number of clicks. It’s still within the realm of “being a fluke” that ad 1 is doing better.
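If you want to run the same kind of check yourself, here’s a minimal sketch using a standard two-proportion z-test. This is not Splittester’s exact method, just a common way to test whether two CTRs differ significantly. The impression counts are my own back-calculations from the CTRs in the post (21 / 0.0991 ≈ 212 and 18 / 0.0779 ≈ 231), not figures from the actual adgroup.

```python
import math

def two_proportion_z_test(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test on CTRs; returns (z, two-sided p-value)."""
    ctr_a = clicks_a / imps_a
    ctr_b = clicks_b / imps_b
    # Pooled CTR under the null hypothesis that both ads perform the same
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = (ctr_a - ctr_b) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Estimated impressions: ~212 for ad 1, ~231 for ad 2 (my assumption)
z, p = two_proportion_z_test(21, 212, 18, 231)
print(f"z = {z:.2f}, p = {p:.3f}")  # p is well above 0.05: not significant
```

With these numbers the p-value comes out around 0.4, which matches the conclusion above: a 9.91% vs 7.79% CTR on roughly 20 clicks each is still well within fluke territory.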
You may not need to do that kind of quick analysis that often. But it comes in handy. Especially when dealing with higher-cost clicks.
If an Adgroup is humming along and getting a higher volume of low-cost clicks, you can probably stick to the “eyeballing” method. If you see ad 1 with 60 clicks at 8% CTR, and ad 2 with 30 clicks at 4% CTR – I think we can safely say ad 1 is the winner!
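Just to sanity-check that eyeball call, here’s the same z-test applied inline to those numbers. The impression counts are again my own back-calculations (60 / 0.08 = 750 and 30 / 0.04 = 750), not real campaign data.

```python
import math

# Eyeball example: ad 1 with 60 clicks at 8% CTR, ad 2 with 30 clicks at
# 4% CTR. Impressions back-calculated as 750 each (my assumption).
clicks = (60, 30)
imps = (750, 750)
ctr1, ctr2 = clicks[0] / imps[0], clicks[1] / imps[1]
pooled = sum(clicks) / sum(imps)  # pooled CTR under the null hypothesis
se = math.sqrt(pooled * (1 - pooled) * (1 / imps[0] + 1 / imps[1]))
z = (ctr1 - ctr2) / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.4f}")  # p well below 0.05: significant
```

Here the p-value lands around 0.001, so in this case the eyeball verdict and the statistics agree: ad 1 really is the winner.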
If you’re paying $1 or more per click, and you’re not “made of money”, you’d be better off checking more often and running a statistical significance test (easy to do with Splittester) to make sure you don’t run your split test too long, or cut it off too soon.