Confirmation bias is one of the most pervasive tendencies in human nature.
Bestselling author and professor Michael Shermer sums up why we are so susceptible to confirmation bias: “Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons.”
Confirmation bias is the human tendency to interpret new information as confirmation of our existing beliefs and to ignore information that challenges them. For example, if you see a glowing object in the night sky and you’re a firm believer in UFOs, you might be convinced you’ve just spotted an alien spacecraft.
The presence of confirmation bias has been well-documented in everything from the 2016 U.S. election to scientific research. And product management, growth hacking, and analytics are definitely not immune to it. Here are some important examples of confirmation bias in product management and analytics, along with suggestions for how to avoid them.
1. You stop your A/B tests as soon as you see what you want.
You’re running an A/B test to understand which version of tool tip copy will convert more users on a new feature in your app.
You and your co-worker are having a bitter debate. You wrote one tool tip, she wrote the other. In healthy A/B test form, a friendly bet is forming. Your copy better win. You start the test and get busy on other work.
You decide to check in on the results two days later and see that your tool tip is converting users at a much higher rate than your co-worker’s. YES! You stop the test, pleased at the result and at your own conversion copywriting skills, and roll out the winning tip to 100% of users.
But think twice. By stopping the test early, you never allow it to reach the sample size needed for statistical significance. Stopping a test on a whim (likely the moment the results confirm your desire) means choosing a winner based on what you believe, not on a statistically sound outcome.
Sample size and statistical significance calculators like the one from Evan Miller or Optimizely can help avoid this all too common bias. If you’re looking for a refresher on statistical significance, check out this article from HBR.
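Calculators like Evan Miller’s are the easy route, but the math behind them is worth seeing at least once. Here’s a rough Python sketch of the standard two-proportion sample-size formula (an approximation, not necessarily the exact method any given calculator uses):

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(p_baseline, p_expected, alpha=0.05, power=0.8):
    """Approximate users needed PER VARIANT for a two-proportion z-test.

    p_baseline: your current conversion rate.
    p_expected: the lifted rate you hope to detect.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p_baseline + p_expected) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_baseline * (1 - p_baseline)
                                 + p_expected * (1 - p_expected))) ** 2
    return ceil(numerator / (p_baseline - p_expected) ** 2)

# Detecting a lift from 10% to 12% conversion takes roughly 3,800 users
# per variant -- likely far more than two days of traffic gives you.
print(sample_size_per_variant(0.10, 0.12))
```

Notice how the required sample size explodes as the lift you’re trying to detect gets smaller, which is exactly why a two-day peek at the dashboard proves nothing.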
2. You dig up one metric to prove success (or failure).
You launch a new campaign to increase user session duration, and you’re excited about it. Wanting to see that it’s working, you start looking at your dashboards for metrics that indicate users are sticking around — engagement rate, time in app, taps on specific areas of the app, completed events — and find that a few metrics are positive. You conclude your campaign is a success!
That is, until your teammate starts pointing out metrics that are being negatively influenced by this campaign — decreased 7-day retention, a drop in users completing profiles. In the big picture, your campaign is failing.
You wanted the campaign to work, so you dug up metrics to prove it did. Your co-worker thought the campaign was poorly executed and wanted it to fail, so he found metrics to prove it did. Both are instances of confirmation bias, and neither is helpful.
Related Reading: Breaking the Vanity Metric Cycle
Jared Spool, founder of User Interface Engineering, describes how a single statistic can be manipulated to fit different agendas:
Bounce rate is the most-cited statistic by people who are trying to validate their content decisions. “Our bounce rate is high, so we need to write better content.” Or, “Our bounce rate is high, which means people are coming and finding out exactly what they want. Our content’s good enough.” You pick which side of that argument you’re on, and then you can interpret bounce rate to support any argument you want.
When you look at data after you’ve made up your mind about what you want to find, you’re giving into confirmation bias every single time.
Instead, at the onset of any campaign or feature roll-out, set a clear framework to evaluate success. Come to agreement on which metrics you are tracking and why they matter to the campaign.
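One way to make that agreement concrete is to write the success criteria down in code before launch. This is a minimal sketch with hypothetical metric names and thresholds; the point is that the criteria are fixed up front, so nobody can cherry-pick afterwards:

```python
# Pre-registered success criteria, agreed on BEFORE the campaign launches.
# Metric names and thresholds here are illustrative, not from a real product.
SUCCESS_CRITERIA = {
    # metric: minimum change (in the metric's own units) to call it a win;
    # 0.0 entries are guardrails -- the metric must not drop.
    "avg_session_minutes": 0.5,
    "retention_7d": 0.0,
    "profiles_completed": 0.0,
}

def evaluate(before, after):
    """Return a per-metric pass/fail dict against the pre-registered criteria."""
    return {
        metric: (after[metric] - before[metric]) >= threshold
        for metric, threshold in SUCCESS_CRITERIA.items()
    }

before = {"avg_session_minutes": 4.0, "retention_7d": 0.31, "profiles_completed": 120}
after  = {"avg_session_minutes": 4.8, "retention_7d": 0.27, "profiles_completed": 95}
print(evaluate(before, after))
# Session time passed, but the retention and profile guardrails failed --
# the framework forces you to confront the metrics you didn't want to see.
```

Whether you encode it in a script or a shared doc, the mechanism is the same: the definition of success is locked in before any data exists to bias you.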
3. You only listen to the users you agree with.
You’re working on a product improvement project. You have a solid idea of what the project entails but want to gather more insight, so you head to the customer success world to gather user feedback.
Searching support tickets for comments on the specific product feature, you’re pleased to find numerous complaints about it. You feel you have ample evidence that the improvement you’re developing is the right thing to be working on.
This is confirmation bias. You wanted to feel good about the feature you’re working on, so you filtered customer “feedback” — specifically complaints — for affirmation. There’s a problem with relying on support tickets to understand sentiment: they often tell you how customers feel about interacting with support, not about the full experience with your product.
There’s a better way to get real customer feedback, one that Hiten Shah, founder of KISSmetrics and Crazy Egg, outlined in his SaaSFest 2016 talk. The key takeaway is: use a systematic and holistic approach to listen to your customers talk about your product and your competitor’s products in many locations, not just in customer support.
When starting Crazy Egg, Shah used a landing page survey and recorded a combination of multiple choice and open-ended questions to get a full picture of how customers felt about the available products. Then, he built a product that solved their pain points and provided value his competitors didn’t, because he listened to his customers instead of using them for validation.
4. You pick bad targets for split testing.
Your team is outlining goals for marketing programs — like your weekly email newsletter. You know split testing is essential to figuring out how to improve its performance. You decide to redesign the email template with the goal of increasing click-through rate, hoping for a quick win to gain momentum and show big percentage-point gains.
You set it up and send it out, and notice that one design raises your CTR by 45%! Win.
It depends. John Egan, growth engineer at Pinterest, writes on his blog: “One of the biggest mistakes I see […] when it comes to analyzing experiments is focusing too much on percentage gains.”
That’s because percentage gains, while they can sound exciting and impressive, don’t show you absolute impact, especially if you’re working with small numbers.
If 15 people clicked through before your refresh, a 45% increase is only about seven more clicks — not enough to move the needle. You ran the split test because you knew you would see a result from it. In other words, you acted on confirmation bias.
Instead of just focusing on the percentage gain, Egan suggests measuring “absolute numbers”. Absolute numbers, rather than percentage gains, can “[help you] compare and measure the true business impact of experiments […],” partially by curbing the appeal of seeing a big, empty percentage leap.
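A tiny illustration (with hypothetical numbers) of why reporting the absolute lift alongside the percentage keeps you honest:

```python
def report(name, clicks_before, clicks_after):
    """Format an experiment result with both percentage and absolute lift."""
    pct = (clicks_after - clicks_before) / clicks_before * 100
    absolute = clicks_after - clicks_before
    line = f"{name}: {pct:+.0f}% ({absolute:+d} clicks)"
    print(line)
    return line

report("Small newsletter", 15, 22)        # +47%, but only +7 clicks
report("Onboarding email", 12000, 12600)  # just +5%, yet +600 clicks
```

The “small newsletter” result sounds dramatic in percentage terms and trivial in absolute terms; the “onboarding email” result is the reverse. Seeing both numbers side by side is what curbs the bias.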
5. You inadvertently guide people through user testing.
There comes a time in every product or feature development when you bring people in to give it a test run. These sessions offer valuable feedback on how people actually use your product.
Be careful. User testing sessions are fertile grounds for confirmation bias.
Sometimes, folks who are supposed to be silent observers throw off testing results by offering to help the user. It’s only natural! You want your product to succeed, so you’ll do anything you can to help people use what you’ve built.
A study from a Turkish university found that even the questions software companies ask during testing can introduce bias. We are much more likely to form questions and scenarios that confirm our ideas about the product than to form questions that challenge them.
Biased questions are a common way for confirmation bias to make its way into research.
How to avoid it? Block the folks who built the product from running the testing session. People who design user tests should be impartial to the outcome. When writing test questions, follow these tips from User Testing to get unbiased answers. And make sure that any of your co-workers observing the test keep their mouths shut while it runs.
Kick confirmation bias for good
It’s easier to fall victim to confirmation bias than most of us would like to admit. We’ve probably all cherry-picked metrics or customer feedback, called an A/B test early, or asked a leading question a time or two. Understanding where confirmation bias pops up is the first step in kicking it out of our workflow.
Have you experienced any of these at your job? How did you realize it and correct yourself or your co-worker? We’d love to hear how you handle confirmation bias and what you do to avoid it.