Correlation vs Causation: Understand the Difference for Your Product

Correlation and causation can exist simultaneously, but correlation doesn't mean causation. Learn how to test for causation with experimentation.

Best Practices
September 20, 2019
Archana Madhavan
Instructional Designer

Correlation and causation can seem deceptively similar, but recognizing their differences is crucial to understanding relationships between variables. In this article, we’ll give you a clear definition of the difference between causation and correlation.

Next, we’ll focus on correlation and causation specifically for building digital products and understanding user behavior. Product managers, data scientists, and analysts will find this helpful for leveraging the right insights to increase product growth, such as whether certain features impact customer retention or engagement. Understanding correlation versus causation can be the difference between wasting efforts on low-value features and creating a product that your customers can’t stop raving about.

And even if you’re not in the product world, we think you’ll benefit from understanding how to tell the difference between correlation and causation.

After reading this article, you will:

  • Understand what correlation is
  • Understand what causation is
  • Know the key differences between correlation and causation
  • Know two robust solutions you can use to test for causation

What’s the difference between correlation and causation?

While causation and correlation can exist simultaneously, correlation does not imply causation. Causation means one thing causes another—in other words, action A causes outcome B. On the other hand, correlation is simply a relationship where action A relates to action B—but one event doesn’t necessarily cause the other event to happen.


Take a classic example: there’s a correlation between eating ice cream and getting sunburned because the two events are related. But neither event actually causes the other. Instead, both are caused by a third factor: sunny weather.

Correlation and causation are often confused because the human mind likes to find explanations for seemingly related events, even when they do not exist. We often fabricate these explanations when two variables appear to be so closely associated that one is dependent on the other. That would imply a cause-and-effect relationship, where one event is the result of another event.

However, we cannot simply assume causation even if we see two events happening, seemingly together, before our eyes. Why? First, our observations are purely anecdotal. Second, there are several other possibilities for an association, including:

  • The opposite is true: B actually causes A.
  • The two are correlated, but there’s more to it: A and B are correlated, but they’re actually caused by C.
  • There’s another variable involved: A does cause B—as long as D happens.
  • There is a chain reaction: A causes E, and E in turn causes B (even though it looked to you like A directly caused B).

An example of correlation vs. causation in product analytics

You might expect to find causality in your product, where specific user actions or behaviors result in a particular outcome.

Picture this: You just launched a new version of your music-streaming mobile app. You hypothesize that customer retention for your product is linked to in-app social behaviors. You ask your team to develop a new feature that allows users to join “communities.”

A month after you release your new communities feature, adoption sits at about 20% of all users. You’re curious whether communities impact retention, so you create two equally sized cohorts of randomly sampled users: one contains only users who joined a community, and the other only users who did not.

Your analysis reveals a shocking finding: Users who joined at least one community have higher retention than those who did not join a community.


Amplitude’s Retention Analysis chart. Try creating one yourself with our free self-service demo.

In the chart above, nearly 95% of those who joined a community (blue) are still around in Week 2 compared to 55% of those who did not (green). By Week 7, you see 85% retention for those who joined a community and 25% retention for those who did not. These results seem like a massive coup.
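
If you want to sketch this kind of retention analysis outside of Amplitude, here is a minimal example in pandas. The table schema and numbers are hypothetical; week-N retention is simply the share of each cohort still active in week N.

```python
import pandas as pd

# Hypothetical user table: cohort flag plus how many weeks each user stayed active.
users = pd.DataFrame({
    "user_id": range(1, 9),
    "joined_community": [True, True, True, True, False, False, False, False],
    "weeks_retained": [7, 6, 7, 3, 1, 2, 7, 1],
})

# Week-N retention = share of the cohort still active in week N or later.
for week in (2, 7):
    rates = (
        users.assign(retained=users["weeks_retained"] >= week)
        .groupby("joined_community")["retained"]
        .mean()
    )
    print(f"Week {week}: joined {rates[True]:.0%}, did not join {rates[False]:.0%}")
```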

But hold on. The logical part of you knows that you don’t have enough information to conclude whether joining communities causes better retention. All you know is that the two are correlated. In fact, they could both be caused by some other unknown factor.


In this example, joining communities and higher retention are correlated, but there could be a third factor causing both.
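
To make the pitfall concrete, here is a small simulation (the “power user” trait and all effect sizes are invented for illustration). A hidden factor drives both community joining and retention, so the two come out strongly correlated even though, by construction, neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hidden confounder: some users are simply more engaged overall.
power_user = rng.random(n) < 0.3

# The confounder drives BOTH behaviors; joining has no effect on retention here.
joined_community = rng.random(n) < np.where(power_user, 0.7, 0.1)
retained = rng.random(n) < np.where(power_user, 0.9, 0.2)

# Yet a naive cohort comparison shows a large retention gap.
print("Retention, joined:    ", round(retained[joined_community].mean(), 2))
print("Retention, not joined:", round(retained[~joined_community].mean(), 2))
```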

How to test for causation in your product

Uncovering a causal relationship doesn’t happen by accident; it takes deliberate testing.

It might be tempting to associate two variables as “cause and effect.” But doing so without confirming causality in a robust analysis can lead to a false positive—a causal relationship seems to exist but isn’t actually there. A false positive can occur if you don’t extensively test the relationship between a dependent and an independent variable.

False positives are problematic for product insights because you might incorrectly think you understand the link between important outcomes and user behaviors. For example, you might think you know which key activation event results in long-term user retention, but without rigorous testing, you risk making critical product decisions based on the wrong user behavior.

Run robust experiments to determine causation

Once you find a correlation, you can test for causation by running experiments that control the other variables while measuring the difference in outcomes.

You can use these two experiments or analyses to identify causation within your product:

  • Hypothesis testing
  • A/B/n experiments

1. Hypothesis testing

The most basic hypothesis test will involve an H0 (null hypothesis) and an H1 (your primary hypothesis). You can also have a secondary hypothesis, tertiary hypothesis, and so on.

The null hypothesis is the opposite of your primary hypothesis. Why? Because you can never prove your primary hypothesis with complete certainty; statistical tests don’t prove hypotheses. What they can do is give you enough evidence to reject your null hypothesis at a chosen confidence level.

The primary hypothesis points to the causal relationship you’re researching and should identify a cause (independent variable or exposure variable) and an effect (dependent variable or outcome variable).

It’s best to first create your H1, then specify its opposite and use that for your H0. Your H1 should identify the relationship you’re expecting between your independent and dependent variables.

If we use the previous example and look at the impact of in-app social features on retention, your independent variable would be “joining a community,” and your dependent variable would be “retention.” Your primary hypothesis might be:

H1: If a user joins a community within our product in the first month, then they will remain a customer for more than one year.

Then, negate your H1 to generate your null hypothesis:

H0: There is no relationship between joining a community and user retention.

The goal is to observe whether there is an actual difference between your different hypotheses. If you can reject the null hypothesis with statistical significance (ideally with a minimum of 95% confidence), you are closer to understanding the relationship between your independent and dependent variables.

In the music-streaming example above, if you can reject the null hypothesis by finding that joining a community resulted in higher retention rates (while adjusting for confounding variables that could influence your results), then you can likely conclude that there is some relationship between joining a community and user retention.

To test this hypothesis, fit a statistical model that reflects the relationship between your expected cause (independent or exposure variable) and effect (dependent or outcome variable). If your model lets you plug in a value for your exposure variable and consistently returns an outcome that reflects actual observed data, you are probably onto something.
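
As a minimal sketch, here’s how you might test H0 with a two-proportion z-test in Python. The retention counts are invented for illustration; the test itself comes from the statsmodels library.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: users retained past one year in each cohort.
retained = [850, 250]        # [joined a community, did not join]
cohort_sizes = [1000, 1000]  # users per cohort

# H0: the retention rate is the same in both cohorts.
z_stat, p_value = proportions_ztest(retained, cohort_sizes)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# At 95% confidence, reject H0 when p < 0.05.
if p_value < 0.05:
    print("Reject H0: retention differs between the cohorts.")
else:
    print("Cannot reject H0.")
```

Rejecting H0 here tells you the rates differ; it’s the randomized experiment below that isolates whether joining a community is the cause.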

When to use hypothesis testing

Hypothesis testing is helpful when trying to identify whether a relationship exists between two variables, rather than looking at anecdotal evidence. You might want to look at historical data to run a longitudinal analysis of changes over time. For example, you might investigate whether first adopters for product launches are your biggest promoters. You can look at referral patterns and also compare this relationship to product launches over time.

Or, you might run a cross-sectional analysis that analyzes a snapshot of data. This analysis is helpful when looking at the effects of a specific exposure and outcome, rather than trend changes over a period. For example, you might explore the relationship between holiday-specific promotions and sales.

2. A/B/n Experimentation

A/B/n testing, or split testing, can take you from correlation to causation. Hold everything else constant, change one variable so you have different versions (variant A, variant B, and so on), and see what happens. If your outcome consistently changes with the version, you’ve found the variable that makes the difference.


Two variants for a website layout—variant A and variant B

When making a case that joining a community leads to higher retention rates, you must eliminate all other variables that could influence the outcome. For example, users could have taken a different path that ultimately affected retention.

To test whether there’s causation, you’ll have to find a direct link between users joining a community and using your app long-term.

Start with your onboarding flow. For the next 1,000 users who sign up, split them into two groups. Force the first half to join a community when they sign up (variant A) and the other half not to (variant B). Run the experiment for 30 days using an experimentation tool like Amplitude Experiment, then compare retention rates between the two groups.
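
As a minimal sketch of the random assignment step, assuming you bucket users yourself rather than inside Amplitude Experiment (the function name and 50/50 split are illustrative):

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "community-onboarding") -> str:
    """Deterministically assign a user to a variant with a 50/50 split.

    Hashing the user ID together with the experiment name keeps the
    assignment stable across sessions and independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "A" if bucket < 50 else "B"  # A: joins a community; B: does not

# Example: assign the next signups as they arrive.
for user_id in ["user-1001", "user-1002", "user-1003"]:
    print(user_id, "->", assign_variant(user_id))
```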

Suppose you find that the group required to join a community retains at a significantly higher rate. In that case, you have evidence of a causal relationship between joining a community and retention. This relationship is probably worth digging into with a product analytics tool like Amplitude Analytics to understand why communities drive retention.

You can’t be confident of a causal relationship until you run these types of experiments.

When to use A/B/n testing

A/B/n testing is ideal for comparing the impact of different variations (variant A, variant B, and so on) for campaigns, product features, content strategies, and more. For example, a split test of your product’s onboarding flow might compare variations in:

  • Copy
  • Graphics (stock photos vs. custom illustrations)
  • The number of fields in the sign-up form
  • Personalization (name, company, and industry details)

After running multiple product onboarding variations, you can look at the results and compare metrics such as drop-off rate, conversion, and retention.

Learn more about metrics you can track in The Amplitude Guide to Product Metrics.

Act on the right correlations for sustained product growth

We’re always looking for explanations around us and trying to interpret what we see. However, unless you can clearly identify causation, you should assume that you only see a correlation.

Events that seem to connect based on common sense can’t be seen as causal unless you can prove a clear and direct connection. And while causation and correlation can exist simultaneously, correlation doesn’t mean causation.

The more adept you become at separating real causation from mere correlation in your product, the better you’ll be able to prioritize your product investments and improve retention. Read our Mastering Retention Playbook for expert advice on tools, strategies, and real-world examples for growing your product with a strong retention strategy.

About the Author
Archana Madhavan
Instructional Designer
Archana is an Instructional Designer on the Customer Education team at Amplitude. She develops educational content and courses to help Amplitude users better analyze their customer data to build better products.
