Use prediction-based cohorts in your campaigns

A cohort based on a prediction can tell you which users are most likely to convert. To benefit from that prediction, target those users with an email or advertising campaign, or personalize an experience for them. After you save a cohort from a prediction, add it to a targeting campaign.

Prediction-based cohorts work best with three common campaign types:

  • Inclusion criteria: Decide which users to include in, and which to exclude from, a campaign.
  • Dynamic pricing: Adjust prices or discounts according to a user’s probability of becoming a high-LTV user.
  • Content personalization: Show the right content to the right user at the right time.

For inclusion criteria campaigns, start by excluding users who are unlikely to convert. This reduces unsubscribe rates in email campaigns and cost per acquisition (CPA) in ads. Next, include high-probability users who are most likely to convert, which improves ad efficiency and conversion rates. Finally, consider medium-probability users who are close to converting; targeting them optimizes for incremental conversions.
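As a minimal sketch, the three tiers above can be expressed as a threshold function over each user's predicted conversion probability. The 0.8 and 0.4 cutoffs here are illustrative assumptions, not Amplitude defaults; choose thresholds that match your own prediction distribution.

```python
# Hypothetical sketch: bucket users into campaign tiers by predicted
# conversion probability. Thresholds are illustrative, not Amplitude defaults.

def campaign_tier(probability: float) -> str:
    """Map a predicted conversion probability to a targeting tier."""
    if probability >= 0.8:
        return "include"    # high-probability: target for efficiency
    if probability >= 0.4:
        return "consider"   # medium-probability: optimize incremental conversions
    return "exclude"        # low-probability: suppress to cut unsubscribes and CPA

# Example: assign tiers to a few hypothetical users.
users = {"u1": 0.91, "u2": 0.55, "u3": 0.12}
tiers = {user_id: campaign_tier(p) for user_id, p in users.items()}
```

In practice you would export each tier as its own cohort rather than compute tiers by hand, but the same cutoff logic applies.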

In a dynamic pricing campaign, consider inverse pricing: give higher-probability users a lower discount or higher price, and give lower-probability users a higher discount or lower price. High-probability users are likely to convert even without a discount, so discounting them can needlessly reduce revenue.
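Inverse pricing can be sketched as a discount that scales down as the predicted probability goes up. The 25% maximum discount below is an illustrative assumption; substitute whatever ceiling your pricing allows.

```python
# Hypothetical sketch of inverse pricing: the higher a user's predicted
# conversion probability, the smaller the discount they receive.
# MAX_DISCOUNT is an illustrative assumption.

MAX_DISCOUNT = 0.25

def discount_for(probability: float) -> float:
    """Scale the discount inversely with conversion probability."""
    return round(MAX_DISCOUNT * (1.0 - probability), 4)

# A user predicted at 90% likely to convert gets a small discount (2.5%);
# a user at 10% gets close to the full discount (22.5%).
high_prob_discount = discount_for(0.9)
low_prob_discount = discount_for(0.1)
```

A linear scale is the simplest choice; a step function over your synced cohorts (for example, one discount per cohort) is equally valid and easier to operate in most campaign tools.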

In a content personalization campaign, predictive cohorts identify which users have the highest affinity for a product category. Use that information to show those users content from that category. You can personalize content in ads, emails, or on-site, depending on the needs of your users and your campaign goals.

Set up your campaign

After you decide on the campaign type and channel, sync the cohort. To sync the cohort and configure your campaign, follow these steps:

  1. From the cohort details page, click Sync and select the intended destination for the cohort. Then click Next.

  2. Select a one-time sync or schedule a recurring sync. With a recurring sync, Amplitude syncs updated user probabilities in your cohort to the destination. Then click Sync.

  3. Open your destination tool and find the synced cohort. Each platform understands and categorizes synced cohorts a little differently:

    • In Facebook, a synced cohort appears as a Custom Audience.
    • In Google, a synced cohort appears as a Customer List.
    • In Braze, a synced cohort appears as filter criteria, “Amplitude Cohorts,” within segment creation.
    • In Iterable, a synced cohort appears as a User List.

  4. Configure a separate, identical campaign for each user cohort. This lets you measure a campaign’s effect on each cohort separately. For example, in an upgrade campaign with two predictive cohorts, sync your cohorts to Braze. Choose an existing email campaign in Braze that has relevant upgrade messaging. Then duplicate the campaign: one campaign targets the top 20% cohort, and the other targets everyone else.

  5. Create a control or holdout group. This group contains users who are deliberately left out of the campaign. A control group lets you measure whether the email campaign increased purchases relative to no campaign. The control configuration differs by platform.

Your campaign is ready to run. Let it run for one to two weeks. If you have a smaller sample size or rare conversion events, run the campaign longer to give it a better chance of reaching statistical significance.

Measure your campaign’s results

After your campaign ends, analyze the results. For top-level metrics such as open rate, click rate, unsubscribe rate, and impressions, use the channel itself. Your channel reports these metrics at an aggregate campaign level for the campaign period and shows the lift relative to the control group. If the campaign generated lift, assess whether the campaign or another factor caused the lift.

You can also import campaign metrics into Amplitude if you want to analyze campaign results over time, or evaluate different conversion events at different attribution windows. (Your channel is unlikely to offer this sort of analysis.) Amplitude can import campaign data in the following ways:

  • Monitor UTM parameters automatically collected by the Amplitude SDK.
  • Set up a two-way sync with Braze or Iterable to import the data.
  • Download the control and variant segments from the channel, and then upload them as CSV cohorts into Amplitude.

After your campaign metrics are in Amplitude, first analyze whether the predictive cohorts behaved differently.

Load two segments that the campaign exposed: Predictive Cohort A, such as the top 20%, vs Predictive Cohort B, such as the bottom 80%. Then compare their event segmentation and funnel charts for conversion events. The cohorts should show some difference in behavior.

For example, in the screenshot below, engagement and conversion rates are higher for the top 20% predictive cohort.

Next, analyze whether the different user groups reacted to the campaign differently.

Load four segments: Predictive Cohort A (Variant) vs Predictive Cohort A (Control), and Predictive Cohort B (Variant) vs Predictive Cohort B (Control). Compare the difference in conversion rates between the variant and control segments of each cohort.

The difference between each variant and its control represents the lift from receiving the email campaign versus not receiving it. The difference between those two lifts, a difference-in-differences, is the relative lift. A nonzero relative lift means the campaign has different effects on different predictive cohorts.
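The arithmetic above is short enough to sketch directly. The conversion rates below are made up for illustration; only the calculation structure matters.

```python
# Illustrative difference-in-differences calculation with hypothetical
# conversion rates; the numbers are not from a real campaign.

def lift(variant_rate: float, control_rate: float) -> float:
    """Absolute lift of the variant over the control, in rate terms."""
    return variant_rate - control_rate

# Cohort A (top 20%): variant converts at 30%, control at 25% -> +5 pp lift
lift_a = lift(0.30, 0.25)

# Cohort B (bottom 80%): variant converts at 5%, control at 30% -> -25 pp lift
lift_b = lift(0.05, 0.30)

# Difference in the differences: the relative lift between cohorts.
relative_lift = lift_a - lift_b
```

Here the relative lift is +30 percentage points, meaning the campaign helps Cohort A while actively hurting Cohort B, the pattern described in the example below.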

In the screenshot below, Predictive Cohort A has +5% lift, but Predictive Cohort B has -25% lift. This means the campaign has a positive effect on the top 20% but a negative effect on the bottom 80%. Based on these results, stop emailing the bottom 80%.

Act on your campaign results

If both cohorts have similar organic conversion rates, the underlying predictive model may be flawed. Most Amplitude predictions are statistically accurate, and you should generally expect a higher conversion rate for the top 20% cohort.

If both cohorts have positive lift, the campaign intervention has an overall positive effect on conversions, with a higher impact on one group. In an ad campaign, reduce ad bids for the audience with lower lift or stop advertising to that audience. In an email campaign, increase email frequency for the audience with higher lift. In a product-based campaign, give lower discounts to the audience with lower lift.

If one cohort has positive lift and the other has negative lift, stop emailing, advertising to, offering discounts to, or showing campaign experiences to the audience with negative lift.

If both cohorts show no lift or negative lift from a campaign, the campaign is ineffective or actively reducing conversions. Stop the campaign, then identify the problem. The issue might be the channel, campaign content, underlying cohort definition, or some combination of all three.

If both cohorts show similar lift, the campaign has an equal effect on all users, regardless of their organic conversion rates. The predictive cohort doesn't add incremental gains, and segmenting by predicted likelihood doesn't benefit this campaign in this channel.

In all scenarios, use the result to improve the next campaign. Even when the results aren't what you expected, each campaign can teach you about your users and channels.
