Mastering Retention
Retention is critical for every product, whether you’re at a Fortune 500 company or a 5-person startup. Learn proven methods for building a data-informed retention strategy.
What To Do Next? Put Your Retention Work Into Action to Get Results
Learn how to prioritize experiments, gather useful qualitative feedback, and take real action with your data.
As we’ve reiterated in the past 7 chapters, the key to your product’s growth is retaining users. No matter how much effort you put into increasing top of funnel, if you have a leaky bucket, you simply won’t see true growth.
By now you know there’s no single silver bullet that will improve your retention overnight—but that’s okay.
Now that you’re intimately familiar with each part of the Retention Lifecycle, you’re equipped with proven concepts and frameworks that you can use to shape your long-term retention strategy. It’s time to start putting this playbook into action.
8.1 | Recap: The Retention Lifecycle Framework
Let’s start by recapping some of the most important concepts that we introduced in this playbook.
Defining your critical event
Counting a user as active or retained simply because they logged in or briefly opened up your app doesn’t reflect meaningful product usage. An active user should be engaging with your product in a way that allows them to get to its core value, and allows you to move the needle on your growth goals. That’s why it’s important to determine your critical event before you jump into retention analyses.
Recall from Chapter 2 that a critical event describes a user action that is significant to your product and indicates that they’re getting value from it.
Your critical event is defined by what your product is, who your users are, and how you expect them to use it. Refer back to Chapter 2 for some useful questions to ask yourself in order to determine your critical event.
Once you know your critical event, you should ask yourself how often you expect users to perform that event.
Determining your product usage interval
Not all products are built for daily use. Your retention analysis should take into account the cadence with which your users naturally use your app.
In Chapter 2, we also discussed a detailed method for determining your product usage frequency—that is, how often you expect users to come back and perform a critical event.
Most applications are built to be used on a daily, weekly, or monthly basis:
- A mobile game, social media, or messaging app typically has daily usage.
- A meditation or music streaming app may have weekly usage.
- An on-demand delivery app may have monthly usage.
The method detailed in Chapter 2 is a great way to quantitatively determine your product usage interval, but be sure to take your own product intuition into account and adjust for that if need be.
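If you want to sanity-check your usage interval outside of Amplitude, here’s a minimal sketch of one quantitative approach (not necessarily the exact method from Chapter 2): look at the typical gap between each user’s consecutive critical events. The input shape and the daily/weekly/monthly thresholds here are assumptions for illustration.

```python
from statistics import median

# Hypothetical input: per-user lists of critical-event datetimes, e.g.
# {"user_1": [datetime(2024, 1, 3), datetime(2024, 1, 9), ...], ...}
def estimate_usage_interval(events_by_user):
    """Estimate the product usage interval from the typical gap
    between each user's consecutive critical events."""
    per_user_gaps = []
    for timestamps in events_by_user.values():
        ts = sorted(timestamps)
        if len(ts) < 2:
            continue  # need at least two events to measure a gap
        gaps = [(b - a).days for a, b in zip(ts, ts[1:])]
        per_user_gaps.append(median(gaps))

    typical_gap = median(per_user_gaps)  # days between critical events
    if typical_gap <= 1:
        return "daily"
    elif typical_gap <= 7:
        return "weekly"
    return "monthly"
```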
Understanding how often users are inclined to engage meaningfully with your product is a prerequisite to using the Retention Lifecycle Framework.
The Retention Lifecycle Framework
At any given time, your active users are made up of:
- New Users: Users who are new to your app and are using it for the first time. For a weekly usage app, new users are those who are in their first week of using the app.
- Current Users: Users who have been consistently engaging with your app. For a weekly usage app, current users are those who are active this week and last week.
- Resurrected Users: Users who used to consistently engage with your app, became dormant (inactive) for some time, and then became active again. For a weekly usage app, resurrected users are those who were active at least two weeks prior, inactive last week (the dormant period), but are active again this week.
As you can see, the product usage interval helps to define the new, current, and resurrected phases for your product’s user base. Taking some time to break down your active users into these buckets—either manually or using Amplitude Lifecycle—can be a valuable way to see what proportion of your active users make up each phase.
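If you’d like to try the manual version, here’s a minimal sketch of that bucketing for a weekly usage product, assuming you’ve already grouped active user IDs by week (Amplitude Lifecycle handles this for you automatically):

```python
def classify_weekly_actives(active_by_week, week):
    """Split this week's active users into new, current, and
    resurrected, for a product with a weekly usage interval.

    active_by_week: dict mapping week index -> set of user IDs who
    performed the critical event that week (hypothetical input shape).
    """
    this_week = active_by_week.get(week, set())
    last_week = active_by_week.get(week - 1, set())
    # Every user who was active in any week before this one.
    ever_before = set().union(
        *(active_by_week.get(w, set()) for w in range(week))
    )

    new = this_week - ever_before                        # first week ever
    current = this_week & last_week                      # also active last week
    resurrected = (this_week & ever_before) - last_week  # back after dormancy
    return new, current, resurrected
```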
Your active users can flow through these three phases at any given time, a cycle illustrated by the Retention Lifecycle diagram you’ve seen throughout this playbook.
As we’ve said, the overall goal of the Retention Lifecycle Framework is to get more of your new and resurrected users to behave like your current users—even better if you can get your current users to become more highly engaged power users. This is because your current users are the ones who, through consistently engaging with your app, will be driving your growth as a business and increasing your revenue.
Before you can start developing and testing retention strategies based on this playbook, it’s extremely important to get a pulse on your current retention status.
8.2 | Next steps: your retention diagnostic
Benchmarking your retention
There’s only one app that you need to benchmark your retention against: your own. If you can measure improvements to your retention over time, then you know you are growing. That’s why, before you start uncovering trigger points and putting retention strategies into place, it’s critical to measure baseline retention metrics for your current, new, and resurrected users.
After determining your critical event and product usage interval, figure out what type of retention calculation works best for you. We cover the three ways of measuring retention—N-Day retention, unbounded retention, and bracket retention—and different use cases for each in Chapter 3.
Then, based on your usage interval, create cohorts of your new, current, and resurrected users and graph a retention curve for each of these groups of users. We recommend collecting at least 3 months of data before doing this step.
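As a rough illustration of the weekly analog of N-Day retention, here’s a sketch that computes a retention curve for a single cohort, reusing the hypothetical `active_by_week` shape from the sketch above:

```python
def n_week_retention_curve(cohort, cohort_start_week, active_by_week, max_weeks=12):
    """N-week retention: the share of a cohort that performed the
    critical event exactly N weeks after the cohort's start week.

    cohort: set of user IDs who started in cohort_start_week.
    """
    curve = []
    for n in range(max_weeks + 1):
        actives = active_by_week.get(cohort_start_week + n, set())
        curve.append(len(cohort & actives) / len(cohort))
    return curve  # curve[0] is 1.0 if the cohort is defined by week-0 activity
```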
Recall from Chapter 1 that the shape of these retention curves can tell you at a high level how your retention is trending for each of these groups of users. First, look at the shape of the current user retention curve—is it going straight to zero? Is it flattening off at some point? How many users are coming back at certain intervals of time? Do you need to focus on shifting the curve up, or flattening the curve?
Next, compare the shapes of your new user and resurrected user retention curves against your current user curve to get a high-level view of what’s happening in those groups.
Once you benchmark your retention metrics and diagnose any problems, it’s time to set retention goals.
Setting concrete retention goals
Brian Balfour, CEO of Reforge and previously VP of Growth at HubSpot, likens building retention to constructing a machine: if you set up the right processes and incentives, growth will basically run itself. That’s why it’s so critical to set retention goals for you and your team.
We recommend using a goal-setting system called OKRs—Objectives and Key Results—which was invented by Intel and popularized by Google. Your retention plan could look something like this:
- State your goals
- Set a timeframe (30 to 90 days)
- Assign three key results that you want to achieve related to retention. Each needs to be something you can measure, for example:
  - Improve retention by 20%
  - Improve retention by 2x
  - Improve retention by 10x
- Brainstorm the actionable initiatives you’ll use to hit your key results
Balfour recommends coming up with three key results separated out in terms of likelihood that they’ll succeed. You should have a 90% chance of achieving the first one, and a 50% and 10% chance of achieving the second and third, respectively. This ensures you have shorter term goals that can keep morale high but also have ‘reach’ goals that drive you and your team to go above and beyond.
There are lots of different retention tactics and strategies that you can spend time experimenting with. Using a framework like OKR allows you to stay laser-focused on your retention goals.
The importance of monitoring key metrics
Once you’ve decided on your goals and the metrics that you’ll use to track the progress toward those goals, you need to make these metrics easily visible to everyone on your team. Putting these metrics on a dashboard on a big screen in your office is a great way to keep the goal top of mind for everyone and keep your team aligned.
In fact, a recent survey showed that companies that set and track key metrics are more likely to reach their goals, and that teams that track these metrics in real time are 2x as likely to reach those goals as those that don’t.
Case Study: Mobile gaming company—’Quest’ Mode and Ad Ramp Up
When one of our mobile gaming customers completed the playbook process, a few insights really stood out and resulted in experiments or product changes that they quickly implemented.
When to show ads?
Part of the company’s revenue comes from showing ads in their mobile apps. They noticed that for new and resurrected users, seeing ads on their first day had a negative correlation with retention. However, ad impressions didn’t seem to have any impact on retention if the first ads were seen on a user’s second or third day.
They hypothesized that ads were a distraction and a negative experience for new (or newly resurrected) users, but that once a user was hooked on playing games, they didn’t mind seeing ads. Based on this finding, the team decided to test showing no ads to new users on their first day and starting to show them on the second day.
Getting new users to play 6 games
During new user analysis (Chapter 6), they found that one of the behavioral drivers for successfully passing the Onboarding phase was to play at least 6 games on Day 0. As a result, the company decided to test implementing a quest-based system in the new user experience. A "quest" would encourage users to complete a string of games to unlock a reward.
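A quick way to check a behavioral driver like this outside of Amplitude is to split new users at the threshold and compare their retention. A sketch with a hypothetical input shape, using the case study’s 6-game threshold:

```python
def retention_by_day0_games(users, threshold=6):
    """Compare Week-1 retention for new users who played at least
    `threshold` games on Day 0 against everyone else.

    users: list of dicts like
      {"day0_games": 7, "retained_week1": True}  (hypothetical shape)
    """
    hit = [u for u in users if u["day0_games"] >= threshold]
    miss = [u for u in users if u["day0_games"] < threshold]

    def rate(group):
        return sum(u["retained_week1"] for u in group) / len(group) if group else 0.0

    return rate(hit), rate(miss)
```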
Create, share, and subscribe to dashboards in Amplitude
Monitoring and sharing data is a crucial part of forming a data-informed company, which is why we make it easy to pin Amplitude charts to dashboards and share these with your team.
You can also share dashboards with teams and set up regular email reports to go out to the right people.
Of course, the metrics you choose to track over time depend entirely on the goals you set. We’ll provide some recommendations to choose from or adapt—the best metrics will be the ones that are custom to your business.
Retention Lifecycle Breakdown—Lifecycle & Pulse
Any company will benefit from measuring their active user makeup using Lifecycle or a similar framework, which we discussed in Chapter 3. We recommend graphing your lifecycle breakdown and pulse metric every period and comparing how each user stage is changing period over period.
This can help you easily course-correct if you notice concerns like an increase in dormant users or a dip in current users. You can also track whether the overall health of your user base is improving as you implement your new retention strategies.
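Building on the classification sketch from earlier, tracking the breakdown period over period is just a matter of counting each bucket per week; a minimal sketch:

```python
def lifecycle_breakdown_over_time(active_by_week, weeks):
    """Count new / current / resurrected users each week so you can
    watch the mix shift period over period."""
    rows = []
    for week in weeks:
        new, current, resurrected = classify_weekly_actives(active_by_week, week)
        rows.append((week, len(new), len(current), len(resurrected)))
    return rows
```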
As you work on the goals of the Retention Lifecycle Framework, Lifecycle shows you how you’re doing at all four at a glance:
- Activating new users to current users
- Retaining current users
- Resurrecting dormant users
- Reactivating resurrected users to current users
Retention lifecycle metrics to track over time
At the end of Chapters 5-7, we provided recommended metrics to track over time to see how you’re doing at improving current, new, and resurrected user retention; refer back to those chapters for the full list.
8.3 | Prioritizing experiments
After working through all the chapters of the playbook, you’ll have lots of insights that lead to ideas about what might help you improve retention. How do you decide what to work on first? It’s easy to get paralyzed by all of the possibilities.
Prioritizing your growth ideas is crucial to making progress toward your goals.
There are some great resources out there on frameworks for prioritizing growth experiments, so we won’t reinvent the wheel. In particular, both Brian Balfour and Sean Ellis, founder of GrowthHackers.com, have shared their processes, and we highly recommend you check out their work for more details.
Ultimately, you need to pick a process that works for your company and use it to keep accountable to your goals.
As Balfour puts it: "There is no one right or perfect growth process. The important part is just to have one, stick to it, and improve it over time."
Here are some principles, drawn mainly from Balfour’s and Ellis’ processes, which share many similar components.
1) Brainstorm and keep a backlog of ideas
First, start with a central place to keep a backlog of all of your ideas. A simple spreadsheet works great for this.
2) Prioritize
To prioritize which ideas to work on first, you can score each idea on a few factors. Sean Ellis’ ICE score is a great way to rate ideas on three key factors:
Impact
What’s your hypothesis on the impact this experiment will have? You can use quantitative data from your playbook analysis and previous experiments to back this up, or qualitative data from user feedback.
By identifying the expected outcome or value of making a change, you have something quantitative to prioritize experiments and measure your actual results against. Balfour recommends thinking about your hypotheses like this:
If successful, (variable) will increase by (impact), because (assumptions).
For retention specifically, multiply the expected increase in retention by the number of users who would be impacted—that gives you a sense of how much the change could impact your overall retention.
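As a hypothetical illustration: if you expect a change to lift Week-1 retention from 30% to 35% for the 40% of new users it reaches, the expected lift to overall new user retention is roughly 0.40 × 5 = 2 percentage points.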
Confidence
How sure are you that your hypothesis is correct? This one can be a little hard to decide, especially when you’re starting out, but as you run more experiments over time, it gets easier.
If you have a lot of data to back up the experiment, you’d assign a higher confidence score. If the idea is based more on a hunch or is something completely new, you’d give it a lower score.
Ease
How much work will implementing this experiment take? Think about the time it will take from each team involved, like design, marketing, product, and engineering.
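Here’s a minimal sketch of an ICE-scored backlog. The 1-10 scale and simple averaging are common conventions rather than a prescribed formula, and the example ideas are hypothetical:

```python
# Hypothetical backlog entries: (idea, impact, confidence, ease), each rated 1-10.
backlog = [
    ("Prompt daily reminder after first session", 8, 7, 6),
    ("Delay first ad to day two", 7, 6, 8),
    ("Quest system for new users", 9, 5, 3),
]

def ice_score(impact, confidence, ease):
    """Average the three factors; some teams multiply them instead."""
    return (impact + confidence + ease) / 3

# Work the highest-scoring ideas first.
ranked = sorted(backlog, key=lambda row: ice_score(*row[1:]), reverse=True)
for idea, i, c, e in ranked:
    print(f"{ice_score(i, c, e):4.1f}  {idea}")
```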
3) Run experiments
Once you’ve prioritized your ideas based on impact, confidence, and ease (and any other factors that might matter to you), it’s time to design and run those experiments.
This reference provides a template of an experiment doc based on Balfour’s process, which you can make a copy of to use for your own team.
When designing your experiments, Balfour advises coming up with the ‘minimum viable test’ to understand your hypothesis. Make sure you also take into account the sample size of users you’ll need to see a significant result.
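For a rough sense of sample size, you can use the standard normal-approximation formula for a two-proportion test; a sketch using only the Python standard library:

```python
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_group(p_control, p_variant, alpha=0.05, power=0.80):
    """Approximate users needed per group to detect a retention lift
    with a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p_control + p_variant) / 2
    numerator = (
        z_alpha * sqrt(2 * p_bar * (1 - p_bar))
        + z_beta * sqrt(p_control * (1 - p_control) + p_variant * (1 - p_variant))
    ) ** 2
    return ceil(numerator / (p_variant - p_control) ** 2)

# e.g. detecting a lift from 30% to 33% Week-1 retention:
print(sample_size_per_group(0.30, 0.33))  # ~3,763 users per group
```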
4) Analyze and share results
After each experiment, compare the results to your original hypothesis. How close did you get? What impact did you see on retention? Most importantly, why did you get the result that you did? Make sure to record learnings and any action items, like rolling out a positive experiment to the whole user base.
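To check whether an observed retention difference is likely real rather than noise, a two-proportion z-test is a common choice; a sketch with hypothetical counts:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(retained_a, n_a, retained_b, n_b):
    """Two-sided z-test for a difference in two retention rates."""
    p_a, p_b = retained_a / n_a, retained_b / n_b
    p_pool = (retained_a + retained_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: control retained 1,200 of 4,000; variant 1,380 of 4,000.
z, p = two_proportion_z_test(1200, 4000, 1380, 4000)
print(f"z = {z:.2f}, p = {p:.4f}")
```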
5) Keep up a cadence
Both Ellis and Balfour recommend having a weekly growth meeting to discuss experiments, their results, and action items. In addition, they talk about the importance of keeping up a regular cadence of experiments. Ellis is a proponent of the idea of "high tempo testing"—in short, the more tests you run, the more you learn. The faster you can run tests, the faster you can learn, adjust, and ultimately drive growth.
Make sure you evaluate experiments on a regular interval and readjust your goals as necessary.
Case Study: Mindfulness app: increasing retention 3x
One of the customers that we’ve discussed throughout this playbook creates a mindfulness app. When they did the current user retention analysis (Chapter 5), they found that people who set a daily reminder (‘Alert Savers’) had about 3x the retention of users who did not set a daily reminder.
At the time, the Daily Reminder feature was buried at the bottom of the Settings page—most users never even found it. Since such a small number of their users (1% of current users) were setting an alert, they couldn’t know whether the relationship was causal. It could be that the power users of their app, who would have been well retained anyway, were the ones digging into the Settings page and finding the Reminders feature.
Prompting more users to save a daily reminder
After finding this huge positive increase in retention that correlated with setting a reminder, they ran an experiment to test whether getting users to set a daily reminder would increase their retention. In the test, after a user completed their first meditation session, they were immediately shown a screen encouraging them to set a daily reminder.
The Result?
People who set a reminder from the new prompt saw a boost in retention equal to that of users who had previously found the reminder feature on their own, indicating that the relationship between daily reminders and retention was causal, not just correlational.
In addition, 40% of users who saw the prompt went on to set a daily reminder, so the new prompt provides a big boost to overall new user retention. Based on these results, the company rolled out the new reminder prompt to all of their users.
Measuring experiment results in Amplitude
In Amplitude, you can view experiment results by sending relevant details, like the experiment name and experimental group, as user properties. We also integrate with popular split testing platforms like Optimizely.
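As an illustration, here’s a minimal sketch of setting those user properties through Amplitude’s HTTP V2 API (endpoint and payload shape per Amplitude’s public docs; the event name and property key are hypothetical, and in practice you’d more likely use an SDK or your A/B testing integration):

```python
import time
import requests

AMPLITUDE_HTTP_API = "https://api2.amplitude.com/2/httpapi"  # HTTP V2 endpoint

def tag_experiment_group(api_key, user_id, experiment, variant):
    """Attach an experiment name and assigned group to a user as
    user properties, so charts can be segmented by variant."""
    event = {
        "user_id": user_id,
        "event_type": "experiment_assigned",  # hypothetical event name
        "time": int(time.time() * 1000),      # milliseconds since epoch
        "user_properties": {experiment: variant},
    }
    resp = requests.post(
        AMPLITUDE_HTTP_API,
        json={"api_key": api_key, "events": [event]},
        timeout=10,
    )
    resp.raise_for_status()

# e.g. tag_experiment_group(API_KEY, "user_42", "reminder_prompt_test", "variant_1")
```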
Here’s a retention graph showing the difference between the Control and Variant #1.
For more details, we recommend checking out this article: How to Analyze A/B Test Results in Amplitude.
Example: Building Amplitude 2.0 with quantitative and qualitative data
In October 2016, Amplitude launched a completely new redesign of its user interface.
One of our goals with this redesign was to make analytics more accessible to everyone across an organization, not just the head data scientist or product person. This meant really understanding why different users care about analytics and how they get what they need in Amplitude. We used both quantitative and qualitative user data to identify exactly who we were building our product for.
Quantitative data: Using Amplitude’s Personas feature (which we described in Chapter 4), we identified several different behavioral personas among our current users.
Qualitative data: We went out into the field and interviewed analytics users, asking open-ended questions like:
- How does analytics affect your day?
- What do you use data for?
- What’s your typical analytics workflow?
Through quantitative means, we grouped our users into different clusters based on the actions they performed in the platform; through qualitative means, we assigned identities and characteristics to these personas. Ultimately, this research informed the new product and design choices made in Amplitude 2.0.
8.4 | Qualitative feedback matters
Building a product with strong retention is about listening to your users, both qualitatively and quantitatively.
While this playbook emphasizes quantitative processes, qualitative feedback also adds value to your analyses. To holistically understand how users engage with your product, try supplementing your analytics insights with direct means of talking to your users. Some ideas include:
- Conducting user interviews to understand common flows users take through the product and to identify different behavioral personas
- Organizing focus groups to test out a new feature or service
- Sending feedback surveys to current and/or dormant users
- Directly talking to users who drop off at certain points in your critical path funnel to understand why
Quantitative and qualitative data complement each other; your behavior data can inform the type of qualitative data you seek, and your qualitative feedback can be validated (or not) with analytics.
Throughout the process of putting this playbook into action, it’s also worth thinking about ways to communicate directly with your customers and when it makes sense to do so.
8.5 | Frequency of repeating this playbook
How often you repeat the playbook process depends on the nature of your product and how often you update it or launch new features.
Here are some of the situations in which we recommend repeating the playbook analyses, in full or in part:
- You launch a major product update or new feature.
- If your product has seasonality (for example, a product that gets much heavier usage during the school year and less during the summer), you might run the playbook process for different times of the year: summer, beginning of the school year, and sometime in the middle during peak usage.
- You gain a significant new source of users (for example, you start a referral program or start advertising on Twitter), and want to understand how those users behave and retain relative to others.
In the absence of any of these situations, we recommend monitoring your key metrics with every product release, and then running the playbook on a less frequent basis. For example, our company sets goals quarterly, so running the playbook quarterly might be a good cadence for us.
Even if you don’t run through the whole playbook, we recommend looking at your behavioral personas for new, current, and resurrected users at least once a quarter, to make sure you’re always up to date on how users are behaving.
8.6 | Take action
By now, you’re hopefully well-equipped with the right tools and frameworks to analyze your product’s retention at all stages of the retention lifecycle. It’s now your turn to put the playbook into practice and start changing the shape of your retention curve.
Before getting to work, take a moment to:
- Review the case studies in this chapter for examples of how our customers have utilized the whole playbook
- Review the main concepts of each chapter
- Complete the worksheets for each chapter
You’re now ready to take your product’s growth to the next level!
Further Reading
The Amplitude Guide to Customer Retention: 40+ Resources to Increase Retention