Learn how to prioritize experiments, gather useful, qualitative feedback, and take real action with your data.
As we’ve reiterated in the past 7 chapters, the key to your product’s growth is retaining users. No matter how much effort you put into increasing top of funnel, if you have a leaky bucket, you simply won’t see true growth.
By now you know there’s no single silver bullet that will improve your retention overnight—but that’s okay.
Now that you're intimately familiar with each part of the Retention Lifecycle, you're equipped with proven concepts and frameworks that you can use to shape your long-term retention strategy. It's time to start putting this playbook into action.
Let’s start by recapping some of the most important concepts that we introduced in this playbook.
Counting a user as active or retained simply because they logged in or briefly opened up your app doesn’t reflect meaningful product usage. An active user should be engaging with your product in a way that allows them to get to its core value, and allows you to move the needle on your growth goals. That’s why it’s important to determine your critical event before you jump into retention analyses.
Recall from Chapter 2 that a critical event describes a user action that is significant to your product and indicates that they’re getting value from it.
Your critical event is defined by what your product is, who your users are and how you expect them to use the product. Refer back to Chapter 2 for some useful questions to ask yourself in order to determine your critical event.
| Type of Application | Example Critical Events |
| --- | --- |
| Music streaming app (e.g. Spotify) | Playing a song, creating a playlist |
| Online marketplace (e.g. Etsy) | Making a purchase |
| Messaging app (e.g. Slack) | Sending a message |
| Social media app (e.g. Facebook) | Adding friends, posting a comment |
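To make the distinction between "logged in" and "truly active" concrete, here is a toy sketch. The event names and the `play_song` critical event are hypothetical, modeled on the music-streaming example above, not a prescribed schema:

```python
from datetime import date

# Toy event log of (user_id, event_name, event_date)
events = [
    ("u1", "session_start", date(2024, 1, 5)),
    ("u1", "play_song",     date(2024, 1, 5)),
    ("u2", "session_start", date(2024, 1, 5)),  # opened the app, played nothing
    ("u3", "session_start", date(2024, 1, 5)),
    ("u3", "play_song",     date(2024, 1, 5)),
]

logged_in    = {user for user, name, _ in events if name == "session_start"}
truly_active = {user for user, name, _ in events if name == "play_song"}

# Login-based counts overstate meaningful activity
print(len(logged_in), len(truly_active))
```

Counting by critical event rather than by login is what keeps "active user" aligned with the value your product actually delivers.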
Once you know your critical event, you should ask yourself how often you expect users to perform that event.
Not all products are built for daily use. Your retention analysis should take into account the cadence with which your users naturally use your app.
In Chapter 2, we also discussed a detailed method for determining your product usage frequency—that is, how often you expect users to come back and perform a critical event.
Most applications are built to be used on a daily, weekly, or monthly basis.
The method detailed in Chapter 2 is a great way to quantitatively determine your product usage interval, but be sure to take your own product intuition into account and adjust for that if need be.
Understanding how often users are inclined to engage meaningfully with your product is a prerequisite to using the Retention Lifecycle Framework.
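One rough, data-driven way to approximate the usage interval (a sketch, not the full Chapter 2 method) is to look at the median gap between each user's consecutive critical events. The event dates below are made up for illustration:

```python
from datetime import date
from statistics import median

# Hypothetical dates on which each user performed the critical event
event_dates = {
    "u1": [date(2024, 1, 1), date(2024, 1, 2), date(2024, 1, 3)],   # daily-ish
    "u2": [date(2024, 1, 1), date(2024, 1, 8), date(2024, 1, 15)],  # weekly-ish
    "u3": [date(2024, 1, 1), date(2024, 1, 30)],                    # monthly-ish
}

def typical_gap_days(dates):
    """Median gap, in days, between a user's consecutive critical events."""
    dates = sorted(dates)
    return median((later - earlier).days for earlier, later in zip(dates, dates[1:]))

per_user = {user: typical_gap_days(dates) for user, dates in event_dates.items()}
overall = median(per_user.values())  # one candidate for the product usage interval
print(per_user, overall)
```

As the chapter notes, treat a number like this as a starting point and adjust it with your own product intuition.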
At any given time, your active users are made up of:

- New users, who performed your critical event for the first time during the current usage interval
- Current users, who performed your critical event in both the current usage interval and the one before it
- Resurrected users, who performed your critical event in the current interval after a period of dormancy
As you can see, the product usage interval helps to define the new, current, and resurrected phases for your product’s user base. Taking some time to break down your active users into these buckets—either manually or using Amplitude Lifecycle—can be a valuable way to see what proportion of your active users make up each phase.
Your active users can flow through these three phases at any given time. By now you’re hopefully familiar with this diagram:
As we’ve said, the overall goal of the Retention Lifecycle Framework is to get more of your new and resurrected users to behave like your current users—even better if you can get your current users to become more highly engaged power users. This is because your current users are the ones who, through consistently engaging with your app, will be driving your growth as a business and increasing your revenue.
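The phase definitions above can be sketched as a small classification function. This assumes activity is tracked in discrete usage intervals ("periods") and that "current" means active in both this period and the previous one; adjust the rules to match your own definitions:

```python
def classify(first_seen_period, active_periods, this_period):
    """Bucket a user for `this_period` based on which periods they were active in.

    `active_periods` is the set of usage intervals (e.g. week numbers) in which
    the user performed the critical event; `first_seen_period` is their first one.
    """
    if this_period not in active_periods:
        return "dormant"       # inactive this period
    if first_seen_period == this_period:
        return "new"           # first-ever critical event this period
    if this_period - 1 in active_periods:
        return "current"       # active last period and again this period
    return "resurrected"       # back after skipping at least one period

# Hypothetical users, observed in week 5:
print(classify(5, {5}, 5))              # new
print(classify(1, {1, 2, 3, 4, 5}, 5))  # current
print(classify(1, {1, 5}, 5))           # resurrected
```

Running a bucketing like this every period gives you the same breakdown that Amplitude Lifecycle charts automatically.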
Before you can start developing and testing retention strategies based off this playbook, it’s extremely important to get a pulse on your current retention status.
There's only one app that you need to benchmark your retention against: your own. If you can measure improvements to your retention over time, then you know you are growing. That's why, before you start uncovering trigger points and putting retention strategies into place, it's critical to measure baseline retention metrics for your current, new, and resurrected users.
After determining your critical event and product usage interval, figure out what type of retention calculation works best for you. We cover the three ways of measuring retention—N-Day retention, unbounded retention, and bracket retention—and different use cases for each in Chapter 3.
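To make the three calculations concrete, here is a minimal sketch of each, applied to a toy cohort whose members' active days (counted from each user's day 0) are given as sets:

```python
def n_day_retained(active_days, n):
    """N-Day retention: the user came back exactly on day n."""
    return n in active_days

def unbounded_retained(active_days, n):
    """Unbounded retention: the user came back on day n or any later day."""
    return any(d >= n for d in active_days)

def bracket_retained(active_days, start, end):
    """Bracket retention: the user came back at some point within [start, end]."""
    return any(start <= d <= end for d in active_days)

# Toy cohort: each user's set of active days after their day 0
cohort = {"u1": {1, 7}, "u2": {3, 9}, "u3": {7, 8}}

day7_n_day     = sum(n_day_retained(d, 7) for d in cohort.values()) / len(cohort)
day7_unbounded = sum(unbounded_retained(d, 7) for d in cohort.values()) / len(cohort)
print(day7_n_day, day7_unbounded)  # unbounded retention is always >= N-Day
```

Note how u2 counts as Day-7 retained under the unbounded definition (they returned on day 9) but not under N-Day; this is why the right choice depends on your usage interval.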
Then, based on your usage interval, create cohorts of your new, current, and resurrected users and graph a retention curve for each of these groups of users. We recommend collecting at least 3 months of data before doing this step.
Recall from Chapter 1 that the shape of these retention curves can tell you at a high level how your retention is trending for each of these groups of users. First look at the shape of the current user retention curve: is it going straight to zero? Is it flattening out at some point? How many users are coming back at certain intervals of time? Do you need to focus on shifting the curve up, or flattening the curve?
Next, compare the shapes of your new user and resurrected user retention curves against your current user curve to get a high-level view of how each group is doing.
Once you benchmark your retention metrics and diagnose any problems, it’s time to set retention goals.
Brian Balfour, CEO of Reforge and previously VP of Growth at HubSpot, likens the retention building process to constructing a machine. If you're able to set up the right processes and incentives, then growth will basically run itself. That's why it's so critical to set retention goals for you and your team.
We recommend using a goal-setting system called OKRs (Objectives and Key Results), which was invented by Intel and popularized by Google. Your retention plan could take the shape of a single objective backed by a few measurable key results.
Balfour recommends coming up with three key results separated out in terms of likelihood that they’ll succeed. You should have a 90% chance of achieving the first one, and a 50% and 10% chance of achieving the second and third, respectively. This ensures you have shorter term goals that can keep morale high but also have ‘reach’ goals that drive you and your team to go above and beyond.
There are lots of different retention tactics and strategies that you can spend time experimenting with. Using a framework like OKR allows you to stay laser-focused on your retention goals.
Once you’ve decided on your goals and the metrics that you’ll use to track the progress toward those goals, you need to make these metrics easily visible to everyone on your team. Putting these metrics on a dashboard on a big screen in your office is a great way to keep the goal top of mind for everyone and keep your team aligned.
In fact, a recent survey showed that companies that set and track key metrics are more likely to reach their goals, and that teams who track these metrics in real time are twice as likely to reach them as those who don't.
When one of our mobile gaming customers completed the playbook process, a few insights really stood out and resulted in experiments or product changes that they quickly implemented.
Part of the company’s revenue comes from showing ads in their mobile apps. They noticed that for new and resurrected users, seeing ads on their first day had a negative correlation with retention. However, ad impressions didn’t seem to have any impact on retention if the first ads were seen on a user’s second or third day.
They hypothesized that ads were a distraction and negative experience for new (or newly resurrected) users, but that once a user had been initially hooked on playing games, they actually didn’t mind seeing ads anymore. From this finding, the team decided to test not showing any ads to new users on their first day and to then start showing them on the second day.
During new user analysis (Chapter 6), they found that one of the behavioral drivers for successfully passing the Onboarding phase was to play at least 6 games on Day 0. As a result, the company decided to test implementing a quest-based system in the new user experience. A “quest” would encourage users to complete a string of games to unlock a reward.
Monitoring and sharing data is a crucial part of forming a data-informed company, which is why we make it easy to pin Amplitude charts to dashboards and share these with your team.
You can also share dashboards with teams and set up regular email reports to go out to the right people.
Of course, the metrics you choose to track over time depend entirely on the goals you set. We’ll provide some recommendations to choose from or adapt—the best metrics will be the ones that are custom to your business.
Any company will benefit from measuring their active user makeup using Lifecycle or a similar framework, which we discussed in Chapter 3. We recommend graphing your lifecycle breakdown and pulse metric every period and comparing how each user stage is changing period over period.
This can help you easily course-correct if you notice concerns like an increase in dormant users or a dip in current users. You can also track whether the overall health of your user base is improving as you implement your new retention strategies.
As you work on the goals of the Retention Lifecycle Framework, Lifecycle shows you how you're doing at all four at a glance.
At the end of Chapters 5-7, we've provided some recommended metrics to track over time to see how you're doing at improving current, new, and resurrected user retention. Here's a summary table of those metrics:

| Current Users | New Users | Resurrected Users |
| --- | --- | --- |
| Retention over time of all Current Users and key current behavioral personas | Retention over time of your New Users and key new behavioral personas | Retention over time of your Resurrected Users and key resurrected behavioral personas |
| Size and percentage breakdown of your important behavioral personas. Are you getting more people into important personas? | Bracket retention curve that follows your New > Onboarding > Value Discovery > Habit Formation phases | Downstream metrics from reengagement campaigns, like retention and critical funnel conversion rate |
| Stickiness of critical events | Conversion rate through your onboarding funnel | Stickiness of critical events |
| Conversion rate over time through critical path funnel | The percentage of new users who become current users | Conversion rate over time through critical path funnel |
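"Stickiness of critical events" can be approximated as the average fraction of days in the period on which users performed the event (a DAU/MAU-style proxy); the per-user day counts below are illustrative:

```python
def stickiness(active_day_counts, days_in_period=30):
    """Average fraction of days in the period on which users performed the
    critical event -- a DAU/MAU-style stickiness proxy."""
    return sum(active_day_counts) / (len(active_day_counts) * days_in_period)

# e.g. three users who performed the critical event on 15, 6, and 3 of 30 days
print(round(stickiness([15, 6, 3]), 2))
```

Tracking this number period over period tells you whether engagement with the critical event is deepening, independently of how many users you have.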
After working through all the chapters of the playbook, you’ll have lots of insights that lead to ideas about what might help you improve retention. How do you decide what to work on first? It’s easy to get paralyzed by all of the possibilities.
Prioritizing your growth ideas is crucial to making progress toward your goals.
There are some great resources out there on frameworks for prioritizing growth experiments, so we won't reinvent the wheel. In particular, Brian Balfour and Sean Ellis, founder of GrowthHackers.com, have both shared their processes, and we highly recommend checking out their work for more details.
Ultimately, you need to pick a process that works for your company and use it to keep accountable to your goals.
As Balfour puts it: “There is no one right or perfect growth process. The important part is just to have one, stick to it, and improve it over time.”
Here are some principles, drawn mainly from Balfour’s and Ellis’ processes, which share many similar components.
First, start with a central place to keep a backlog of all of your ideas. A simple spreadsheet works great for this.
| Idea | Impact | Confidence | Ease |
| --- | --- | --- | --- |
| Create push notifications for reengagement of dormant users | 8 | 6 | 8 |
| Enable social logins | 6 | 5 | 5 |
| Add onboarding step encouraging users to set a daily reminder | 9 | 8 | 7 |
To prioritize which ideas to work on first, you can score each idea on a few factors. Sean Ellis' ICE score is a great way to rate ideas on three key factors: Impact, Confidence, and Ease.
Impact: What's your hypothesis on the impact this experiment will have? You can use quantitative data from your playbook analysis and previous experiments to back this up, or qualitative data from user feedback.
By identifying the expected outcome or value of making a change, you have something quantitative to prioritize experiments and measure your actual results against. Balfour recommends thinking about your hypotheses like this:
If successful, (variable) will increase by (impact), because (assumptions).
For retention specifically, multiply the expected increase in retention by the number of users who would be impacted—that gives you a sense of how much the change could impact your overall retention.
Confidence: How sure are you that your hypothesis is correct? This one can be a little hard to decide, especially when you're starting out, but as you run more experiments over time, it gets easier.
If you have a lot of data to back up the experiment, you’d assign a higher confidence score. If the idea is based more on a hunch or is something completely new, you’d give it a lower score.
Ease: How much work will implementing this experiment take? Think about the time it will take from each team involved, like design, marketing, product, and engineering.
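Scoring and sorting the backlog is easy to automate once the three factors are in a spreadsheet or a script. The sketch below multiplies the factors (averaging them is an equally common convention) using the example ideas and scores from the backlog table above:

```python
# Example backlog; each row is (idea, impact, confidence, ease), scored 1-10
backlog = [
    ("Create push notifications for reengagement of dormant users", 8, 6, 8),
    ("Enable social logins", 6, 5, 5),
    ("Add onboarding step encouraging users to set a daily reminder", 9, 8, 7),
]

def ice(row):
    """ICE score: multiply Impact, Confidence, and Ease."""
    _, impact, confidence, ease = row
    return impact * confidence * ease

# Highest-scoring ideas first
for idea, impact, confidence, ease in sorted(backlog, key=ice, reverse=True):
    print(f"{impact * confidence * ease:4d}  {idea}")
```

Whichever convention you pick, apply it consistently so scores stay comparable across ideas.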
Once you’ve prioritized your ideas based on impact, confidence, and ease (and any other factors that might matter to you), it’s time to design and run those experiments.
This reference provides a template of an experiment doc based on Balfour’s process, which you can make a copy of to use for your own team.
When designing your experiments, Balfour advises to come up with the ‘minimum viable test’ to understand your hypothesis. Make sure you also take into account the sample size of users you’ll need to see a significant result.
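On the sample-size point, a rough per-group estimate for detecting an absolute lift in a retention rate can be computed with the usual normal-approximation formula. This is a back-of-the-envelope sketch with z-values hard-coded for a two-sided alpha of 0.05 and 80% power, not a substitute for a proper power calculation:

```python
from math import ceil

def sample_size_per_group(p_baseline, min_lift):
    """Rough per-group sample size to detect an absolute lift in a proportion
    (e.g. a Day-7 retention rate), via the normal approximation.

    z-values are hard-coded for two-sided alpha=0.05 and 80% power; use
    scipy.stats.norm.ppf to derive them for other settings.
    """
    z_alpha, z_beta = 1.96, 0.84
    p1, p2 = p_baseline, p_baseline + min_lift
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(variance * ((z_alpha + z_beta) / min_lift) ** 2)

# e.g. baseline Day-7 retention of 20%, smallest lift worth detecting: 2 points
print(sample_size_per_group(0.20, 0.02))
```

Estimates like this make it clear why small expected lifts demand large cohorts, and why the "minimum viable test" should target a change big enough to measure.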
After each experiment, compare the results to your original hypothesis. How close did you get? What impact did you see on retention? Most importantly, why did you get the result that you did? Make sure to record learnings and any action items, like rolling out a positive experiment to the whole user base.
Both Ellis and Balfour recommend having a weekly growth meeting to discuss experiments, their results, and action items. In addition, they talk about the importance of keeping up a regular cadence of experiments. Ellis is a proponent of the idea of “high tempo testing”—in short, the more tests you run, the more you learn. The faster you can run tests, the faster you can learn, adjust, and ultimately drive growth.
Make sure you evaluate experiments on a regular interval and readjust your goals as necessary.
One of the customers that we’ve discussed throughout this playbook creates a mindfulness app. When they did the current user retention analysis (Chapter 5), they found that people who set a daily reminder (‘Alert Savers’) had about 3X the retention of users who did not set a daily reminder.
At the time, the Daily Reminder feature was buried down at the bottom of the Settings page—most users never even found it. Since such a small number of their users (1% of current users) were even setting an alert, they couldn't know whether this was a causal relationship. It could be that the power users of their app, who would have been well-retained anyway, were the ones digging into the Settings page and finding the Reminders feature.
After finding this huge positive increase in retention that correlated with setting a reminder, they ran an experiment to test whether getting users to set a daily reminder would increase their retention. In the test, after a user completed their first meditation session, they were immediately shown a screen encouraging them to set a daily reminder.
People who set a reminder from the new prompt saw a boost in retention equal to that of the users who had previously found the reminder feature on their own, indicating that the relationship between daily reminders and retention was causal, not just correlational.
In addition, 40% of users who saw the prompt went on to set a daily reminder, so the new prompt provides a big boost to overall new user retention. Based on these results, the company rolled out the new reminder prompt to all of their users.
In Amplitude, you can view experiment results by sending relevant details, like the experiment name and experimental group, as user properties. We also integrate with popular split testing platforms like Optimizely.
Here’s a retention graph showing the difference between the Control and Variant #1.
For more details, we recommend checking out this article: How to Analyze A/B Test Results in Amplitude.
In October 2016, Amplitude launched a completely new redesign of its user interface.
One of our goals with this redesign was to make analytics more accessible to everyone across an organization, not just the head data scientist or product person. This meant really understanding why different users care about analytics and how they get what they need in Amplitude. We used both quantitative and qualitative user data to identify exactly who we were building our product for.
Quantitative data: Using Amplitude’s Personas feature (which we described in Chapter 4), we identified several different behavioral personas among our current users.
Qualitative data: We went out into the field and interviewed analytics users, asking open-ended questions about why they care about analytics and what they need to get out of a product like Amplitude.
Through quantitative means, we grouped our users into different clusters based on the actions they performed in the platform; through qualitative means, we assigned identities and characteristics to these personas. Ultimately, this research informed the new product and design choices made in Amplitude 2.0.
Building a product with strong retention is about listening to your users, in both a qualitative and quantitative sense.
While this playbook emphasizes quantitative processes, qualitative feedback also adds value to your analyses. To holistically understand how users engage with your product, try supplementing your analytics insights with direct ways of talking to your users, such as user interviews, in-app surveys, or feedback gathered by your support team.
Quantitative and qualitative data complement each other; your behavior data can inform the type of qualitative data you seek, and your qualitative feedback can be validated (or not) with analytics.
Throughout the process of putting this playbook into action, it’s also worth thinking about ways to communicate directly with your customers and when it makes sense to do so.
How often you repeat the playbook process depends on the nature of your product and how often you update it or launch new features.
Here are some of the situations in which we recommend repeating the playbook analyses, in full or in part:

- After launching a major new feature or a redesign of your product
- After a meaningful shift, up or down, in your key retention metrics
In the absence of any of these situations, we recommend monitoring your key metrics with every product release, and then running the playbook on a less frequent basis. For example, our company sets goals quarterly, so running the playbook quarterly might be a good cadence for us.
Even if you don't run through the whole playbook, we recommend looking at your behavioral personas for new, current, and resurrected users at least once a quarter, to make sure you're always up to date on how users are behaving.
By now, you’re hopefully well-equipped with the right tools and frameworks to analyze your product’s retention at all stages of the retention lifecycle. It’s now your turn to put the playbook into practice and start changing the shape of your retention curve.
Before getting to work, take a moment to:

- Determine your critical event and product usage interval
- Benchmark baseline retention for your new, current, and resurrected users
- Set retention goals and choose the metrics you'll use to track them
You’re now ready to take your product’s growth to the next level!