How Product Analytics Will Tell You Which Customers You Should Get Feedback From

Getting feedback from the right users is crucial for getting focus across your team about what needs to improve.

January 26, 2017
Alicia Shiu
Growth Product Manager

Product managers are always getting told to talk to customers more. That’s simple enough to do when you’re first getting started, but gets harder and harder over time. As your customer base gets bigger and the number of things you have to do grows exponentially, picking the right set of customers to talk to becomes a challenge.

“It was easy in the beginning, because we knew most of the people using the tool as we worked on the initial version,” says HubSpot’s Dan Wolchonok.

“As we got bigger, my feelings usually boiled down to these four words: talk to customers more. I think a critical skill, however, is learning how to talk to the right customers.”

To figure out who to talk to, Dan Wolchonok used behavioral analytics to narrow down his customer base. Rather than get a random subset, or allow only the loudest customers to have their voices heard, Wolchonok sought out the exact customers he needed to solve his most pressing product issues.

Identify your goals

The first step, according to Wolchonok, is defining what part of your product you want to understand better. What are you primarily concerned with?

This could be anything from how well you’re converting a particular subset of customers from trial to paid, to why certain types of customers are just doing “drive-bys” of your app and not sticking around.

If you can express your goals in terms of user actions within the product, you can generate a representative behavioral cohort. You can then dig into this cohort to identify the customers you know have had the kind of experience you want to understand better.
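To make the idea concrete, here is a minimal sketch of what a behavioral cohort boils down to: filtering users by the actions they performed. This is not Amplitude's API; the event schema and names are hypothetical.

```python
# Hypothetical event log (assumed schema): one dict per tracked event.
events = [
    {"user_id": "u1", "event": "Play Video"},
    {"user_id": "u1", "event": "Play Video"},
    {"user_id": "u2", "event": "Play Video"},
    {"user_id": "u2", "event": "Open App"},
]

def behavioral_cohort(events, event_name, min_count):
    """Return the set of users who performed `event_name` at least `min_count` times."""
    counts = {}
    for e in events:
        if e["event"] == event_name:
            counts[e["user_id"]] = counts.get(e["user_id"], 0) + 1
    return {uid for uid, n in counts.items() if n >= min_count}

engaged = behavioral_cohort(events, "Play Video", 2)
# engaged == {"u1"}
```

Everything that follows in this post is, at heart, a variation on this filter: a condition over user behavior that yields a list of specific people to contact.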

Let’s walk through some common retention and activation challenges to see how behavioral cohorts can help get us closer to useful feedback.


Engaged users who leave

Sometimes, people who seem like they should be long-term users of your product stop using it. You want to find out why.

To get to the bottom of this retention problem, you need to first determine the engagement metric you want to analyze. If your app is based on video sharing, maybe you define engagement as a user playing a video more than five times.

You’d then create a behavioral cohort based on that engagement metric:

[Image: Amplitude behavioral cohort definition]

Next, you’d create a retention chart. The chart shows you users who played videos >= 5 times, but still wound up churning within their first 30 days.

[Image: Amplitude retention chart]

This isolates your target chunk of users who were engaged at one point but eventually left your app. Now you can reach out specifically to these users and find out what it was that drove them away.
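As a rough sketch of what this combined cohort computes, assuming a simple per-user summary (signup date, last active date, and play count are all hypothetical fields):

```python
from datetime import date, timedelta

# Hypothetical per-user records for a video-sharing app.
users = {
    "u1": {"signup": date(2017, 1, 1), "last_active": date(2017, 1, 20), "plays": 8},
    "u2": {"signup": date(2017, 1, 1), "last_active": date(2017, 3, 1), "plays": 12},
    "u3": {"signup": date(2017, 1, 1), "last_active": date(2017, 1, 10), "plays": 2},
}

def engaged_churned(users, min_plays=5, churn_window_days=30):
    """Users who played >= min_plays videos but went inactive within
    churn_window_days of signing up: engaged at one point, then gone."""
    out = set()
    for uid, u in users.items():
        engaged = u["plays"] >= min_plays
        churned = u["last_active"] - u["signup"] <= timedelta(days=churn_window_days)
        if engaged and churned:
            out.add(uid)
    return out

# engaged_churned(users) == {"u1"}: u2 is still active, u3 churned but was never engaged.
```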

Power users

Every app has “power users”—people who use a product far more than the average user. These are incredibly useful people to speak to, especially in the early days, because they can clue you in on what’s truly valuable about your product.

For any app, most users who sign up on a given day wind up churning in the first few days after signup. To find your power users, create a cohort of users who are still heavily active well after that churn period.

In this example, we look at the Stickiness chart for the action “Share Content” and we see that around 60% of users perform the action 2 days a week, a little over 25% perform it 3 days a week, and less than 10% perform it 4 days a week.

[Image: Stickiness chart]

These numbers are sizable. But then there is a small slice of the user base that is performing “Share Content” far more than other users—0.0252% are doing it every single day of the week.
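The stickiness numbers above reduce to a simple computation: for each user, count the distinct days in a week they performed the action, then look at what share of users hit each threshold. A minimal sketch, with made-up data:

```python
# Hypothetical: for each user, the distinct weekdays they performed "Share Content".
share_days = {
    "u1": {"Mon", "Wed"},
    "u2": {"Mon"},
    "u3": {"Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"},
    "u4": {"Tue", "Thu", "Sat"},
}

def stickiness(day_sets, min_days):
    """Share of users who performed the action on at least `min_days` distinct days."""
    qualifying = sum(1 for days in day_sets.values() if len(days) >= min_days)
    return qualifying / len(day_sets)

# The power-user slice: everyone active all 7 days of the week.
power_users = {uid for uid, days in share_days.items() if len(days) == 7}
# stickiness(share_days, 2) == 0.75; power_users == {"u3"}
```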

If you’re looking to build a stickier product, then the people who are already experiencing your product as very sticky are the exact kinds of people you want to talk to. If you’re able to get a deep understanding of what gets people hooked on your product, you can set out to build a product that’s more engaging for everyone.


Funnel drop-off

One of the biggest hurdles for any product with a free trial period is activation. When people sign up for a free trial but don’t actually use your product as intended, it can be useful to reach out to them and ask why. It could be that you’re pursuing new customers through the wrong channels, or not adequately explaining the value of your product when they first sign up.

One way to find these customers is to run a simple funnel analysis from the start of a trial to the purchase of a subscription, with key events in between. Let’s say we have a subscription social news app with a free trial period during which users can read a certain number of articles and share them with friends without paying.

By setting up a funnel with “Click Article” and “Connected Social Account” in between “Trial Start” and “Purchase Subscription,” we can see how efficiently users are moving through the process—and zero in on the cohorts we want to learn more about.

For instance, we could choose to email or message only the users who haven’t clicked on a single article despite signing up for the free trial:

[Image: Funnel analysis]

We could also reach out to those who chose not to connect social accounts or those who didn’t wind up purchasing subscriptions. It’s all about knowing what kinds of insights you want to get out of your users and then setting up the behavioral cohorts that you think will get you the right information.
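Under the hood, finding a funnel dropout is set logic over each user's events: they completed every step before a given one, but never performed that step. A sketch, with a hypothetical event schema:

```python
# The funnel steps, in order, and hypothetical per-user event histories.
funnel = ["Trial Start", "Click Article", "Connected Social Account",
          "Purchase Subscription"]

user_events = {
    "u1": ["Trial Start"],
    "u2": ["Trial Start", "Click Article"],
    "u3": ["Trial Start", "Click Article", "Connected Social Account",
           "Purchase Subscription"],
}

def dropped_before(user_events, funnel, step):
    """Users who completed every step before `step` but never performed `step`."""
    idx = funnel.index(step)
    out = set()
    for uid, events in user_events.items():
        done = set(events)
        if all(s in done for s in funnel[:idx]) and step not in done:
            out.add(uid)
    return out

no_clicks = dropped_before(user_events, funnel, "Click Article")
# no_clicks == {"u1"}: started a trial, never clicked a single article.
```

Swapping in a different `step` gives each of the other outreach lists: users who never connected a social account, or who never purchased.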

Funnel dropouts

Sometimes free trial users appear to be getting value out of your product, but at some point along the way they drop out of the funnel and never convert to a paid subscription.

For example, let’s say you have a CRM product. You’ve found that people who set up more than three actual records during their trials often end up buying a subscription.

When users have already started using your product in a work capacity but wind up not purchasing after their trial, you need to know what happened—whether the product stopped working as expected and they got frustrated, or whether it was never the right fit in the first place. These users are demoing your product and explicitly rejecting it—there’s a lot to learn here.

With Amplitude, you can set up a trial-to-purchase funnel and segment it with behavioral cohorts.

[Image: Trial-to-purchase funnel]

In the example above, the trial-to-purchase funnel is segmented by users who set up more than 3 CRM records and users who didn’t. We can easily zero in on the users who set up records but didn’t purchase and reach out to them with questions.
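Segmenting a funnel like this amounts to splitting trial users by a behavioral condition and comparing conversion within each segment. A minimal sketch, with hypothetical trial data:

```python
# Hypothetical trial users: CRM records created during trial, purchase outcome.
trials = [
    {"user": "u1", "records": 5, "purchased": True},
    {"user": "u2", "records": 4, "purchased": False},
    {"user": "u3", "records": 1, "purchased": False},
    {"user": "u4", "records": 7, "purchased": False},
]

def segment_conversion(trials, min_records=3):
    """For users who created more than `min_records` records, report the
    conversion rate and the non-converters worth reaching out to."""
    heavy = [t for t in trials if t["records"] > min_records]
    converted = sum(1 for t in heavy if t["purchased"])
    reach_out = [t["user"] for t in heavy if not t["purchased"]]
    return converted / len(heavy), reach_out

rate, reach_out = segment_conversion(trials)
# reach_out == ["u2", "u4"]: heavy users who walked away, the ones to interview.
```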

Put your user feedback to work

With Amplitude, you can export all users in a specific cohort to one of Amplitude’s messaging partners (Kahuna, Appboy, Urban Airship), or as a CSV that you can use as an email list. That gives you a wide range of channels, from push notifications to in-app messages, for asking those customers questions and learning more about their experience.
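Amplitude handles the export itself; the sketch below just shows the kind of CSV an outreach list boils down to, using Python's standard `csv` module and made-up user data:

```python
import csv
import io

# Hypothetical cohort export: user ids and emails for an outreach list.
cohort = [
    {"user_id": "u1", "email": "u1@example.com"},
    {"user_id": "u2", "email": "u2@example.com"},
]

def cohort_to_csv(cohort):
    """Serialize a cohort to CSV text usable as an email list."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["user_id", "email"])
    writer.writeheader()
    writer.writerows(cohort)
    return buf.getvalue()
```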

On the internal side, you might want to take what you’ve learned from talking to your customers and use it to help your team understand what your biggest product challenges really are. At HubSpot, Dan Wolchonok put together a graph showing the different reasons users gave for churning in their first week. Ranking each reason made it clear that HubSpot’s priority was explaining the product and its value better:

[Image: Ranked reasons for drive-by churn]

Getting feedback from the right users is crucial for getting focus across your team about what needs to improve. Using cohorts to identify the users who will provide valuable feedback turns your customer base from a black box into a source of your biggest growth opportunities.

About the Author
Alicia Shiu
Growth Product Manager
Alicia is a Growth Product Manager at Amplitude, where she works on projects and experiments spanning top of funnel, website optimization, and the new user experience. Prior to Amplitude, she worked on biomedical & neuroscience research (running very different experiments) at Stanford.