How I Amplitude Series

7 Lessons from 7 Years Using Amplitude

Saish Redkar has been a cornerstone of the Amplitude community since 2017. In this edition of "How I Amplitude," Saish shares 7 lessons learned after 7 years of using Amplitude.

“Product analytics is a team sport. If you don’t pass and share, no one wins. Make sure to pass and share insights across all teams.”

Saish Redkar
DataRobot
Senior Product Manager

Quick Amplitude Tips

Based on the different roles I’ve played over the years, I want to highlight some underutilized Amplitude features that I’ve found useful.

If you want to surface insights about how your organization actually uses Amplitude, the Dashboard REST API is the tool to reach for. You can call it from Python or any HTTP client and work with the raw JSON response. From that, you can build usage reports that are a goldmine of information about what users consume inside Amplitude, which in turn helps you create charts that are more relevant to them.
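As a sketch of that workflow, here’s how a raw JSON response could be folded into a simple usage report. The response shape below mirrors Amplitude’s Event Segmentation endpoint as I understand it, but the field names and numbers are illustrative, so verify them against the Dashboard REST API docs (the real call is a GET to amplitude.com/api/2/events/segmentation, authenticated with your API key and secret key via Basic auth).

```python
import json

# Illustrative response shaped like Amplitude's Event Segmentation
# endpoint; the exact field names are an assumption -- check the
# Dashboard REST API docs before relying on them.
sample_response = json.dumps({
    "data": {
        "seriesLabels": ["Chart Viewed", "Dashboard Viewed"],
        "xValues": ["2024-01-01", "2024-01-02"],
        "series": [[120, 95], [40, 55]],
    }
})

def usage_report(raw_json: str) -> dict:
    """Sum each event's daily series into a total-per-event report."""
    data = json.loads(raw_json)["data"]
    return {
        label: sum(counts)
        for label, counts in zip(data["seriesLabels"], data["series"])
    }

print(usage_report(sample_response))
# {'Chart Viewed': 215, 'Dashboard Viewed': 95}
```

In practice you would fetch `raw_json` with an HTTP library instead of the embedded sample, then feed totals like these into a recurring usage report for your stakeholders.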

Cohort Comparisons are great for product managers but often underutilized - they give you detailed insight into event distributions, how users in other cohorts are doing, and whether your specified and comparison cohorts overlap.

Root Cause Analysis is a great feature for data analysts. It acts like multiple analysts working in parallel to identify which properties caused a drop in your metrics - for example, whether the drop came from a particular region or data source.

From hundreds of conversations on the forum and in Slack, combined with years of using Amplitude, I've distilled my learnings into 7 must-know lessons.

1. Event Tracking != Product Analytics

To start, I want to make clear that event tracking is not product analytics. This matters because I often see product analytics setups stall right after event tracking is in place.

Every product professional wants to make data-driven decisions. To do so, they need to understand and assess their company’s product analytics maturity. Event tracking is just one of the components.

Successful analytics requires three main things:

  • the right questions
  • the right data
  • the right tool setup

You need to know the why behind your product instrumentation and ask better questions. Chalk out the key questions you want to answer about your product and iterate on them as you progress. While doing that, ask yourself:

  • Is your product infrastructure able to capture the right event data?
  • Can you attach the right customer account properties to your user event data?
  • Can you fully analyze the product data you have?

A bad tool setup will eventually undermine your team’s analytics efforts, and you won’t be able to justify the ROI of your product analytics strategy.

The real magic starts when your key events are implemented with the right context. The data is ready to be analyzed, your hypotheses are ready to be tested, and decisions are ready to be made.

2. Your insights are only as good as your tracking plan

Amplitude won’t make your product and growth problems disappear. Your ability to derive insights in Amplitude is only as good as your event data schema and instrumentation. You can only get so much out of Amplitude if your tracking plan isn’t well designed.

One of the most common pitfalls to avoid during the tracking plan phase is tracking everything and discovering nothing. A good starting point for a tracking plan is to consider what you need to understand about the product.

Some pointers that have helped me include keeping the event naming convention as simple as possible for everyone to understand. You also need to capture enough context with your event properties as they supercharge your tracked events.

Lastly, try to balance the insight-to-noise ratio in any tracking plan. You can do this by removing events that essentially answer the same question in different ways.

You want to avoid event bloat. It often occurs when you try to capture every possible event rather than working backwards from a specific need.
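One lightweight way to keep naming honest is to lint the tracking plan itself. The sketch below assumes a Title Case “Object Action” convention (e.g. “Song Played”); the pattern and the event names are made up for illustration, so adapt them to whatever convention your team actually picks.

```python
import re

# Assumed convention: Title Case "Object Action" names like "Song Played".
# The pattern requires two or more capitalized words separated by spaces.
NAME_PATTERN = re.compile(r"^[A-Z][a-z]+( [A-Z][a-z]+)+$")

def audit_tracking_plan(event_names):
    """Return the names that break the convention, so they can be
    fixed before instrumentation rather than after."""
    return [name for name in event_names if not NAME_PATTERN.match(name)]

plan = ["Song Played", "song_play", "Playlist Created", "clickBtn"]
print(audit_tracking_plan(plan))
# ['song_play', 'clickBtn']
```

Running a check like this whenever the plan changes keeps inconsistent names from ever reaching instrumentation.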

3. Data governance is non-negotiable

Any tool or process that keeps your data organized and its quality consistent falls under data governance.

Data is only as useful as it is accurate and discoverable. Without any established data governance practices, it’s tough to achieve those things.

Teams are always looking to get to their minimum viable product (MVP) of event tracking and create dashboards for stakeholders quickly. It’s tempting to roll out data governance on the fly, but the earlier you implement it, the richer the rewards.

Data governance practice can be classified into three buckets:

  • Instrumentation standards - How your taxonomy is defined, how your different data sources are managed, and how you establish scalable approaches to instrumentation and testing.
  • Continuous education - Educating your internal users on how data should be accessed, the type of data quality you have, your caveats, and your metrics definitions.
  • Ownership - Defines who the data governor should be, how data quality issues should be handled, and basic standard operating procedures.

Data governance is a team effort, and data issues will inevitably arise. By collaborating with engineering teams early on, you can establish practical guardrails and go-to approaches for when your infrastructure goes down. When teams are aligned on product analytics goals, it’s easier to keep pace with a rapidly changing product, and your data governance stays in line with your engineering goals as well.

Maintaining a tool like Google Sheets can go a long way toward documenting and maintaining a taxonomy. It’s simple enough for everyone to use, and it’s a good indicator of the state of your event taxonomy: if maintaining it gets out of control, either your event volume has grown or your taxonomy has become too complex.

Use your None values tactically. Once your data ingestion and Amplitude implementation are stable, evaluate key user and event properties to assess their None cardinalities. This gives you a baseline of what’s coming in and what’s missing from your event data, and it’s the best time to understand and document caveats.
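A None-cardinality audit can be as simple as a pass over exported event rows. This sketch uses invented property names and rows; the idea is just to baseline how often each property arrives empty.

```python
# Hypothetical exported event rows; property names are illustrative.
events = [
    {"plan_type": "pro",  "region": "us-east", "referrer": None},
    {"plan_type": None,   "region": "us-east", "referrer": None},
    {"plan_type": "free", "region": None,      "referrer": None},
]

def none_rates(rows):
    """Share of rows in which each property is missing (None)."""
    keys = {k for row in rows for k in row}
    n = len(rows)
    return {k: sum(1 for r in rows if r.get(k) is None) / n for k in keys}

for prop, rate in sorted(none_rates(events).items()):
    print(f"{prop}: {rate:.0%} None")
# plan_type: 33% None
# region: 33% None
# referrer: 100% None
```

A property sitting at 100% None (like `referrer` above) is exactly the kind of caveat worth documenting before stakeholders build charts on it.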

4. Stakeholder trust is key

There’s a chance that some stakeholders won’t trust your Amplitude data.

It’s important to know that data trust is a direct function of your Amplitude implementation. Are you starting from scratch, or migrating from an existing source of truth? Not everyone has the luxury of starting from scratch in Amplitude, and those who don’t will be contending with a second source of truth: the system where the organization historically collected event data.

Data in Amplitude can often get siloed and labelled as “product data.” Try to align your key metric definitions as closely as possible with your sources of truth. Document the technical definitions of unique users and total sessions in both your source system and Amplitude; Amplitude’s documentation on how it calculates unique users and sessions can help here.

Conduct regular cross-validation exercises. Comparing data from different sources helps you identify inconsistencies, reconcile metrics and KPIs, and make sure they align across all sources.
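A minimal version of such a cross-validation pass might look like the following. The metric names, numbers, and 2% tolerance are all illustrative assumptions, so tune them to your own KPIs.

```python
# Hypothetical values for the same KPIs pulled from Amplitude and
# from a warehouse acting as the primary source of truth.
amplitude = {"daily_active_users": 10450, "total_sessions": 31200}
warehouse = {"daily_active_users": 10000, "total_sessions": 31050}

def reconcile(a, b, tolerance=0.02):
    """Return metrics whose relative difference exceeds the tolerance,
    using the second source (b) as the baseline."""
    flagged = {}
    for metric in a.keys() & b.keys():
        diff = abs(a[metric] - b[metric]) / max(b[metric], 1)
        if diff > tolerance:
            flagged[metric] = round(diff, 3)
    return flagged

print(reconcile(amplitude, warehouse))
# {'daily_active_users': 0.045}
```

Here sessions agree within tolerance while daily active users diverge by 4.5%, which is the kind of gap worth investigating and documenting for stakeholders.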

You need to establish guidelines on when to use Amplitude and when to use your primary source of truth.

5. 80% of insights come from 20% of chart types

Amplitude offers more than ten chart types plus many supporting features. Users sometimes try to force insights out of every chart type and end up with a poor experience. I recommend classifying charts into those that help with monitoring and those that are useful for data exploration.

6. Onboarding business users is crucial

It’s as crucial to onboard your first few users in Amplitude as it is to get your first insight.

Product analytics is a team sport. If you don’t pass and share, no one wins. Make sure to pass and share insights across all teams that will use Amplitude.

The best way to learn the platform is to use it. The teams with the highest internal adoption rates work in projects with clearly and concisely defined data. The more they use it, the more they share their hypotheses and insights.

There are three things I recommend you make use of to improve internal adoption rates:

  • Notebooks - A powerful feature for defining and sharing onboarding guides, FAQs, and storytelling through data.
  • Slack integration - Enabling chart unfurling for URLs gives users a quick view of the chart from within Slack.
  • Team-specific office hours - Rather than one general meeting where every team asks Amplitude questions, team-specific sessions let questions be answered in a way that’s tailored to each team.

7. Insights are meaningless unless they result in action

The dashboard alone doesn’t create value; the value comes from the actions users take on its insights. Insights generated via Amplitude’s charts are meaningless unless you act on them - and soon. Data becomes less accurate and relevant over time, so acting on insights in a timely manner makes all the difference.

Document and classify insights as either interesting-to-know or actionable. Then conduct regular review sessions with your analytics team and stakeholders to quantify them. This will help you justify the ROI of your team’s effort and the cost of the platform.

See you in the community!

Connect with Saish and other practitioners in our community. We focus on in-the-weeds, actionable content. You'll find other Amplitude users sharing best practices, and we have plenty of programs to help you connect with others working on similar things.