I frequently get asked the following question:
> You talk to lots of teams. What behaviors do you observe on the highest performing data-informed product teams?
I tend to bristle at “high performing” (because product is highly contextual), but I do my best to answer. What behaviors do I observe on healthy, data-informed product teams?
More healthy skepticism about insights and analyses. Team members offer up their work for more scrutiny and improvement. The team celebrates surfacing issues (instead of treating them as setbacks).
In a similar vein, analysis is a team sport. You are much more likely to observe people pairing on insights, either in real time or async. They approach analysis in ways that make it easy for someone else to “pick up” their work (e.g. notebooks, annotations, etc.). When people leave the company, they leave with much more data literacy than when they started, and they owe that to their teammates. You are more likely to see things like weekly data jams, open office hours, and Slack channels for insight feedback.
They don’t throw new hires into the deep end when it comes to understanding the data and the tools. We observe onboarding activities like walking new hires through the taxonomy, what triggers what, and how the tools work. The savviest Amplitude customers run their own data academies and maintain helpful documentation around analytics. When SQL is required—and hopefully it isn’t for everyone—they REALLY teach it.
Instrumentation/telemetry is part of the normal design and development workflow. No “instrumentation tickets”. Not crammed in as an afterthought. No heavy-duty dashboard specs handed down from above. Whenever possible, the team with the most domain awareness does the instrumentation. Overall, there is much less drama and fanfare around instrumentation.
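As a minimal sketch of what this can look like (an invented feature and a generic `track` helper, not any particular vendor's SDK), the event call lives right next to the feature code it describes:

```python
def track(event_name: str, properties: dict) -> None:
    """Hypothetical helper that forwards an event to the analytics pipeline."""
    print(event_name, properties)  # stand-in for a real SDK call


def share_playlist(playlist_id: str, channel: str, track_count: int) -> None:
    """Feature code and its telemetry live side by side, owned by the same team."""
    # ...the actual sharing logic would go here...

    # Instrumented as part of the feature work, not as a follow-up ticket.
    track("Playlist Shared", {
        "playlist_id": playlist_id,
        "share_channel": channel,          # e.g. "email", "slack"
        "playlist_track_count": track_count,
    })


share_playlist("pl_123", "email", track_count=12)
```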
There’s a strong focus on the usability and accessibility of the data. Consistent naming conventions. Human-readable, understandable names. Starter projects. Data dictionaries. Frequent audits and cleanups.
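A data dictionary does not have to be fancy. As an illustration (invented events and owners), it can be a small, versioned structure that pairs each human-readable event name with a description, its properties, and an owner:

```python
# Illustrative data dictionary entries (invented events and owners).
# The point: human-readable names, one naming convention, and a clear owner,
# so audits and cleanups have something to check against.
DATA_DICTIONARY = {
    "Playlist Shared": {
        "description": "User shared a playlist via email or Slack.",
        "properties": ["playlist_id", "share_channel", "playlist_track_count"],
        "owner": "growth-team",
    },
    "Checkout Completed": {
        "description": "User finished checkout and payment succeeded.",
        "properties": ["order_value_usd", "payment_method"],
        "owner": "commerce-team",
    },
}
```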
Much, much more insight re-use. Less re-inventing the wheel (recreating insights because of low data trust). Branching off of prior work in new and interesting ways. The delta here is HUGE. When data trust is low, the only way someone trusts what they see is if they do it themselves.
The team measures the impact of everything they ship, but uses the right approach for the job. Sometimes gut-checking basic counts or a simple linear regression does the trick. Sometimes A/B testing, multivariate testing (MVT), or multi-armed bandit testing is the way to go. Often, getting on the phone and calling customers is the best option. The key point is that they commit to doing their best, within reason, to understand the impact of their work. This includes rituals like product reviews, learning reviews, and insight jams.
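For the lightweight end of that spectrum, a gut-check can be a few lines of Python. The numbers below are invented, and the two-proportion z-test comes from statsmodels; any similar test (or a plain look at the rates) works:

```python
from statsmodels.stats.proportion import proportions_ztest

# Invented counts: conversions and exposures for control vs. variant.
conversions = [132, 168]
exposures = [2400, 2500]

for label, conv, n in zip(["control", "variant"], conversions, exposures):
    print(f"{label}: {conv / n:.1%} conversion ({conv}/{n})")

# A rough read on whether the difference is more than noise.
z_stat, p_value = proportions_ztest(conversions, exposures)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```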
There is a focus on measuring to LEARN vs. measuring to “keep score”, proxy trust, gauge performance, or prove a point. You are much less likely to observe someone, in the presence of disconfirming information, saying “well, we can’t change that metric because everyone agreed to it.” Instead, they iterate on what and how they measure! Classic double-loop learning. One big way this manifests is in the ratio of genuine data storytelling to data-for-pitching, and in a focus on continuously improving methods and approaches.
They kill features and experiments. Experiments “fail” as often as they succeed. While that is not anyone’s favorite moment, it also isn’t frowned upon or shunned. A team that succeeds all the time is either lying to itself or playing it far too safe.
Analytics experts operate less as a question/answer service, and more as force multipliers. Up-leveling the org. Scaling trusted expertise. The upholders and evangelists for the craft.
They use data as much (or more) for framing the opportunity and surfacing promising options as they do for “setting success metrics” or “measuring outcomes”. In this sense, measurement is not tacked on. We observe more exploratory analysis to form hypotheses, shape strategy, and “think outside the box”. More ad-hoc exploration on the part of designers and developers as they consider how to intervene. More fluid, less back-loaded.
They have a standard set of artifacts for each team, initiative, and strategic pillar. Every initiative has a dashboard, one or more notebooks, and some documentation on the relevant events. The department has a set of dashboards and notebooks related to its North Star Metric and Inputs. Same with each experiment. In short, less re-inventing the wheel with each initiative.
A healthy approach to reducing uncertainty and decision making under conditions of uncertainty. Not seeking ultimate certainty. Avoiding certainty theater. Realizing when they SHOULD ignore the data, most notably when they are trying something that involves new behaviors.
Higher levels of confidence when it comes to taking measured risks / experimenting. They experiment safely, not haphazardly.
In many ways, measurement and data become part of how they work. It is not a big deal. It is not all that special. It is neither deified nor vilified. It just is.
The list, summarized:
- Healthy skepticism
- Analysis as team sport
- Good onboarding
- Instrumentation part of normal workflow
- Focus on data usability
- Insight re-use
- Measure impact of all work (using reasonable methods)
- Measure to learn vs. control/manage
- Kill features/experiments
- Analytics experts as force multipliers
- Use data to frame strategy, shape opportunities
- Standard set of artifacts
- Healthy approach to grappling w/uncertainty
- High levels of confidence when safely experimenting
- Measurement “just is”