What Most Companies Get Wrong About Self-Service Analytics

Your analytics setup might let you run queries on your own. But if it’s too rigid, limited by data quality, or unable to scale data literacy, it’s not exactly self-service.

Perspectives
August 15, 2021
John Cutler
Former Product Evangelist, Amplitude

Gartner defines self-service analytics as follows:

Self-Service Analytics is a form of business intelligence (BI) in which line-of-business professionals are enabled and encouraged to perform queries and generate reports on their own, with nominal IT support. Self-service analytics is often characterized by simple-to-use BI tools with basic analytic capabilities and an underlying data model that has been simplified or scaled down for ease of understanding and straightforward data access.

With all due respect to our friends at Gartner, we think this misses the point. It is time to redefine self-service analytics. Why? Because products are built by empowered, cross-functional teams, and the IT-versus-business dichotomy is far less relevant in the world of product. When we frame the goal of self-service analytics as convenient, individual access to reports, insights, and data, we lose sight of the big picture, especially if we limit that definition to “basic analytic capabilities.”

Self-service is not the goal. Impact is the goal.

Let’s explore some of the issues with three examples of how self-service analytics might not live up to expectations:

  1. Example #1: An analytics team partners with a data engineering team to create a dashboard for a decision maker. The dashboard is “self-service” and answers the exact questions finalized during the discovery process. It’s perfect. Until it isn’t. What about new questions? What about exploring the data?
  2. Example #2: A team decides to use an “inch deep, mile wide” approach. They track everything, but at very low fidelity (e.g., they don’t use event properties or enforce a taxonomy; see the sketch after this list). In theory, everyone now has access to the data using a self-service tool. Just log in! But at what cost to decision quality?
  3. Example #3: A team of 40 product managers is given access to a self-service tool. They go to town. Dashboards pop up everywhere. The tool is powerful and customizable, but the team isn’t yet very data literate. Yes, this is self-service, but it isn’t leading to greater impact.

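To make Example #2 a bit more concrete, here is a minimal, illustrative sketch in TypeScript. The track helper below is a hypothetical stand-in for whatever analytics SDK a team uses, not any specific product’s API; the point is the contrast between bare, inconsistently named events and a consistent taxonomy with event properties.

```typescript
// Hypothetical stand-in for an analytics SDK call; a real SDK will differ.
function track(
  eventName: string,
  properties: Record<string, string | number | boolean> = {}
): void {
  console.log(`tracked: ${eventName}`, properties);
}

// “Inch deep, mile wide”: every interaction becomes its own bare event.
// Easy to ship, but later you cannot ask which surface, plan, or variant mattered.
track("clicked_upgrade_button_homepage_v2");
track("Clicked Upgrade Btn Pricing Page");

// Taxonomy-driven: one well-named event, with properties carrying the detail.
// The same two interactions now support segmentation and comparison.
track("Upgrade CTA Clicked", { surface: "homepage", plan: "pro", variant: "v2" });
track("Upgrade CTA Clicked", { surface: "pricing-page", plan: "pro", variant: "control" });
```

With the second style, a product manager can segment upgrade clicks by surface or experiment variant instead of juggling dozens of one-off event names, which is exactly the decision quality that the “inch deep, mile wide” approach gives up.
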
Example #1 is too rigid. Example #2 is limited by data quality. Example #3 fails to scale data literacy. In the fast-paced and unpredictable world of product and growth, we release new experiments and updates continuously. Our chosen approach needs to keep up without becoming cluttered and unusable. In short, you need sustained adaptability, solid data, and data literacy.

“Analytics, unlike reporting, is meant to be interactive, which requires flexibility and high quality data,” notes product management consultant Saeed Khan. “That flexibility is sorely lacking in many internally built and externally purchased solutions. You’re forced to converge on questions, instead of leaving the door open to new areas of curiosity. Or be flexible and surface-level. This just doesn’t work for product teams.”

But the problems go a little deeper. Building product is a team endeavor. What does self-service look like for a whole team instead of a lone business user? The team needs the freedom to instrument and track new events without worrying about burdening another team. They also need the freedom to access the insights, do exploratory analysis, manage data, collaborate on insights, plan experiments, and take action.

Even this individual-team view is a little restrictive. Consider that most organizations do have a “data team,” albeit one that is overworked and spread thin. The dominant model is transactional: teams submit questions and requests, and the data team does its best to answer or deflect them. As a response to being overworked, the data team creates “self-service” options to get some breathing room. But this is a poor trade-off.

“Self-service as a curiosity-spark, question finder, and idea-generator is a very real, valuable thing,” explains Grant Winship, Analytics Engineer at dbt Labs. “But I think it must be paired with a product mindset on the data team. PMs and business users must be ready for their questions and analytics forays to be synthesized, interrogated, and selectively developed.” Collaboration and education matter.

For Elena Dyachkova, Senior Product Analytics Manager at Peloton, the education component is the true “elephant in the room.” A former sports journalist, she likens scaling data literacy to educating athletes on the general principles of strength training and “finding unique adaptations for each athlete’s anatomy.” The appropriate use of data, frameworks for setting KPIs, and the business context behind certain metrics are “not really in scope for a single SaaS tool but are the foundation for all other data literacy pieces working.” While we see analysts using Amplitude to help in these education efforts, it still boils down to a concerted, focused effort.

To us at Amplitude, this more team-oriented, collaborative, and impact-focused approach is self-service, not just access to a dashboard. The idea of accomplishing things “without the help of engineers or data people” is pervasive, but it can reinforce silos and undermine outcomes. Self-service should enable collaboration, not hinder it.

At Amplitude, we try to take a more holistic view of self-service analytics. Our product philosophy is to:

  1. Encourage curiosity. First-pass questions are rarely the best questions. Exploration inspires new questions. We help unlock the long tail of insights, where the true gems exist.
  2. Treat self-service as agency. Self-service is about removing dependencies and driving impact. It is not about pushing the burden around so that some team members enjoy easy access while other team members do the grunt work.
  3. Scale data literacy. Can an analyst and a product manager pair and learn together? Can someone start an analysis, make a notebook, and then share it with someone more skilled to get feedback?
  4. Bias toward action and impact. By supporting testing, recommendations, personalization, and feature flagging, we get teams close to the action. At the end of the day, the goal is to build a system that creates value for customers and the business, not to run reports.

We believe that self-service fits into a virtuous cycle: access and usable data inspire curiosity, and curiosity coupled with collaboration and data literacy yields better decisions, actions, and impact.


About the Author
John Cutler
Former Product Evangelist, Amplitude
John Cutler is a former product evangelist and coach at Amplitude. Follow him on Twitter: @johncutlefish
