How Amplitude Taught AI to Think Like an Analyst
Our AI doesn’t just find answers; it finds new questions.
Every analyst knows what it’s like to open a dashboard, spot a spike (or drop), and dig in to figure out what happened. Most of us will segment the data a dozen different ways, checking whether the change traces back to a particular user attribute. The next step might be to examine recent experiments or look at new releases. Like any good private investigator, analysts look at things from a zillion different angles and use each new answer to determine which question to ask next. Every move builds on information learned along the way.
Unfortunately, AI tools don’t instinctively operate this way. They are good at running a single query in isolation or a fixed flow to produce an answer. But they inherently treat analysis as a one-off request rather than an ongoing investigation. AI models don’t come trained to conduct iterative deductive reasoning based on business context. They don’t string insights together to turn small discoveries into bigger “aha” moments. AI answers often sound confident, but they can be shallow or incomplete because they don’t solve problems the way humans do.
At Amplitude, we changed that by building something completely new. We wanted to teach our system to think like an analyst, finding insights and independently determining the next question to ask. We built it right into our platform. It’s not an add-on or an expensive bonus feature. It’s a baked-in part of our standard analytics tool.
Understanding the analyst workflow
To build an analysis engine that works like an expert analyst, we realized we needed to translate human intuition into a structured process. Amplitude has more than a decade of experience working with some of the best analytics minds, plus billions of data points about how PMs and analysts uncover insights on our platform. We poured that expertise into the product in a way no one else can match.
Analysts do not think in formal steps, but their work actually breaks down into a series of organized, sequential decisions. Each step has an input, an output, and a purpose, even if it feels instinctive on the surface.
Take a simple example: an analyst sees a metric move and wants to know why. Before they can determine a cause, they have to:
- Decide which dimensions matter
- Group the data by those dimensions
- Rank those groupings by impact or unusualness
- Inspect the top data points and determine whether they explain the change
That sequence is not vague at all once you write it down. It is a simple subroutine. You can give it a name, define its inputs/outputs, and call it when you need to figure out what is causing a change.
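To make that concrete, here is a minimal sketch of what such a subroutine could look like. Everything in it, from the function name to the simple count-delta scoring, is our illustration rather than Amplitude’s actual implementation:

```python
from collections import Counter

def explain_metric_change(baseline, current, dimensions, top_k=3):
    """Toy version of the loop above: decide -> group -> rank -> inspect.

    baseline/current are lists of event dicts, e.g. {"country": "US"}.
    The dimensions argument is step 1: deciding which dimensions matter.
    """
    findings = []
    for dim in dimensions:
        before = Counter(e.get(dim, "unknown") for e in baseline)  # step 2: group by dimension
        after = Counter(e.get(dim, "unknown") for e in current)
        for segment in set(before) | set(after):
            delta = after[segment] - before[segment]  # step 3: score each segment's shift
            findings.append({"dimension": dim, "segment": segment, "delta": delta})
    findings.sort(key=lambda f: abs(f["delta"]), reverse=True)  # step 3: rank by impact
    return findings[:top_k]  # step 4: inspect the top data points
```

A production system would score unusualness against historical variance rather than raw deltas, but the shape of the routine, named, typed, and callable on demand, is the point.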
We quickly realized that almost every analyst workflow can be treated this way. We could break it down, codify it, and make it specific. That gave us the blueprint to design our system.
How Amplitude AI works under the hood
To bring this blueprint to life, we built a central agent that acts as a coordinator. Its job is to break a question into pieces, route each piece to the right tools, collect the results, and pull everything together into a narrative that makes sense.
Here’s an overview of the types of tools the central agent orchestrates (a sketch of how they fit together follows the list):
- Search tools are responsible for finding context. When the agent suspects an experiment might be relevant, it uses a search tool to fetch active experiments for the appropriate time range. When reports or data annotations made by your colleagues might matter, it looks those up too. Analysts rarely rely on isolated numbers. They always ask what else was happening at the time of an event. The system we built needs to do the same.
- Analysis tools do the heavy lifting on the data itself. They run segmentations, detect anomalies, compare time ranges, and measure lift or drop across different cohorts. These tools mirror the operations analysts perform again and again, but they operate as subroutines that can be arranged according to the agent’s instructions.
- Synthesis tools take everything that has been gathered and make sense of it. They analyze patterns, annotations, and experiments, and turn that information into hypotheses, explanations, and structured reports. They decide which potential explanations are the strongest and how to present that reasoning to a human in a way that feels trustworthy.
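To show how these categories fit together, here is a heavily simplified sketch of the coordinator pattern. The class, the tool names, and the fixed three-phase flow are our assumptions for illustration; the real agent plans its tool calls dynamically:

```python
from typing import Callable

class Agent:
    """A toy coordinator that routes sub-tasks to registered tools."""

    def __init__(self):
        self.tools: dict[str, Callable] = {}

    def register(self, name: str, fn: Callable) -> None:
        self.tools[name] = fn

    def investigate(self, question: str) -> str:
        # Search tools: gather the surrounding context first.
        context = self.tools["search_experiments"](question)
        context += self.tools["search_annotations"](question)
        # Analysis tools: do the heavy lifting on the data.
        results = self.tools["segment_and_rank"](question, context)
        # Synthesis tools: turn the evidence into a narrative.
        return self.tools["summarize"](question, context, results)

agent = Agent()
agent.register("search_experiments", lambda q: [f"experiments live during: {q}"])
agent.register("search_annotations", lambda q: [f"annotations near: {q}"])
agent.register("segment_and_rank", lambda q, ctx: ["iOS cohort up 40%"])
agent.register("summarize", lambda q, ctx, res: f"Likely driver of '{q}': {res[0]}")

print(agent.investigate("Tuesday signup spike"))
```

The registry pattern is what makes the library idea in the next paragraph work: any codified analyst behavior can be dropped in as another tool.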
Amplitude has a long history of observing how real analysts talk and work, and that history became an advantage when we chose which behaviors and patterns to model. We turned those recurring behaviors into subroutines that execute common tasks, and those subroutines became a library the agent can arrange and call.
The resulting system does not just run queries to answer basic questions. Instead, it follows the same process a human analyst would, so the answers it uncovers are as complete as the ones an expert would find.
Why chaining queries matters
Chaining is not a nice-to-have. It is the heart of the analysis process.
No analyst expects to be right on their first try. They start with a hypothesis. They test it. They learn something. Maybe that learning undermines the original idea. Maybe it strengthens it. Either way, it uncovers a new twist that changes what the analyst looks at next.
We wanted our system to behave in the same way. After it runs a first pass to analyze data, it shouldn’t stop. It should look at what it found and ask what to do next. If nothing interesting shows up in device type, it should move on to geography. If it notices that a spike appears only in a specific cohort, it might check whether there were experiments or campaigns targeted there. Each tool call feeds into the next, so each new discovery directs the ever-evolving analysis path.
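In code, that chaining behavior amounts to a loop where each finding can push new questions to the front of the queue. This sketch, with its made-up tool kinds and a simple “interesting” flag, is our illustration of the pattern rather than Amplitude’s actual logic:

```python
def chained_investigation(run_tool, dimensions, max_steps=6):
    """run_tool(kind, target) -> {"interesting": bool, "segment": ..., "note": ...}"""
    evidence = []
    queue = [("segment_by", dim) for dim in dimensions]  # initial hypotheses
    while queue and len(evidence) < max_steps:
        kind, target = queue.pop(0)
        finding = run_tool(kind, target)  # one tool call
        evidence.append(finding)
        if finding["interesting"]:
            # A spike isolated to one cohort? Check whether experiments
            # or campaigns were targeting it before moving on.
            queue.insert(0, ("check_experiments", finding["segment"]))
            queue.insert(1, ("check_campaigns", finding["segment"]))
    return evidence
```

If segmenting by device type turns up nothing, the loop simply proceeds to geography; if it finds a cohort-specific spike, the follow-up checks jump the queue, which is exactly the feed-forward behavior described above.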
Real-life analysis works this way. The best insights almost never come out of a single query. Instead, analysts have to combine sequential discoveries to piece together what actually happened. Our goal was to build AI that could work in the same way: exploring angles, looking for patterns, and chaining together insights to get to the truth.
Reimagining the way AI finds answers
Building something that operates like an analyst took more than a good model. It meant replicating the entire analysis process itself. We had to break down the way analysts investigate problems into clear steps that a system could follow, and then build tools to support these workflows.
Most AI tools answer one question and stop. At Amplitude, we designed our AI model to treat analysis more like a conversation, where each step informs the next. As a result, every Amplitude user doesn’t just get a quick answer; they get an expert analyst helping them out every step of the way.

Jacob Newman
Senior Product Manager, Amplitude
Jacob is a product manager at Amplitude, focused on the core analytics product. He began his career at startups in the ed-tech and recruiting space, where he learned to build products informed by data. Outside of work, you’ll find him listening to podcasts or getting lost in a sci-fi or fantasy novel.