Amplitude AI Builders: Paul Hultgren Chats about AI Assistant
Building AI-powered chat that helps companies understand their customers
This post is part of our Amplitude AI Builder series. Each one will feature an Amplitude engineer discussing an AI product that they are building.
I’m always amazed at the way Amplitude’s builders find new ways to use data. When I heard that we were building an agent that could chat with customers, it seemed obvious. Of course Amplitude’s behavioral data is important context that a chat tool can use to personalize messages.
As I talked to the people who built that chat tool, I learned there’s more to it than that. It can also create data: it’s a direct source of user feedback that product and support teams can use to make improvements. It can redefine what a successful conversation looks like. This is bigger than just a chatbot: it’s a way to turn existing data into better customer experiences and smarter products.
To dive into the newest Amplitude product, I talked to engineering manager Paul Hultgren about how his team built AI Assistant, a customer chat and support agent that uses product data to detect and solve user problems.
Tell me what AI Assistant is.
At its simplest, it’s a chatbot builder that customers embed in their site. It’s a no-code builder that lets teams customize their AI chat however they want. They can change the behavior, they can bring in their own content, they can customize the look and feel.
If you really like using Amplitude Global Agent, this lets you build something like that without a whole team of amazing engineers. It's a click-based way to put that experience into your own product.
What makes AI Assistant different from other customer-facing chatbots?
What we see with our customers is a lot of experimentation with chatbots. Some companies want them to be super detailed, whereas others want them to be very concise. Some want them to be proactively helpful; others want them to stay within specific boundaries. There are a lot of factors to customize, and all of that is done via prompting. They see a blank text box and they type in how they want their chatbot to work. That’s intimidating. It’s also hard for teams to actually get what they want.
With AI Assistant, we replaced that blank box with some structure. There are options for things like tone of voice and answer length. You can even add context so your chatbot will understand acronyms that are common to a product or industry. It’s always editable, so you can run tests and iterate creatively. If you have a change in branding or you introduce new products or new acronyms, you can always update it.
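To make the idea concrete, a structured configuration like the one Paul describes might look something like the sketch below. The field names and the prompt-rendering step are hypothetical illustrations, not Amplitude's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical configuration for a no-code assistant builder.
# Field names are illustrative, not Amplitude's actual schema.
@dataclass
class AssistantConfig:
    tone_of_voice: str = "friendly"   # e.g. "friendly", "formal", "playful"
    answer_length: str = "short"      # e.g. "short", "detailed"
    glossary: dict = field(default_factory=dict)  # product/industry acronyms

    def system_prompt(self) -> str:
        """Render the structured options into a single prompt string."""
        terms = "; ".join(f"{k} = {v}" for k, v in self.glossary.items())
        return (
            f"Answer in a {self.tone_of_voice} tone. "
            f"Keep answers {self.answer_length}. "
            f"Known acronyms: {terms or 'none'}."
        )

config = AssistantConfig(glossary={"MAU": "monthly active users"})
print(config.system_prompt())
```

The point of the structure is that each option stays independently editable, so updating the glossary after a rebrand or a new product launch doesn't require rewriting one big free-form prompt.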
How did AI Assistant go from an idea into a live product? Tell me the story.
This one has a pretty long history. It started at Command Bar before we got acquired by Amplitude. The first roots of this product were in early 2023. It started as this widget where you could bring in your documentation and search through it. We saw AI taking off and we wanted to make something that would use AI to search through docs and give people answers, but give them a good experience in the product.
This is something I would say is definitely a theme of the engineering culture at Command Bar and Amplitude. We know our customers already have this information in their documents. Is there some way we can be crafty and repurpose that for something more?
At first, it started as a way to ask a doc a question. The universe of text that the AI would analyze was just that one document. You could just ask a question and get a summary. Then it grew to search across multiple documents. At the start, I don’t think anyone thought this was going to be its own standalone product, but as new tools showed up, we kept optimizing and it snowballed until it could have a full-on back-and-forth chat about information from anywhere.
How did Amplitude’s acquisition of Command Bar change the trajectory of AI Assistant?
We saw how powerful it could be when we plugged in data from Amplitude. Our customers could use AI Assistant as a mechanism for capturing data directly from their customers. It didn’t change the trajectory as much as make us double down.
Teams used to have to trawl through thousands of Session Replays to see where their customers were struggling. Now they can just put an AI Assistant in the product and their customers will just tell them what’s wrong. They’ll type it straight into the text box. It’s a really direct way to get feedback. It works in the other direction too. Teams can use the Amplitude data to personalize their assistants. It makes a lot of sense to put all this data together.
What’s the scope of AI Assistant? How wide can it go?
I would say it goes as wide as you’d like to configure it. At the very basic level, everyone can connect their documentation. As long as you regularly update your documentation, AI Assistant will always be able to pull the new information when your customers need it.
There’s another level too, where it actually does things on behalf of the user. This is extremely useful to support teams. For example, it can walk users through the steps they need to take to perform common actions. It can execute these mechanical, on-rails flows that the support team has to handle thousands of times every day.
For example, if someone asks about getting a refund, AI Assistant can kick off a flow that automates the refund workflow. It could ask for a customer’s order number, fetch the status of their order via an API, confirm their eligibility, and tell them where to click to request their refund. It can handle all those steps in one conversation with no waiting or transfers. If someone wants to escalate it and talk to a human, they can still do that. But for the simple cases, AI Assistant can just help the user handle it on their own in minutes.
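The on-rails refund flow above can be sketched as a simple sequence of checks. The `fetch_order` function and the eligibility rule here are hypothetical stand-ins for a customer's own order API, not anything Amplitude ships:

```python
# A minimal sketch of the on-rails refund flow described above.
# fetch_order and the 30-day eligibility rule are hypothetical
# stand-ins for a customer's own order system.

def fetch_order(order_number: str) -> dict:
    # Stand-in for a real API call, e.g. GET /orders/{order_number}.
    return {"number": order_number, "status": "delivered", "days_since_delivery": 5}

def refund_flow(order_number: str, refund_window_days: int = 30) -> str:
    order = fetch_order(order_number)
    if order["status"] != "delivered":
        return "Your order hasn't been delivered yet, so a refund isn't available."
    if order["days_since_delivery"] > refund_window_days:
        return "This order is outside the refund window. Connecting you to a human agent."
    return "You're eligible! Click 'Request refund' on your order page to finish."

print(refund_flow("A-1042"))
```

The escalation path is just another branch: whenever the flow can't resolve the case mechanically, it hands off to a human instead of guessing.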
What was the hardest part of building AI Assistant?
The biggest engineering challenge is that there’s no one correct solution. Every team has different customers and needs different things out of their chatbot. One wants long, thorough answers. Another wants short, precise ones. For some teams, a retrieval model works better, but for others it’s worse.
The whole product has to have a high degree of modularity, where we can swap things in and out for different customers. Everyone wants something different. There’s no blanket right answer for every company. There’s no one clear objective goal. So we built AI Assistant in a way that makes it look and work in different ways for different customers.
What’s an early takeaway from how customers are using AI Assistant?
Right away, I’m really happy to see the time and effort that they are investing in answer quality. It’s not going to be perfect every time, so teams are building a continual improvement loop: they review the answers and rank them in some form. It’s pretty cool to see companies really looking at data to find out what a good answer looks like.
There are a bunch of levers to pull to improve AI Assistant, but I would say the best one is actually updating the source content itself. That’s usually the biggest bottleneck. Companies can analyze the answers to find information that doesn’t exist in the docs. It ends up sorting information in both directions: it helps customers find exactly the information they need, and it helps companies find exactly the information that customers don’t have.
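One simple way to run the loop Paul describes is to flag questions where retrieval found no supporting content, since those are candidates for new documentation. The log format below is a hypothetical assumption, not a real Amplitude export:

```python
# Sketch of the feedback loop described above: flag questions where
# retrieval found no supporting content, as candidates for new docs.
# The answer-log format is a hypothetical assumption.

answer_log = [
    {"question": "How do I export a chart?", "sources_found": 3, "rating": "good"},
    {"question": "Can I use SSO with Okta?", "sources_found": 0, "rating": "bad"},
]

# Questions with zero retrieved sources point at gaps in the docs.
doc_gaps = [row["question"] for row in answer_log if row["sources_found"] == 0]
print(doc_gaps)
```

This is the "both directions" idea in miniature: customers get answers from the docs, and the company learns which answers the docs can't yet give.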
What’s next for AI Assistant?
I think one of our big goals is to try to productize this loop as much as possible. That means adding some ability to triage these questions as they’re coming in: tagging them, ranking them, scoring them, etc.
I also want to make a stronger link between AI Assistant and documentation. Because we’re plugged into the places that the documentation lives, I want to help make the process of updating documentation a nicer flow.
It’s really important to us to track resolution rate. Most AI assistants have a very simple metric: they basically give each conversation a good or bad rating based on a user score. For example, if I ask AI Assistant how to send an invitation to a teammate, and it sends me information that doesn’t help, I might just close the conversation right away. I don’t give a thumbs down, I just move on. So the system scores the chat as successful.
We want to rethink resolution so the system would look at in-product activity and only score the conversation positively if I actually invite a teammate. It’s not about user rating, it’s about whether I actually did what I wanted to do. Amplitude’s product data gives AI Assistant the ability to measure new things. I think it’s going to change a lot about how teams think about chat success.
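Outcome-based resolution could be sketched like this: a conversation only counts as resolved if the user later performs the action they asked about, regardless of whether they left a rating. The event names and the intent-to-event mapping are illustrative assumptions:

```python
# Sketch of outcome-based resolution scoring: a conversation counts
# as resolved only if the user later performs the action they asked
# about, not merely because they didn't leave a thumbs-down.
# Event names and the intent mapping are illustrative assumptions.

INTENT_TO_EVENT = {"invite_teammate": "teammate_invited"}

def resolved(conversation: dict, later_events: list[str]) -> bool:
    target = INTENT_TO_EVENT.get(conversation["intent"])
    return target is not None and target in later_events

convo = {"intent": "invite_teammate"}
print(resolved(convo, ["page_viewed", "teammate_invited"]))  # True
print(resolved(convo, ["page_viewed"]))                      # False
```

In the invitation example from the interview, the silent close that the thumbs-based metric scored as a success would correctly score as unresolved here, because no `teammate_invited` event ever follows.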
Ready to try out a better kind of chatbot? Learn more about Amplitude AI Assistant.

Adam Bonefeste
Senior Manager, Content Marketing, Amplitude
Adam is a senior content marketing manager at Amplitude. He writes about how data teams can use technology to answer questions about their customers and their products.