For Frank and Eric, AI doesn’t just help you build faster or more easily—it can also help you build what actually matters to your users. At their startup Inari, Frank and Eric built AI that collected, synthesized, and surfaced essential customer feedback insights, closing the distance between dev teams and users. Now at Amplitude, they continue to use AI to unite the quantitative, the qualitative, and a bit of the playful, so anyone can build meaningful products.
Keep reading for Frank and Eric’s takes on their favorite AI trends, favorite fictional AI inspirations, and the favorite feedback they themselves have ever gotten.
We’ve had some new additions to the Amplitude team—and it’s time to throw them in the spotlight! In our Meet the Team series, hear from the leaders who guide Amplitude’s strategic direction, cultivate innovation, and empower us to help customers build better products and experiences.
What first got you excited about using AI to make customer feedback easier?
Frank: I started my career with a few analytics and product roles at Amazon, Opendoor, and Dapper Labs, and I kept having to build systems to capture quantitative metrics and qualitative feedback and share those findings with my team. At Opendoor, I manually reviewed feedback about home pricing and quality to help calibrate our pricing strategies. At Dapper Labs, I spent hours per day on Twitter and Discord, engaging with collectors, capturing ideas, and gauging sentiment around product launches. It was high-value work but extremely manual.
So when LLMs started blowing up in 2023, I made a note that maybe we could automate this manual process that I used to run.
The idea of building this into Inari emerged naturally from customers after our initial YC launch. Early prospects, mostly PMs, designers, and CS managers, were all struggling to process high volumes of unstructured customer data and turn it into something useful. They were stitching together hacky, manual workflows using spreadsheets, ChatGPT, and Productboard, copying, pasting, and tagging everything by hand. As we spoke to more customers about the problem and prototyped, we got more and more excited about the quality of the results.
We fell in love with it even more as AI coding tools became more performant, because you could, in theory, build a self-reinforcing loop: use AI to ship a prototype, automatically capture feedback from customers, automatically tag and synthesize that feedback into recommendations for improvements, then send those instructions to a coding agent to make the fixes. That loop isn’t fully reliable yet, but we’re getting closer and closer.
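To make that loop concrete, here’s a minimal sketch of how the stages might hang together. Every name in it is a hypothetical stand-in rather than an actual Inari or Amplitude API; a real pipeline would call an LLM for the synthesis step and a coding agent for the dispatch step.

```python
# Hypothetical sketch of the self-reinforcing loop described above:
# collect feedback -> tag and synthesize -> hand the top issue to a coding agent.
# Every function here is a toy stand-in, not Inari's or Amplitude's actual code.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Insight:
    theme: str   # e.g. "report creation"
    count: int   # how many pieces of feedback mention it

def collect_feedback() -> list[str]:
    # Stand-in for pulling from real touchpoints (support logs, chats, calls).
    return [
        "The weekly report takes forever to create",
        "Please add a Slack integration",
        "Report creation is confusing",
    ]

def tag_and_synthesize(raw: list[str]) -> list[Insight]:
    # Toy keyword tagger; in practice an LLM would cluster and label themes.
    themes = Counter()
    for item in raw:
        text = item.lower()
        if "report" in text:
            themes["report creation"] += 1
        if "integration" in text or "slack" in text:
            themes["integration requests"] += 1
    return [Insight(theme, count) for theme, count in themes.most_common()]

def dispatch_to_coding_agent(insight: Insight) -> None:
    # Stand-in for turning a synthesized insight into a coding-agent task.
    print(f"Agent task: address '{insight.theme}' ({insight.count} mentions)")

if __name__ == "__main__":
    insights = tag_and_synthesize(collect_feedback())
    if insights:
        dispatch_to_coding_agent(insights[0])  # fix the loudest theme first
```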
Eric: In my first-ever software job, it was a little baffling and frustrating to rely on a bunch of PMs and customer success reps to hear about customer complaints. As an engineer, the prospect of being closer to the end user and knowing that my work actually mattered to people was exciting.
What’s a stand-out piece of customer feedback you’ll always remember?
Eric: The piece of feedback that will always stick with me came when an Inari customer reached out to tell us that our feedback summary emails weren’t getting sent. At the time, we were still looking for that one sticky feature that would have users returning to the tool every day; high-level customer feedback doesn’t change much day to day, so we were trying a bunch of things to see what would bring users back to Inari.
Our customer said their leadership team actually reviewed our summary emails every morning, so when the emails didn’t arrive, it disrupted their routine. Although it was stressful knowing something was broken in production (oops), it was the first time I really felt like I was adding daily value to people’s work.
Frank: To be honest, I don’t have a single stand-out piece of feedback that sticks in my memory since we’ve reviewed so many—but I will say it was really memorable to watch the themes in our own Inari instance ebb and flow as we collected more feedback and shipped new features.
We connected all our customer touchpoints—user interviews, sales calls, support logs, chats, etc.—directly into Inari. This let us test whether the insights coming out of our pipeline matched the themes we felt we were hearing from customers. As we tuned the system, its accuracy improved dramatically. Month over month, we’d see recurring issues like duplicate insights, insight depth, integration requests, analytics, and report creation fall down the list as we shipped fixes.
It was both powerful and gratifying: we had a real-time pulse on customer sentiment, and we could watch those signals shift as our product evolved.
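As a rough illustration of that month-over-month view, here’s a small sketch that ranks feedback themes by mention count and reports how each one moved between two months. The theme names and counts are invented for the example; in the real system they would come from the synthesis pipeline.

```python
# Illustrative only: invented theme counts, not real Inari or Amplitude data.
# Ranks feedback themes by mentions and shows how each moved month over month.

def rank(counts: dict[str, int]) -> list[str]:
    """Themes ordered from most- to least-mentioned."""
    return sorted(counts, key=counts.get, reverse=True)

june = {"report creation": 40, "integration requests": 25, "insight depth": 12}
july = {"integration requests": 28, "insight depth": 14, "report creation": 6}

june_rank, july_rank = rank(june), rank(july)
for theme in june_rank:
    shift = june_rank.index(theme) - july_rank.index(theme)
    trend = "fell" if shift < 0 else "rose" if shift > 0 else "held steady"
    print(f"{theme}: {trend} ({june[theme]} -> {july[theme]} mentions)")
```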
Give us some of your OWN feedback. What do you think companies are getting right with AI right now? What are the features you find most helpful and enjoyable?
Frank: Oh man, I use so many AI tools—I love geeking out and trying different products. My number one has to be ChatGPT Deep Research, because it changed up the AI meta by making long-running tasks more acceptable across the industry. Whenever I’m working through a big research question, a PRD, or a business strategy decision, I’ll run the same prompt through the deep research modes of ChatGPT, Perplexity, Claude, and Gemini. Then I compare the outputs, see what each one missed, and assemble the best insights with my own line of thinking. It’s completely changed how I work.
The other tool I lean on heavily is Claude Code, along with a newer tool called Conductor, which lets you orchestrate multiple Claude Code instances in separate git worktrees so you can work on multiple features at the same time. It’s wild to see multiple long-running agents tackling different tasks, orchestrated in a single place.
I’m also a big fan of well-designed tools like Linear and Dia—they both have great design and a native AI chat experience. I even experimented with summarizing some Amplitude dashboards, and it worked surprisingly well for high-level analysis.
Last callout—I’m in love with Perplexity’s and Grok’s AI design primitives. Both tools have the best “thinking” affordances for users as the AI works on long-running research tasks. I get so much inspiration on design, UI and animation, and clean presentation formats from all of these tools.
Eric: I think the biggest (and most obvious) thing I’ve learned is that AI has to be brought to the places where people are already working. Cursor and Slack-based AI tools get the most usage because their AI features are embedded in existing workstreams.
Also, something else I think about often is the user interaction piece, which Cursor has done really well. The current philosophy of using AI to multiply your output isn’t about trusting the AI to get it right immediately; it’s more about making the work reviewable and easy to iterate on.
What are they getting wrong about AI? Or what would you like to see more or less of?
Eric: I think an early failure case with AI was setup. Obviously, the more context and information LLMs are given, the better they perform, but setup friction, like having to manually add data or connect integrations, can really ruin the magic of the work AI does. I was always enamored with the idea of Jarvis from Marvel, and now, with MCP and tighter integrations, that’s becoming a reality very quickly.
Frank: I don’t think people are necessarily doing anything wrong with AI—most of the energy and investment today feels like it’s going towards pragmatic, productivity-focused use cases, and that makes sense given the costs and expected returns on investment. I personally wish there were more playful, experimental AI consumer apps being built.
If I could dream up my ideal AI product, it would actually be inspired by a video game I loved as a kid: Mega Man Battle Network. In that game, you had a personal AI companion who was both your best friend and your digital sidekick. They had a bunch of digital skills and a personality all their own, and they explored the digital world and battled others on your behalf.
That’s the kind of direction I’d love to see more of: adding personality and delight into AI. Not just tools that make work faster, but sidekicks that make life more whimsical and joyful.
What sealed the deal for you about joining up with Amplitude? What are you looking forward to here?
Frank: When we started Inari, our mission was to help teams build better products by using LLMs to understand customer feedback. As we iterated on Inari, it became clear very quickly that feedback was just one piece of a huge customer journey.
There were so many types of analysis and actions we wanted to experiment with, but not having access to sources like product analytics data, session replays, or in-app surveys really limited the scope of what we could try. So we were excited when Amplitude reached out since they had basically everything we felt we were missing.
Eric: Two things for me.
One, reading some of Amplitude’s early engineering blog posts was fun and got me excited about the engineering on the team. I think engineering blogs are always a good way to gauge a team’s engineering strength.
Two, Amplitude is in a very advantageous position for the new wave of AI and LLMs, given its strong data collection product and experience. As AI strengthens coding and engineering, the obvious next workstream it will enhance is decision making. Having worked on qualitative data synthesis at Inari, I’m super excited to see how we can combine that with the quantitative data Amplitude specializes in.
Did Frank and Eric’s feedback strike a chord? Check out Amplitude to start collecting your own qualitative and quantitative insights. And if you’d like to work alongside some of our new leaders, visit our careers page to see how you can make an impact at Amplitude.