Top Five User Research Pitfalls and How to Avoid Them

Still running user research the old way? Learn five common pitfalls holding teams back.

Perspectives
November 11, 2025
Carmen DeCouto
Group Product Marketing Manager

In the last decade, user research has moved from outside agencies to in-house teams. It’s now (mostly) embedded in product and design organizations and is a critical part of the development process. The goal? To get to the why behind user behavior in order to make products better, create more seamless digital experiences, and turn casual users into brand loyalists.

While most product teams engage in user research in some form today, they often do so in an ad hoc, episodic manner. Our new guide introduces a practical framework to scale UX efforts and transform workflows, plus best practices for research mastery from seasoned experts.

But on the flip side of every best practice is a pitfall. Let’s explore five user research mistakes and how to avoid them.

1. Believing that testing and research take a long time

Business teams have historically (and erroneously) perceived UX research as a delay in their development processes—but it doesn’t have to be. “I can get meaningful qualitative insights from talking to only five people that could help shift or change things in a drastic way,” says Shawnda Williams, a UX and Product Strategy Consultant. While large sample sizes are valuable and statistical rigor matters for major decisions, not every change warrants a full-scale research cycle—and just a few thoughtful conversations can reveal what numbers can’t.

Moreover, modern technology, particularly AI, makes everything faster. In the past, getting enough contextual insight to really understand your customers involved watching them interact with your product in real time (in person or virtually) and conducting focus groups, surveys, and interviews. Though the insights gleaned from these methods are invaluable, they’re a heavy, time-consuming lift.

But AI products have vastly scaled our ability to gather and synthesize qualitative insights. They can ingest, comb through, and surface patterns in data that would have taken UX researchers and product managers hours, days, or even weeks to analyze. Recognizing that research doesn’t have to take a long time, more teams are shifting from episodic to continuous discovery. Lightweight, continuous research loops emphasize fast learning over perfect research and help UX and product teams treat every launch as an experiment, not a final destination.

“People have to dispel the idea that UX is just a phase. It’s not a box you check. You have to look at it as a continuous loop of ongoing interactions and checkpoints to make sure that you’re actually addressing the real problem.”

—Shawnda Williams, UX and Product Strategy Consultant

2. Having confirmation bias

If you’re asking customers questions to validate an established perspective, you’re missing the point. But it happens all the time. Teams make up their minds on what they want to do, then look for qualitative data to back up their decision, help prove ROI, or justify engineering time. That’s confirmation bias.

“Do you think the new layout is better than the old one?” “Did you find this workflow easier to follow?” “How helpful was this feature?” “You didn’t have any trouble checking out?”

Avoid that behavior. Though no one is immune to biases, there are ways to reduce their impact when gathering user feedback and running experiments. Understanding bias and mitigation strategies can help you plan, carry out, and interpret feedback and experiments more reliably.

  • Watch out for leading questions. Avoid hinting at the direction your product team is currently leaning, what you’ve already mocked up, what other customers have said, etc. Ask neutral, open-ended questions that prompt open, honest feedback.
  • Stay open-minded. Asking open-ended questions might lead the responses in an unexpected direction. That’s OK. Let the users keep talking. Sometimes the most insightful discoveries surface at the most unexpected times.
  • Be ready to iterate. The goal is to get real, rich qualitative insights that drive meaningful decisions about what, how, and why you build products and campaigns—even if they don’t validate your hypotheses or mean starting back at square one. Better to iterate and refine now than launch to lackluster results, right?

3. Relying only on prompted feedback

User feedback involves collecting opinions and reviews from customers to better understand their preferences, problems, and complaints. There are generally two main types: prompted and unprompted (organic).

  • Prompted feedback: Focus groups, interviews, surveys, and usability tests generate prompted feedback. You proactively set up the mechanism for customers to tell you about their experience and solicit their perspectives. Though this data is invaluable, it’s often skewed or incomplete. Prompted feedback channels can inadvertently create an artificial environment in which users tell you one thing but do something completely different when they’re on their own.
  • Unprompted feedback: Organic user feedback via support tickets, call transcripts, in-app reviews, etc., can give you a much more authentic idea of product use and user frustration and supplement the data you proactively collected.

“We talk about qualitative data as something that UX teams have to go get,” says Amplitude’s Head of AI and founder of Kraftful, which was acquired by Amplitude. “But if you have a product that’s already being used, you have a ton of qualitative insights, such as app reviews, support tickets, and sales call recordings. There are lots of product insights in that data that most teams don’t take into account.”

Today’s customers can engage with your brand across many different channels (and talk about your brand on even more channels behind your back). Every touchpoint on every channel represents a rich qualitative data point, but many product and UX teams overlook that feedback.

AI products can take the messy, unstructured world of user feedback and turn it into clear, actionable insights that teams can use to design and build better products.
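To make that concrete, here is a minimal sketch of automatic theming in Python. It assumes unprompted feedback (support tickets, app reviews, call notes) has already been exported as a list of strings, and it uses off-the-shelf TF-IDF clustering from scikit-learn rather than any specific AI product, purely to illustrate the idea of collapsing messy text into countable themes.

```python
# Minimal sketch: group unprompted feedback into rough themes.
# Assumes feedback has already been exported as plain strings;
# an AI/LLM pipeline would replace this in a real workflow.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

feedback = [
    "Checkout button does nothing on my phone",
    "Love the new dashboard, super clear",
    "Can't submit the signup form on mobile",
    "Support took two days to answer my ticket",
    "The form keeps erroring out when I tap submit",
    "Dashboard charts load slowly on the free plan",
]

# Turn free text into TF-IDF vectors, then cluster into a few themes.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(feedback)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Print each theme with its most characteristic words and its comments.
terms = vectorizer.get_feature_names_out()
for cluster in range(kmeans.n_clusters):
    top_terms = kmeans.cluster_centers_[cluster].argsort()[::-1][:3]
    print(f"Theme {cluster}: {', '.join(terms[i] for i in top_terms)}")
    for text, label in zip(feedback, kmeans.labels_):
        if label == cluster:
            print(f"  - {text}")
```

An LLM-based tool would label these themes in plain language rather than keyword lists, but the underlying move is the same: thousands of free-text comments become a handful of buckets you can count and track.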

4. Thinking user feedback can’t be measured

Surveys, ratings, reviews, and interviews provide quantifiable signals of real-time customer sentiment. The most common examples are key performance indicators (KPIs) like Net Promoter Score (NPS) and customer satisfaction (CSAT).

However, most qualitative signals can also be distilled into dashboards and reports, and AI can help summarize and quantify themes from qualitative data. For example, AI tools can analyze thousands of open-text survey responses or app store reviews to surface the top recurring pain points, sentiment trends, and emerging feature requests. Similarly, AI can categorize qualitative feedback into key themes—like usability issues or feature gaps—and correlate them with KPIs such as retention or conversion to quantify business impact.
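To give a rough sense of how qualitative signals become numbers, here is a small sketch in plain Python with hypothetical survey rows: it computes an NPS score from 0–10 ratings and tallies how often each tagged theme appears, so themes can be tracked over time like any other KPI.

```python
# Minimal sketch: quantify tagged feedback into an NPS score and theme counts.
# The survey rows and theme tags below are hypothetical illustrative data.
from collections import Counter

survey_rows = [
    {"score": 9, "themes": ["pricing"]},
    {"score": 3, "themes": ["checkout bug", "mobile"]},
    {"score": 10, "themes": []},
    {"score": 6, "themes": ["onboarding", "mobile"]},
    {"score": 2, "themes": ["checkout bug"]},
]

# NPS = % promoters (scores 9-10) minus % detractors (scores 0-6).
promoters = sum(1 for r in survey_rows if r["score"] >= 9)
detractors = sum(1 for r in survey_rows if r["score"] <= 6)
nps = 100 * (promoters - detractors) / len(survey_rows)
print(f"NPS: {nps:.0f}")

# Count how often each theme shows up so it can be tracked like any other KPI.
theme_counts = Counter(t for r in survey_rows for t in r["themes"])
for theme, count in theme_counts.most_common():
    share = 100 * count / len(survey_rows)
    print(f"{theme}: {count} mentions ({share:.0f}% of responses)")
```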

UX teams need to “really bear-hug this data,” as Shawnda puts it. “They need to be monitoring data the same way you’d expect PMs to monitor data. They should be aware of potential problem signals and be able to meaningfully engage in conversations about the data.” This, she says, gives UX teams a seat at the table.

5. Making reactive fixes

Being an effective user researcher requires you to channel your inner detective. It’s about resisting reactive fixes and embodying an investigative mindset.

“Too often, teams jump straight from feedback to solutions,” says Shawnda. “Someone says ‘this form doesn’t work’ and they immediately start tweaking the UI or adding help text.” True transformation happens, she says, when feedback becomes the starting point for inquiry, not action.

Use all available quantitative and qualitative channels to investigate and understand the root cause of friction. Each data source is an important piece of the customer-understanding puzzle, so pairing user feedback and other qualitative insights with product analytics is critical.

Let’s use Shawnda’s example of a broken form to see how these data points come together to tell a complete story. With a reactive mindset, you might start making arbitrary form changes across web and mobile to try to fix the issue. But with an investigative mindset, you identify and fix the root cause of the issue the first time. And it’s a quick fix, too!

  • User feedback: Support tickets or online comments tell you the form isn’t working.
  • Behavioral analytics: Your product data reveals how many customers abandon the form. Turns out, you have a mobile-specific issue affecting 15% of users (a simple version of this calculation is sketched below).
  • Session Replay: You look at heatmaps and replays specific to mobile users and see exactly how they interact with the form. These insights show you that users are tapping a button that isn’t registering clicks.
  • Outcome: You hypothesize that fixing the button will eliminate this friction and reduce form abandonment.

Although this example included four steps, AI can accelerate or even automate this process.
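For the behavioral-analytics step in the example above, the underlying arithmetic is straightforward. The sketch below uses made-up form_start and form_submit events to compute abandonment by platform; in practice, a product analytics tool or funnel report would do this aggregation for you.

```python
# Minimal sketch: form abandonment rate by platform from raw events.
# The events below are made up; a product analytics tool (or a funnel
# query) would normally handle this aggregation.
from collections import defaultdict

events = [
    {"user": "u1", "platform": "mobile", "event": "form_start"},
    {"user": "u1", "platform": "mobile", "event": "form_submit"},
    {"user": "u2", "platform": "mobile", "event": "form_start"},
    {"user": "u3", "platform": "mobile", "event": "form_start"},
    {"user": "u4", "platform": "web", "event": "form_start"},
    {"user": "u4", "platform": "web", "event": "form_submit"},
]

# Track which users started and which finished, per platform.
started = defaultdict(set)
submitted = defaultdict(set)
for e in events:
    if e["event"] == "form_start":
        started[e["platform"]].add(e["user"])
    elif e["event"] == "form_submit":
        submitted[e["platform"]].add(e["user"])

# Abandonment = started the form but never submitted it.
for platform, users in started.items():
    abandoned = users - submitted[platform]
    rate = 100 * len(abandoned) / len(users)
    print(f"{platform}: {rate:.0f}% abandoned the form")
```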

Your user research cheat sheet

Summarizing the pitfalls we’ve discussed and how to avoid them:

Do this…

  1. Use AI products to adopt more lightweight, continuous research loops.
  2. Ask unbiased, open-ended questions.
  3. Gather and analyze organic user feedback across channels to inform decisions.
  4. Regularly track and monitor quantifiable user feedback signals.
  5. Adopt an investigative mindset to get to the root cause.

Not that…

  1. Believe that user research takes a long time.
  2. Solicit feedback to just confirm your existing opinion or idea.
  3. Rely solely on prompted user feedback.
  4. Think user feedback can’t be measured.
  5. Make reactive fixes.

Get to the why behind user behavior

User research is fundamental to everything product designers and builders do. Avoiding these pitfalls will help ensure you’re on the right track to uncovering the why behind user behavior. And while doing so used to be time-consuming, siloed, and unscalable, Amplitude can help you make it easier, faster, and more complete.

For more than a decade, Amplitude has helped teams understand exactly what users are doing in their product—what they like, where they’re getting stuck, and what keeps them coming back. In the last several years, Amplitude has also invested in helping you understand why they’re doing it, through qualitative capabilities like Session Replay, with more on the way.

Amplitude combines the power of quantitative user behavioral data and qualitative user feedback, all in a single platform. Plus, AI capabilities accelerate your research workflow, transforming what was once a slow, linear process into a scalable, continuous loop of discovery and action.

Discover more best practices to uplevel your approach to user research in our new guide.

About the Author
Carmen DeCouto
Group Product Marketing Manager
Carmen DeCouto is a Product Marketing Manager at Amplitude, passionate about helping digital businesses connect data to growth. Before joining product marketing, she led a growth team focused on monetization lifecycle and startup programs—bridging the gap between user activation, engagement, and revenue.