
Why Companies Ignore User Research—and What It’s Costing Them


Most companies that skip user research don't do it deliberately. They do it because of internal pressures, misaligned incentives, and, often, a fear of hearing something that disrupts a settled narrative.

 


 

I've worked across marketing, product marketing, and product management. Across every role, one pattern holds: every company claims to care about its customers, but many actively avoid structured conversations with them. The excuses vary. The underlying dynamics tend to be the same.

Here are four of the most common blockers I've seen, and what's actually driving them.

 

Why do companies over-rely on quantitative data instead of talking to customers?

Quantitative data is abundant, accessible, and comfortable. Most marketing and product teams are already living inside dashboards: website analytics, email metrics, heatmaps, product telemetry. When everything is measurable, it's easy to treat measurement as understanding.

The problem is that quantitative data tells you what is happening. It can't tell you why. You can see where users drop off, which features they ignore, and which campaigns fall flat. But without talking to real people, you're making educated guesses at best.

As Nielsen Norman Group's research notes, organisations that rely on A/B testing alone can run a long string of inconclusive tests, because quantitative methods can tell you which variant performed better but not why it did. Qualitative research closes that gap.

The underlying issue: Quantitative data feels safe to present. It's defensible in a meeting. Qualitative insight requires interpretation, and interpretation can be contested. Teams often default to what's least likely to be challenged rather than what's most useful.

How to move it: Position qualitative research as the missing context for data you already have. "We see a 60% drop-off at this step. Five to eight interviews would tell us why." That framing keeps leadership in their comfort zone while opening the door to a more complete picture.

 

Why do leadership teams assume they already know the customer?

This is especially common in founder-led companies. In the early stages, the CEO probably did know the customer better than anyone. But markets shift, customer expectations evolve, and what worked in year one rarely works unchanged in year five. The confidence built in those early years often outlasts its usefulness.

I've worked at companies where leadership dismissed research because they already knew the customer. In one case, I ran interviews to validate a new feature idea, and every single customer told me they didn't want or need it. Leadership ignored the findings, pushed ahead anyway, and the feature failed.

This isn't unusual. Fear of ego damage is one of the honest reasons organisations avoid research, particularly in founder-led environments where the initial product is treated as a great idea that can't be questioned. As Erika Hall, author of Just Enough Research, puts it:

 

"We don't have time to do research" is usually a synonym for "We can't risk learning something that takes out a load-bearing myth in our organisation."

 

The underlying issue: Research is risky when it might contradict a decision that's already been made, or an identity that's been built around a particular vision. The resistance isn't laziness. It's self-protection.

How to move it: Frame research as diagnostic rather than evaluative. "We expected X, but the results show Y. Let's find out why." You're not questioning the vision; you're troubleshooting a gap. If a leader still won't engage, let customers do the work. Tools like Dovetail allow you to surface real customer quotes directly from interviews. A customer saying "this isn't solving my problem" in their own voice carries more weight than any internal debate.

 

Why do companies talk to the wrong customers?

Some organisations are doing research — just not with the people whose behaviour actually matters. Leadership teams regularly gather insights from other executives, industry leaders, or senior contacts in the sector. This gives them a strong macro perspective. It doesn't tell them much about daily friction at the user level.

I once worked at a company where senior leaders were constantly networking with other executives and gathering industry insights. The macro picture they built was coherent. But features built from those conversations consistently missed the mark with actual users, because the people being consulted weren't the ones doing the work.

This is a structural problem, not a laziness problem. When research happens primarily through sales conversations, leadership meetings, or industry events, the insights that reach the product are filtered through multiple layers of interpretation, each with its own incentives and blind spots.

The underlying issue: Access bias. Leaders naturally have access to other leaders. Getting to end users requires deliberate effort and sometimes deliberate permission. Without a structured programme, research defaults to whoever is easiest to reach.

How to move it: Position end-user research as a complement to the strategic view, not a contradiction of it. "We have strong macro insight from leadership conversations. These five interviews would tell us if that translates to how the product lands day-to-day." Most leadership teams that value any research will accept this framing.

 

Why does research still miss the mark even when companies invest in it?

This is the quieter failure mode. The company has done research. They have findings. They ran interviews, maybe commissioned a report. But when the product launched, it didn't land.

I worked with a company that spent significant time developing a new product, backed by extensive external research. When it launched, it struggled to achieve product-market fit. After a while, I was brought in to take a closer look. Through customer interviews and data analysis, it became clear that the issue wasn't the product itself. It was misaligned positioning, messaging, and go-to-market strategy, caused by gaps in what the original research had actually tested.

This is where product research and messaging research diverge. A product can be validated on functionality and still fail because nobody has tested whether buyers understand why it matters, or whether the message reaches the right people at the right moment.

The underlying issue: Research is often scoped to answer the questions the team is already asking, rather than the questions the market is actually posing. Confirmation-oriented research produces findings that confirm what you wanted to hear.

How to move it: Define what decisions the research needs to inform before it starts. Test messaging assumptions alongside product assumptions. And build in at least some research designed to surface what you're not seeing, not just to validate what you already believe.

 

The real reason companies avoid research

The blockers above are real. But beneath most of them is a more fundamental dynamic: research is threatening.

Every product idea, every campaign, every positioning decision rests on assumptions. Until those assumptions are tested, they're intact. A product roadmap that hasn't been challenged by customer conversations is still a roadmap. Research introduces the possibility that it isn't.

For teams under pressure to ship, for founders emotionally invested in their initial vision, for senior leaders who've staked a budget on a particular direction, the prospect of learning something unwelcome is genuinely costly. It's much safer to proceed on instinct and optimise for confidence than to introduce evidence that complicates things.

This is the dynamic that produces organisations with dashboards, surveys, and NPS scores, but no real customer insight. Measurement is comfortable. Understanding is risky.

The companies that do research consistently, and do it well, have typically decided that the risk of not knowing is greater than the discomfort of finding out.

 

Where to start if research is undervalued in your organisation

You don't need a research budget or a dedicated researcher to begin.

Five to eight customer interviews are enough to develop sharper insight into how customers think about a decision, describe a problem, or evaluate a solution. That's a few hours of preparation, a week of scheduling, and a handful of conversations. The output is specific, quotable, and harder to dismiss than another internal hypothesis.

If leadership resists formal research, frame the first round as a quick diagnostic. Pick one decision that's currently being debated and use three to five interviews to answer it. The goal isn't to build a research practice overnight. It's to demonstrate that talking to customers is faster and cheaper than arguing about what they want.

 


 


FAQ section

Q: Why do most companies skip user research? Most companies skip user research because of time pressure, cost concerns, or a belief that they already understand their customers. Beneath these excuses, the deeper issue is often fear: research creates the risk of learning something that disrupts a settled plan or challenges a leadership narrative. Nielsen Norman Group identifies misaligned incentives as the real driver in larger organisations, where teams are rewarded for shipping fast rather than for building what users actually need.

Q: What does ignoring user research cost a business? More than most teams expect. The cost rarely shows up as a research line item; it shows up as rework, failed launches, and features nobody uses. Studies suggest that up to 95% of new products fail at market, with inadequate customer understanding cited as a primary cause. Building the wrong thing, discovering it's wrong, and rebuilding it is almost always more expensive than a few hours of customer interviews before development starts.

Q: What are the most common obstacles to user research in organisations? The four most common blockers are: over-reliance on quantitative data, leadership assuming they already know the customer, talking to the wrong people (executives rather than end users), and research methodology that's too narrow to surface disconfirming evidence. The NN/G research identifies a fifth: the misalignment of incentives, where teams are rewarded for building and shipping rather than for learning.

Q: Is qualitative research worth it if you already have analytics data? Yes. Quantitative data shows what is happening; qualitative research explains why. A 60% drop-off rate in your analytics tells you something is wrong. Customer interviews tell you what it is and whether it's fixable. The two methods answer different questions. Relying on analytics alone means making decisions based on behaviour without understanding the reasons behind it.

Q: How many customer interviews does it actually take to get useful insight? Research consistently shows that five to eight interviews are enough to identify the primary patterns in how customers think and describe a problem. You won't reach statistical significance, but qualitative research isn't designed for that. It's designed to surface motivations, language, and friction points that no dashboard can show you. Three to five interviews can be enough to answer a specific, scoped question.

Q: What is the difference between product research and messaging research? Product research validates whether a solution works for users. Messaging research validates whether buyers understand why it matters to them, how they describe the problem, and whether the positioning connects at the right moment in their decision process. A product can pass product research and still fail at market because the messaging was never tested. Both are necessary; few organisations invest in both.

Q: What's the best way to convince leadership to invest in user research? Frame it as diagnostic rather than evaluative. Instead of "we should do research," try "we expected X, but results show Y; five interviews would tell us why." This positions research as a tool for troubleshooting a specific gap, not a challenge to the overall direction. Letting customers speak for themselves in direct quotes, via tools like Dovetail, is often more persuasive than any internal argument.
