Stop avoiding customer research: It's costing your business
Customer research avoidance is usually a process problem, not a people problem. Here's what it costs — and how to fix the approach.
Most customer research doesn't fail because the questions were wrong or the sample was too small. It fails after the fieldwork is done — when findings land in a deck, get presented once, and quietly stop influencing anything.
Making research actionable isn't a delivery problem. It's a design problem. The conditions that determine whether findings get used need to be built into the research before it starts.
The most common version of this looks like: a team runs a solid piece of research, produces findings that are genuinely useful, presents them to stakeholders — and then watches as the decisions that follow ignore most of what was learned.
It's rarely because the research was bad. It's usually because no one agreed in advance on what would change based on what was found. Research that isn't connected to a specific decision before it starts tends to produce findings that feel interesting but don't create any obligation to act.
The other failure mode is timing. Findings that arrive after a decision has already been made, or after the budget has been allocated, don't change anything — regardless of their quality. The moment to act has already passed.
Both problems are solvable, but only if you address them at the planning stage, not the reporting stage.
The three questions that determine whether research will be used are rarely asked at the start. They should be.
What specific decision will this research inform? Not "we want to understand our customers better" — that's a direction, not a decision. A useful research objective sounds like: "We're deciding whether to reposition this offer for a different audience, and we need to understand whether that audience has the problem we think they have."
How will we know if the research has done its job? Define what a useful outcome looks like before you go into the field. If you can't describe what you'd do differently based on what you find, the research question probably needs sharpening.
Who needs to be involved for the findings to be acted on? The people who will have to change something based on the research — the person who owns the messaging, the product decision, the campaign — need to be part of shaping the questions. Shared ownership of the research design creates shared accountability for the findings.
Getting agreement on these three things before a single interview is scheduled is the most reliable way to ensure the research doesn't end up in a drawer.
The method follows the question, not the other way round.
Generative research is exploratory. It's appropriate when you don't yet know enough to form a hypothesis — when the goal is to understand a problem space, identify what's actually frustrating customers, or surface the language they use to describe their situation. The output is typically directional: patterns, themes, and a clearer picture of where the real problem lies.
Evaluative research is specific. It's appropriate when you have a hypothesis or a proposed solution and want to test whether it holds up. The output is a verdict: does this messaging land? Does this feature solve the right problem? Does this onboarding process make sense to someone who's never seen it before?
Knowing which type you're running matters because it changes what a useful output looks like — and therefore what counts as success.
The grand presentation model — research runs, findings are compiled, a polished deck is shared with leadership — is the format most likely to produce a one-off conversation and no lasting change.
The alternative is to make the process collaborative from the start. Stakeholders who've been involved in shaping the research questions are far more likely to engage with the findings, because the findings are answers to questions they personally asked.
Practically, that means sharing early and informally. A striking quote from an interview, shared in a team channel the day it was collected, lands differently than the same quote buried in a report two weeks later. Raw material that reaches people while a decision is still live is more useful than polished material that arrives after it's been made.
A shared repository — somewhere findings are stored and accessible, not locked in one person's folder — also helps. Tools like Dovetail are built for this, but a well-maintained shared document works for smaller teams.
It's also worth considering more interactive formats for sharing findings. Claude Artifacts, for example, can turn a set of research findings into a navigable tool — with tabs for themes, quotes, an empathy map, business implications — that stakeholders can explore themselves rather than read passively. The Plant Parent Paradox is an example of what that can look like in practice. The goal, whatever the format, is to make customer insight something the organisation can draw on continuously — not something that requires a formal project to access.
An insight is not the same as a finding. A finding is an observation: users drop off during onboarding. An insight is the explanation plus the implication: users drop off during onboarding because the setup process asks for information they don't have yet, which means the first session needs to be redesigned around what someone can actually do on day one.
The test of whether something is an insight rather than a finding: can the person reading it tell what they should do differently? If yes, it's an insight. If it's still just a description of what happened, it's a finding — and it needs another step before it's useful.
Useful research outputs are specific about who needs to act, what they need to do, and why it matters. A recommendation addressed to "the team" is easier to ignore than one addressed to the person who owns the relevant decision.
Does research usually surface more than a team can act on at once? It usually does. A well-run set of interviews will surface more opportunities than any team has the capacity to address immediately, which is why prioritisation is part of the delivery, not an afterthought.
A simple framework: assess each finding against likely impact and the effort required to act on it. The highest-value starting points are findings where the impact is meaningful and the action is relatively contained — a messaging change, a question removed from a form, a section rewritten on a landing page. These build momentum and demonstrate that the research is producing results, which makes it easier to resource the larger changes.
The ones to defer, not ignore, are findings that are important but require significant organisational change to act on. They belong in the research record and should inform longer-term planning — but trying to act on everything at once usually means acting on nothing well.
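The impact-versus-effort triage described above can be sketched in a few lines. This is an illustrative sketch only: the 1-to-5 scale, the thresholds, and the example findings are assumptions for demonstration, not part of any specific methodology from this article.

```python
# Illustrative triage of research findings by impact and effort.
# Scores (1-5 scale), thresholds, and example findings are assumptions.

findings = [
    {"finding": "Messaging misses the buyer's actual problem", "impact": 4, "effort": 2},
    {"finding": "Onboarding asks for data users don't have yet", "impact": 5, "effort": 4},
    {"finding": "Pricing page jargon confuses first-time visitors", "impact": 3, "effort": 1},
]

def triage(items, impact_floor=3, effort_ceiling=2):
    """Split findings into quick wins (act now) and deferred (record, revisit later)."""
    quick_wins = [f for f in items
                  if f["impact"] >= impact_floor and f["effort"] <= effort_ceiling]
    deferred = [f for f in items if f not in quick_wins]
    # Within each bucket: highest impact first, then lowest effort.
    order = lambda f: (-f["impact"], f["effort"])
    return sorted(quick_wins, key=order), sorted(deferred, key=order)

quick_wins, deferred = triage(findings)
for f in quick_wins:
    print("Act now:", f["finding"])
for f in deferred:
    print("Defer (keep in the record):", f["finding"])
```

The point of the split is the one the article makes: contained, high-impact changes go first to build momentum, while costly-but-important findings stay in the record to inform longer-term planning rather than being discarded.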
Ready to put this into practice? Download my free Research Toolkit, which includes a checklist for applying insights.
Need a strategic partner to help you turn customer insights into a clear growth plan? Get in touch to book a 20-minute intro call.
Why does customer research so often fail to change anything? Usually because no one agreed in advance on what would change based on the findings. Research that isn't connected to a specific decision before it starts produces findings that feel interesting without creating any obligation to act. The other common cause is timing — findings that arrive after a decision has already been made don't influence it, regardless of their quality.
What's the difference between a finding and an insight? A finding is an observation: users drop off at a particular point in the onboarding process. An insight is the explanation plus the implication: users drop off because the setup asks for information they don't have on day one, which means the first session needs to be redesigned. An insight tells the reader what to do differently. A finding just describes what happened.
How do you get stakeholders to actually engage with research findings? Involve them before the research starts, not after. Stakeholders who've helped shape the research questions are far more likely to engage with the answers. Sharing findings early and informally — a striking quote while an interview is still fresh, rather than a polished report two weeks later — also keeps research connected to live decisions rather than arriving after the fact.
What should a research plan include before fieldwork begins? At minimum: the specific decision the research will inform, what a useful outcome looks like, who needs to act on the findings, and how success will be measured. Without those four things agreed in advance, it's difficult to design research that produces actionable output — and easier for findings to be set aside once they're delivered.
How do you decide what to act on when research surfaces more than you can address at once? Assess each finding against likely impact and the effort required to act on it. Start with findings where the impact is meaningful and the action is contained — a messaging change, a question removed, a section rewritten. These build momentum. Defer, rather than discard, findings that require significant change — they belong in the research record and should inform longer-term planning.
What's the best way to store and share research findings across a team? A shared, accessible repository that anyone can draw on — not a folder owned by one person. Tools like Dovetail are built for this, but a well-maintained shared document works for smaller teams. The goal is to make customer insight something the organisation can access continuously, not something that requires a formal research project to surface.