I see it all the time in online forums and hear it from clients. There's a cynical belief that quietly derails projects and stifles innovation. It sounds like this:
"The hardest truth is people will lie to your face about wanting your product to be nice, then never use it."
Sentiments like this are dangerous because they feel true. They give teams permission to skip the crucial work of customer research and to operate on blind faith instead. The fear usually hides behind a few familiar excuses: the participants might be fake, we already know our customers, and interviews are just too awkward to run well.
But what if these fears aren't a reason to stop, but a sign that your entire process is broken?
When you use the fear of bad research as an excuse to do no research, you're not saving time or resources. You're making a direct trade-off against growth.
The scale of that trade-off is significant. According to Qualtrics, the global cost of bad customer experiences is estimated at $4.7 trillion in lost spending each year. Each piece of authentic insight you uncover is a chance to avoid contributing to that number.
The risk isn't that you'll get some "fake" feedback. The risk is that your avoidance guarantees you'll build something customers don't truly value.
The truth is, feedback only feels "fake" or "awkward" when the process is flawed. The problems you fear are the direct result of a broken approach.
This is a legitimate concern. There are valid questions about the quality of some paid research panels, with stories of people trying to scam the system to get paid. This is why screener questions and reputable recruitment methods are crucial.
However, most "fake" feedback isn't malicious. It’s the polite, useless answers you get from a flawed conversation. The problem isn't the participant; it's the process you put them through.
This is the expert’s trap. Deep industry knowledge is invaluable, but it also creates blind spots and biases. Without a structured plan for research, teams inevitably design studies that simply confirm what they already believe.
I once saw a company spend three years and a fortune building a fantastic software product, only to realise they'd been marketing it to the completely wrong audience. Their initial research was designed, unintentionally, to confirm their own assumptions about who the customer was. Their expertise blinded them to the real market.
This is perhaps the most honest excuse. Running a good interview is hard. The word "interview" itself puts people on the defensive, reminding them of a stiff, formal job interview where answers are prepared and rehearsed.
A great customer conversation is the opposite. It’s not a rigid Q&A; it's an exploratory process that requires empathy and the ability to think on your feet. It's less like a job interview and more like being a detective.
This is where most teams fail. They make a few common mistakes that guarantee an awkward, fruitless conversation: they ask leading questions, they treat the interview like a sales pitch, and they ask about hypothetical future behaviour instead of real past experience.
This final mistake is the most damaging, and it's what leads directly to the "fake" feedback everyone fears.
To fix the output, you have to fix the input. The solution is a mindset shift: stop acting like a salesperson trying to validate an idea, and start behaving like a detective looking for clues.
A good detective never starts an investigation blind. They begin with a case file. For customer research, that case file is your research plan. Before you talk to anyone, define what you want to learn, who you need to speak to, and the core assumption you are testing.
Central to this plan is your guiding question, or hypothesis. This is where many teams go wrong. They form a biased hypothesis and treat the research process as an experiment designed only to validate their existing idea.
A detective's mindset is different. You must start with a balanced, fair hypothesis, a core assumption that needs to be tested. But during the interview, your goal is not just to find evidence that proves you right. It's to be radically open to any clue, including the evidence that proves you wrong.
The most valuable clues aren't found by asking about the future. They're found by digging into the past. Users can tell you in excruciating detail about the hacks and workarounds they use to solve problems right now — the intern spending a full day wrestling with spreadsheets for a report, the fragile system of macros built to connect tools that were never meant to connect. The story of the workaround reveals the depth of the pain far better than any direct question ever could.
This approach, popularised by books like The Mom Test, relies on probing past behaviour rather than future hypotheticals. Here's how to transform common research questions:
| Instead of this (bad question)... | Ask this (great question)... | Why it works |
| --- | --- | --- |
| "Do you think our new dashboard is a good idea?" | "Tell me about the last time you had to prepare a marketing report. What was that process like?" | Focuses on a real, past event, not a future opinion. |
| "Would you pay $50/month for this feature?" | "What's your current budget for reporting tools? What do you already pay for to solve this?" | Grounds the conversation in actual behaviour and existing budgets. |
| "Do you like this design?" | "Walk me through how you currently handle this task. What parts of that process are the most frustrating?" | Uncovers pain points in their current workflow, which is more valuable than a subjective opinion on a design. |
The fear of getting it wrong is a powerful deterrent. But the belief that customer feedback is inherently "fake" is a myth — one that allows bias to thrive and businesses to build in the dark.
The space between polite responses and authentic customer truth is where products fail. Closing that gap requires more than a new set of questions. It means shifting from biased interrogator to empathetic detective: one who starts with a clear plan, asks about real experiences, and listens for the stories behind the answers.
Finally, here are answers to the questions I hear most often about customer research.
**Why do businesses avoid customer research?** The most common reasons are fear of hearing negative feedback, the assumption that existing expertise is sufficient, and concerns about the quality of research participants. In most cases, these are symptoms of a flawed process rather than reasons to skip research altogether.

**What is the cost of skipping customer research?** Qualtrics estimates the global cost of poor customer experiences at $4.7 trillion in lost spending annually. At an individual business level, the cost shows up in failed product launches, high churn, and messaging that doesn't convert — all of which are more expensive to fix after the fact than to prevent through research.

**How do you avoid getting fake or useless feedback in customer interviews?** Fake or superficial feedback is almost always a process problem, not a participant problem. The most common causes are leading questions, treating the interview like a sales call, and asking about hypothetical future behaviour rather than real past experience. Changing your question structure — from "would you use this?" to "tell me about the last time you dealt with this problem" — produces substantially more useful responses.

**What is the difference between validating an idea and genuine customer research?** Validation research starts with an answer and looks for confirmation. Genuine customer research starts with an open question and follows the evidence, including evidence that contradicts the starting assumption. Most teams think they're doing the latter when they're actually doing the former.

**What is The Mom Test approach to customer interviews?** The Mom Test, a framework developed by Rob Fitzpatrick, is a set of principles for running customer interviews that produce honest, useful responses. The central idea is to ask about people's actual past behaviour rather than their opinions about your idea — because people will tell you what they think you want to hear, but they can't lie about what they already do.