Customer interviews are one of the most direct ways a business can understand why buyers make the decisions they do. A well-run interview surfaces the reasoning, hesitation, and language that no analytics dashboard captures. A badly run one, with leading questions, vague goals, or the wrong participants, produces false confidence instead of useful insight.
This guide covers what it actually takes to run customer interviews that generate reliable findings, from preparation through to what you do with the results.
Quantitative data tells you what's happening. It shows you where people drop off, what they click, what they buy. It doesn't tell you why.
Customer interviews fill that gap. A conversation gives you access to the reasoning behind a decision — the thing the customer almost chose instead, the concern they didn't mention to sales, the way they'd actually describe their problem to a colleague. That's the kind of intelligence that shapes messaging, informs positioning, and reduces the risk of building something nobody asked for.
The catch is that a casual chat won't get you there. The quality of what you learn is almost entirely determined by how the interview is designed and run.
The most common mistake is starting too early — booking calls before anyone has agreed on what they're trying to learn.
Before scheduling, you need clear answers to three questions. What decision will this research inform? Who specifically do you need to talk to in order to answer that? And what would a useful outcome actually look like?
Without those answers, you end up with interesting conversations and no clear findings.
It also helps to do some secondary research first — reviewing existing sales call data, support tickets, or competitor reviews — to identify the gaps your interviews need to fill. That shapes your question design and stops you wasting interview time on things you already know.
The right number depends on what you're trying to learn and how homogeneous your audience is. For a defined customer segment, research suggests that meaningful patterns emerge within roughly five to eight conversations — and that beyond ten to twelve, you're more likely to be confirming what you've already found than discovering anything new.
Nielsen Norman Group describes this as saturation: the point at which themes are fleshed out enough that additional interviews won't alter them. It's a more useful benchmark than a fixed number, because it keeps the focus on what you're learning rather than how many boxes you've ticked.
The practical implication: start with five or six, analyse as you go, and keep recruiting only if new themes are still emerging.
Getting participant selection wrong is one of the fastest ways to end up with findings that don't hold up. Talking to people who are too similar, too senior, or too far removed from the decision you're researching produces a narrow picture.
For most B2B research, that means being precise about who you're targeting. Are you speaking to the person who made the decision, or the person who influenced it? Are they a current customer, a churned one, or someone who evaluated you and chose a competitor? Each group will give you a different kind of insight.
Current customers who've had enough time to experience the product — but not so long that they can't remember the decision — tend to be the most useful starting point. Churned customers are harder to recruit but often more revealing.
Aim for five to eight participants per segment you're researching. If you're covering multiple distinct segments, treat each one separately.
An interview guide is not a script. It's a structure that keeps you focused while leaving room for the conversation to go somewhere useful.
Start with the outcome you're trying to understand, and work backwards. What does someone need to tell you for this interview to be worthwhile? Those are your anchor questions. Everything else is follow-up.
A few things to build in from the start:
Open-ended questions. "Walk me through what happened when..." is more useful than "Did you find it easy?" The first opens a conversation. The second closes one.
Warm-up questions. Starting with something easy — their role, a bit of context about their situation — helps people relax before you get to the harder questions.
Space for follow-up. The most useful moments in an interview often come from an unexpected comment. "You mentioned X — can you say more about that?" is a more valuable skill than any prepared question.
Asking everyone the same one or two anchor questions creates consistency across a project. Beyond those, let the conversation go where it needs to.
The most common failure mode is leading questions — questions that suggest the answer the interviewer is hoping for.
Nielsen Norman Group notes that leading questions result in biased or false answers, with respondents tending to mirror the interviewer's framing rather than share their actual experience. This is especially likely when the interviewer is seen as an authority — which is almost always the case when a founder or senior team member is running the research.
"Don't you think the onboarding is fairly straightforward?" is a leading question. "Walk me through what happened when you first used it" is not.
The other common failure is running interviews to confirm what you already believe rather than to genuinely test it. If you're reading questions from a script, filling silences before the person has finished thinking, or steering the conversation back on track every time it goes somewhere unexpected — you're likely to come home with the findings you went looking for.
Good interview technique takes practice. Restraint is the hardest part.
Notes taken during an interview are rarely enough. The real work happens afterwards.
Immediately after each call, capture what stood out — specific phrases, moments of hesitation, anything that surprised you. Then step away. Coming back to the material with some distance makes it easier to see patterns rather than individual quotes.
Across a set of interviews, you're looking for themes that appear repeatedly. Not the most dramatic quote, but the thing you heard in five out of seven conversations. That's where the reliable insight lives.
Once you have findings, share them. The value of customer research is almost always lost when it stays with one person. Whether that's a formal debrief, a summary document, or a short presentation, the goal is to get the insight into the decisions it should be informing — messaging, positioning, product direction, sales approach.
There are situations where running interviews in-house works well. And there are situations where it doesn't.
The clearest case for outside support is when the team is too close to the product to ask neutral questions. Founders are often the worst people to run their own customer interviews — not because they don't understand the business, but because customers will tell them what they want to hear. The authority dynamic is hard to overcome from the inside.
Other signals: you've done interviews before and didn't get much from them; you're trying to understand why a messaging or positioning change isn't landing; or you need findings that will be taken seriously by investors, a board, or a leadership team that might otherwise question the objectivity of internal research.
A research partner can design and run the process, but the internal team still needs to be close enough to the findings to act on them. The research is only useful if it reaches the decisions it's meant to inform.
Planning to use customer research in your next campaign? Get in touch and we can talk through what that looks like.
Prefer to build the capability in-house? AmpliStory's research workshop is designed for small marketing teams who want to run their own interviews, from question design through to findings you can actually use.
How many customer interviews do you need for B2B research? For a defined customer segment, five to eight interviews will typically reveal the key patterns. Research on qualitative saturation — the point at which additional interviews stop producing new themes — suggests that ten to twelve is usually sufficient for a homogeneous group. Beyond that, you're more likely to confirm existing findings than uncover new ones. If you're researching multiple distinct segments, treat each one separately and aim for five to eight per segment.
What's the difference between a customer interview and a survey? Surveys tell you what a group of people think. Interviews tell you why. A survey can show you that 40% of customers found onboarding confusing — an interview can show you which specific step caused the confusion and what the customer tried instead. Both have a place in B2B research, but interviews are the more useful tool when you're trying to understand decisions, motivations, or the language customers use to describe their problems.
How do you avoid leading questions in customer interviews? Ask open-ended questions that don't contain the answer you're hoping to hear. "Walk me through what happened when you first tried to set it up" is open. "Did you find the setup process difficult?" is leading. The rule is to describe the situation and invite the person to tell you what happened — not to ask them to confirm or deny your assumption. Having someone outside the project review your question guide before you go into the field is one of the most practical ways to catch leading questions before they bias your data.
What's the best format for B2B customer interviews — video, phone, or in person? Video is usually the most practical balance: it allows you to read body language and facial cues, which add context to what someone is saying, without the logistics of in-person meetings. Phone is quicker to arrange and works well for shorter conversations. In-person is valuable when the topic is sensitive or when building the relationship matters — but it requires more time on both sides. The format matters less than the quality of the questions and the interviewer's ability to listen.
What should you do with customer interview findings? The most common failure is gathering findings and not acting on them. After analysing your interviews for themes — not just individual quotes — share them with the people who need to make decisions based on them. That might mean a short debrief, a summary document, or a direct conversation with whoever owns messaging, product direction, or sales approach. Research that stays with one person doesn't change anything.
How do you know when you've interviewed enough people? You've probably reached saturation when you're finishing the other person's sentences in your head — when you can anticipate what someone is going to say before they say it. That's the point at which new interviews are confirming rather than expanding your understanding. Start with five or six, review your findings as you go, and keep recruiting only if new themes are still emerging.