Qualitative research has a sample size problem.

Common research guidance suggests 15-30 participants per segment for reliable patterns. But a solo researcher can conduct six to eight interviews per week before their calendar collapses. A product manager with meetings, standups, and actual product work? Maybe two or three.

This is fine when you need directional signal from a handful of people — a quick pulse on a feature concept, an initial read on why users are churning. It's not fine when you need to understand patterns across a larger group: different segments, different use cases, different levels of experience with your product. The sample sizes that produce reliable qualitative insight are simply out of reach for most teams using traditional methods.

Async interviews solve this by removing the constraint that both people have to be in the same conversation at the same time.

What Async Interviews Are

An async interview is a conversation that doesn't require both participants to be present simultaneously. You send a link. The participant responds whenever they have time — morning, evening, between meetings. An AI interviewer guides the conversation, asks follow-up questions, and adapts based on what the participant says. The result feels like a conversation, not a form.

The reason this matters for scale: when interviews don't require scheduled time slots, you can have twenty or forty running in parallel. You write one brief, share one link, and conversations happen whenever participants are ready. There's no coordination cost that grows with participant count.

For a full walkthrough of how this works step by step — with a real conversation example — see How to Run Customer Interviews Without Scheduling a Single Call. For the broader context of where AI interviews fit alongside other research methods, see the complete guide.

Why Async Sometimes Produces Better Data

This might seem counterintuitive. How can removing the human interviewer improve data quality?

It doesn't always. But in three specific situations, it does:

When honesty matters more than rapport. Participants self-censor in live interviews. They soften criticism, avoid negativity, and tell interviewers what they think they want to hear. This is especially pronounced in exit interviews, employee feedback, and churn conversations — any situation where the participant worries about professional consequences. An AI has no social agenda, no facial expressions to manage, and no relationship to protect. Early evidence suggests participants speak more directly, and you can verify this by comparing the specificity and candor of responses across methods.

When you need consistency across many conversations. A human interviewer asks slightly different questions each time. Their energy shifts throughout the day. They unconsciously probe harder on topics they find interesting and skim past ones they don't. This is fine for a handful of interviews, but it introduces noise when you're comparing responses across twenty or forty participants. In research methods terms, this is an inter-interviewer reliability problem. An AI applies the same approach every time, which makes cross-interview comparison more reliable.

When participants need time to think. Live interviews create what you might call a silence tax — participants feel social pressure to respond quickly rather than think carefully. Some people — especially introverts, non-native speakers, or people discussing complex topics — give better, more considered answers when they can pause, collect their thoughts, and respond without someone watching them. An async format gives them that space naturally. Offering both voice and text captures different modes of expression too, which adds richness to the dataset.

The Math: What Changes When You Go from 8 to 40

The most interesting thing about async interviews isn't any single conversation — it's what happens when you have enough of them to see patterns.

Eight interviews give you anecdotes. They're useful for generating hypotheses, spotting obvious pain points, and hearing your customers' language. But eight interviews can also mislead you. One articulate outlier can dominate your conclusions. You might hear three different reasons for a problem and have no idea which one is most common.

Twenty interviews start to reveal patterns. When the same frustration appears in twelve separate conversations — unprompted, in different words — that's signal you can act on with confidence. Twenty is usually enough to see whether something is a widespread issue or an edge case.

Imagine you're researching why trial users don't convert. A handful of interviews might surface three different reasons: confusing onboarding, unclear pricing, and a missing key integration. Twenty interviews start to show that confusing onboarding accounts for nearly half the churn, and that it's concentrated in users who signed up through a specific marketing channel. That's a pattern you can act on. A handful of interviews would have told you "some people find onboarding confusing." Twenty interviews tell you exactly where to focus.

Forty or more interviews let you segment. You can split by user type, by company size, by how they found you — and still have enough data points per segment to draw conclusions. At this scale, you're approaching the reliability of quantitative research while retaining the depth of qualitative. You can identify not just what the patterns are but why they exist, because you have dozens of people explaining their reasoning in their own words.
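
If it helps to see that counting spelled out, here's a minimal sketch in Python. The tagged reasons, channel names, and records are invented for illustration; the point is that the same simple tally answers "how common is this?" at twenty interviews and "how common is this per segment?" at forty.

```python
from collections import Counter, defaultdict

# Hypothetical output of tagging churn interviews: each record notes the main
# reason the participant gave and the channel they signed up through.
interviews = [
    {"reason": "confusing onboarding", "channel": "paid search"},
    {"reason": "confusing onboarding", "channel": "paid search"},
    {"reason": "unclear pricing", "channel": "organic"},
    # ...in a real study, 17+ more records like these
]

# Overall frequency: with twenty interviews, a reason that shows up nine or
# ten times is a pattern, not an anecdote.
reason_counts = Counter(record["reason"] for record in interviews)
print(reason_counts.most_common())

# Per-segment frequency: the same tally split by signup channel. This is the
# view that only becomes reliable once you have enough interviews per segment.
by_channel = defaultdict(Counter)
for record in interviews:
    by_channel[record["channel"]][record["reason"]] += 1

for channel, counts in by_channel.items():
    print(channel, counts.most_common())
```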

The traditional research tradeoff — depth or scale, pick one — exists because human interviewer time is finite. Async interviews soften that tradeoff by removing the scheduling constraint.

When the Bottleneck Shifts

More interviews mean more to analyze. There's no point pretending otherwise.

With eight interviews, you can read every transcript end-to-end and hold the full picture in your head. With forty, that approach breaks down. The work changes.

AI-generated summaries and key points give you a starting index. Instead of reading forty full transcripts sequentially, you scan summaries to spot which conversations are most relevant, identify recurring themes, and then dive into specific transcripts for depth and exact quotes. The workflow shifts from "listen to recordings and take notes" to "read summaries, tag patterns, and pull supporting evidence from transcripts."

This is genuinely a different kind of work. It's synthesis and pattern-matching — which is the researcher's actual skill — rather than transcription and note-taking. The bottleneck moves from conducting interviews to analyzing them. And analysis scales better than scheduling, because you can read a summary in two minutes and decide whether a full transcript warrants your time.
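
As a rough sketch of that two-pass workflow, assume each interview arrives as a record with an id, a summary, and a transcript. The field names, sample records, and keyword-based tagging below are all placeholders; in practice the tagging is your judgment, not a string match.

```python
# Pass 1: read only the summaries and tag which themes each interview touches.
# Pass 2: open full transcripts only for the interviews tagged with the theme
# you're digging into. All data below is made up for illustration.
interviews = [
    {"id": "p01",
     "summary": "Got lost during onboarding and never finished importing data.",
     "transcript": "(full transcript text)"},
    {"id": "p02",
     "summary": "Liked the product but found the pricing page unclear.",
     "transcript": "(full transcript text)"},
]

themes = {
    "onboarding": ["onboarding", "setup", "import"],
    "pricing": ["pricing", "price", "plan"],
}

# Pass 1: a two-minute scan per summary, recorded as tags.
tagged = {}
for record in interviews:
    summary = record["summary"].lower()
    tagged[record["id"]] = [
        theme for theme, keywords in themes.items()
        if any(word in summary for word in keywords)
    ]

# Pass 2: pull supporting evidence only from the transcripts that matter.
onboarding_ids = {pid for pid, tags in tagged.items() if "onboarding" in tags}
for record in interviews:
    if record["id"] in onboarding_ids:
        print(record["id"], record["transcript"][:120])
```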

The quality of your briefs matters more at scale, too. A well-designed brief produces transcripts that are easier to compare across participants, because the AI explores the same topics in the same structure. A vague brief produces forty conversations that wander in different directions — more data, but not more insight. For principles on writing effective briefs, see how to design an effective AI interview.

Getting Started

The fastest way to test whether scale changes your insights is to run a batch:

  1. Pick a specific question you've been wanting to answer — something like "why did people who signed up last month stop using the product?" or "what do our best customers wish we'd build next?"

  2. Write a brief with 3-4 focused topics around that question. Keep it tight enough that each conversation takes 8-12 minutes. (A sketch of what this could look like follows the list.)

  3. Send the link to 15-20 participants. Use whatever channel already works for reaching them — email, Slack, in-app message.

  4. As transcripts come in, scan the summaries and look for patterns. What comes up more than once? What surprised you? Where do different segments diverge?
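
To make step 2 concrete, here's one hypothetical shape for a tight brief, written out in Python only so it's easy to scan. The question, topics, and timing are illustrative, not a template.

```python
# An illustrative pilot brief: one research question, a handful of focused
# topics, and a target length. Everything here is an example.
pilot_brief = {
    "research_question": "Why did last month's trial signups stop using the product?",
    "target_length_minutes": "8-12",
    "topics": [
        "What they were hoping to accomplish when they signed up",
        "Where they got stuck or lost momentum in the first week",
        "What they tried instead, including doing nothing",
        "What would need to change for them to give it another try",
    ],
}

for number, topic in enumerate(pilot_brief["topics"], start=1):
    print(f"{number}. {topic}")
```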

If the patterns from those fifteen conversations tell you something you couldn't have learned from three or four scheduled calls, the approach works for your context. Scale up from there.

You can set up your first async interview with Guided Surveys — the free tier gives you enough to run your pilot batch and see the results for yourself.