GUIDE
Build a User Research Synthesizer with Claude Code
You have 30 user interviews and no time to synthesize them. Claude Code reads all your transcripts, extracts themes and pain points with direct quotes, and outputs a structured findings document in minutes.
You ran 30 user interviews. Maybe 50. You've got transcripts scattered across Google Docs, Notion pages, and a few audio files you still haven't listened to. Each one holds real insight. And all of it is sitting there, unread, because synthesis takes forever.
I've trained over 100 people on Claude Code, and PMs tell me the same thing: the research itself isn't the bottleneck. It's the synthesis. You do the hard work of talking to customers, then the findings rot in a folder because nobody has time to read 30 transcripts and pull out patterns.
Claude Code fixes this. You feed it your raw transcripts, tell it what you're looking for, and it spits out a structured findings document with themes, ranked pain points, and direct quotes from your participants. The whole process takes about 20 minutes instead of two weeks.
Why Claude Code, Not ChatGPT
You could paste a transcript into ChatGPT and ask for themes. But you'd need to do it one transcript at a time, manually copy results, and then somehow merge 30 separate outputs into a coherent document. That's busywork, not synthesis.
Claude Code works inside your file system. Point it at a folder of 30 transcripts and it reads them all, cross-references patterns, and outputs a single structured document. It's operating on your entire dataset at once, not one snippet at a time.
Setting Up Your Research Project
Structure matters. Claude Code works best when your inputs are organized in a predictable way. Here's the folder structure I recommend:
research-synthesis/
├── CLAUDE.md
├── transcripts/
│ ├── participant-01-sarah-pm.md
│ ├── participant-02-james-eng.md
│ ├── participant-03-lisa-ops.md
│ └── ... (all transcripts)
├── survey-responses/
│ └── nps-survey-q1-2026.csv
└── output/
└── (generated findings go here)

Name your transcript files with participant details. participant-01-sarah-pm.md tells Claude the participant's name and role without you having to explain it. Small details like this compound across 30 files.
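If you'd rather not create the folders by hand, a short Python sketch can scaffold the layout above. The folder and file names mirror the structure shown; rename them to fit your own project:

```python
from pathlib import Path

# Scaffold the recommended research-synthesis/ layout.
root = Path("research-synthesis")
for sub in ("transcripts", "survey-responses", "output"):
    (root / sub).mkdir(parents=True, exist_ok=True)

# Create an empty CLAUDE.md to fill in with your research brief.
(root / "CLAUDE.md").touch()

print(sorted(p.name for p in root.iterdir()))
# Prints: ['CLAUDE.md', 'output', 'survey-responses', 'transcripts']
```

Run it once from your workspace, then drop transcripts into transcripts/ as they come in.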
Formatting Transcripts for Best Results
Raw audio transcripts from tools like Otter.ai or Grain are messy. Claude can handle messy, but clean input gets you sharper output. Here's the format I tell my students to use:
# Participant: Sarah Chen
# Role: Senior PM at FinTech startup (Series B, 80 employees)
# Date: 2026-02-15
# Research Question: How do PMs handle feature prioritization?
---
**Interviewer:** Walk me through how you decide what to build next quarter.
**Sarah:** Honestly, it's a mess. We have a spreadsheet that the CEO
started two years ago. Everyone throws ideas in there. Then we argue
about it for a week during planning. There's no real framework.
**Interviewer:** What happens when leadership disagrees with the team's
priorities?
**Sarah:** Leadership always wins. Last quarter we scrapped three
features the team had already scoped because the CEO saw a competitor
launch something similar. The team was demoralized for weeks.

The metadata header at the top is crucial. It gives Claude participant context without burying it in the conversation. The speaker labels (Interviewer vs. participant name) let Claude distinguish your questions from their answers.
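Before running the synthesis across 30 files, it's worth checking that every transcript actually carries the metadata header. Here's a minimal sketch that flags files missing any of the four fields from the format above (the field names match that example; adjust them if your header differs):

```python
from pathlib import Path

# The four metadata fields from the transcript header format above.
REQUIRED_FIELDS = ("# Participant:", "# Role:", "# Date:", "# Research Question:")

def missing_metadata(path: Path) -> list[str]:
    """Return the header fields a transcript file is missing."""
    head = path.read_text(encoding="utf-8")[:500]  # header lives at the top
    return [field for field in REQUIRED_FIELDS if field not in head]

if Path("transcripts").is_dir():
    for transcript in sorted(Path("transcripts").glob("*.md")):
        gaps = missing_metadata(transcript)
        if gaps:
            print(f"{transcript.name}: missing {', '.join(gaps)}")
```

Thirty seconds of validation beats discovering mid-synthesis that half your files have no participant context.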
The CLAUDE.md That Drives the Synthesis
Your CLAUDE.md file tells Claude what your research is about, what you're looking for, and how you want the output structured. This is the briefing doc that turns Claude from a generic summarizer into a research analyst who knows your project.
# User Research Synthesizer
## Research Context
- Product: Acme PM Tool (project management for mid-market teams)
- Research goal: Understand how PMs prioritize features and where
the current process breaks down
- Total participants: 30
- Participant mix: 18 PMs, 7 engineering leads, 5 C-suite
## Synthesis Instructions
When analyzing transcripts, look for:
1. Recurring themes (mentioned by 3+ participants)
2. Pain points (ranked by frequency and intensity)
3. Workarounds participants have built
4. Direct quotes that capture the theme vividly
5. Contradictions between participant groups
6. Unmet needs participants expressed or implied
## Output Format
Structure the findings document as:
1. Executive summary (3-5 bullet points, biggest findings)
2. Theme analysis (each theme with supporting quotes from 3+ participants)
3. Pain point ranking (table: pain point, frequency, intensity, quotes)
4. Opportunity areas (where pain is high and no good solution exists)
5. Segment differences (how PM vs eng vs C-suite perspectives differ)
6. Raw quote bank (organized by theme for easy reference)
## Tone
- Write for a product team that needs to make decisions
- Be direct about what the data shows
- Flag where evidence is strong vs. thin
- Don't soften negative findings

The Prompt That Runs the Synthesis
With your project set up, open Claude Code in the research-synthesis/ directory. Here's the prompt:
Read all transcript files in transcripts/.
Follow the synthesis instructions in CLAUDE.md.
Analyze every transcript for recurring themes, pain points,
workarounds, and opportunity areas. Cross-reference across
all participants.
Generate a complete research findings document and save it
to output/research-findings.md.
Include direct quotes with participant attribution for every
theme and pain point. Rank pain points by how many participants
mentioned them and how strongly they expressed frustration.

Claude reads all 30 transcripts, identifies patterns across participants, and produces a structured document. The first draft lands in about 2 minutes. A manual synthesis of the same data would take a PM 30+ hours.
What the Output Looks Like
Here's a condensed example of what Claude generates:
# Research Findings: Feature Prioritization Pain Points
## Executive Summary
- 24 of 30 participants described their prioritization process as
"broken" or "ad hoc." No one reported confidence in their system.
- The #1 pain point is HiPPO overrides: leadership reversing
team-driven priorities, cited by 21 participants.
- PMs and engineers have fundamentally different views on what
"good prioritization" means. PMs want customer signal.
Engineers want technical debt reduction.
- 17 participants have built spreadsheet-based workarounds.
None are satisfied with them.
- Biggest opportunity: a framework that makes trade-offs visible
to leadership before decisions are made.
## Theme 1: HiPPO Overrides (21/30 participants)
Leadership regularly overrides team-driven priorities, eroding
trust in the planning process.
> "Leadership always wins. Last quarter we scrapped three features
> the team had already scoped because the CEO saw a competitor
> launch something similar." — Sarah Chen, Senior PM
> "I stopped putting effort into prioritization because I know
> it'll get rewritten by the exec team anyway." — Marcus Rivera,
> PM Lead
> "The CEO has great instincts, but he doesn't see the cost of
> context-switching. We lose two weeks every time priorities
> shift." — David Park, Engineering Lead
## Pain Point Ranking
| Pain Point | Frequency | Intensity | Key Quote |
|------------------------|-----------|-----------|------------------------------|
| HiPPO overrides | 21/30 | High | "I stopped putting effort..."|
| No shared framework | 19/30 | High | "Everyone has their own..." |
| Stale prioritization | 15/30 | Medium | "By week 3, the plan is..." |
| Data access gaps | 12/30 | Medium | "I can't get usage data..." |
...

That's a findings doc your leadership team can act on. Themes backed by real quotes. Pain points ranked by data, not opinion. Opportunity areas grounded in what participants actually said.
How to Iterate on the Findings
The first draft is a starting point. Here's where Claude Code's iterative workflow pulls ahead of any other tool. Because it's working inside your project with all your files, you just keep refining:
- "The executive summary is too long. Compress it to 3 bullets, each under 20 words."
- "I need a separate section comparing PM perspectives vs. engineering perspectives. Show where they agree and disagree."
- "Pull out every quote about spreadsheet workarounds. I want to build a feature around replacing them."
- "Which participants mentioned competitor tools by name? List them with context."
- "Add a 'Surprising Findings' section for things only 1-2 participants mentioned but that seem significant."
Each follow-up takes 30-60 seconds. You're interrogating your own research data in real time. Questions that would require re-reading 30 transcripts get answered instantly.
Working With Survey Data Too
Interviews aren't the only input. If you've got NPS survey responses, support tickets, or free-text feedback in a CSV, drop them into the project and tell Claude to incorporate them:
Read the NPS survey responses in survey-responses/nps-survey-q1-2026.csv.
Cross-reference the open-text responses with the themes from
the interview synthesis. Do the survey responses confirm or
contradict the interview findings? Update the findings document
with a new section called "Survey Triangulation."

Now your findings are triangulated across qualitative interviews and quantitative survey data. That's the kind of rigor that gets research taken seriously by executives.
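One pre-processing step that pays off: strip the blank and one-word survey rows before Claude sees the file, so its attention goes to responses that actually say something. A sketch, assuming hypothetical column names (`nps_score`, `comment`) that you'd rename to match whatever your survey tool exports:

```python
import csv

def load_open_text(csv_path, text_col="comment", score_col="nps_score"):
    """Pull non-empty free-text responses (with their score) from a survey CSV.

    The column names are placeholders -- rename them to match the
    headers your survey tool actually exports.
    """
    responses = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            comment = (row.get(text_col) or "").strip()
            if comment:  # drop blank rows so Claude only sees real feedback
                responses.append({"score": row.get(score_col), "comment": comment})
    return responses
```

Save the filtered rows back out as a smaller CSV and point the triangulation prompt at that instead of the raw export.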
What PMs Get Wrong
The biggest mistake: dumping raw, unformatted transcripts and expecting magic. Garbage in, garbage out. Spend 5 minutes per transcript adding the metadata header and cleaning up speaker labels. That small investment dramatically improves output quality.
The second mistake: asking for synthesis without specifying what you're looking for. "Summarize these interviews" gives you a generic summary. "Identify recurring pain points in feature prioritization, rank them by frequency, and include direct quotes" gives you something you can present to your VP. Specificity in your CLAUDE.md and prompts is everything. The best practices guide covers this principle in depth.
From Synthesis to Action
A findings doc is only valuable if it changes decisions. Once Claude generates your synthesis, you can take it further:
- Generate a stakeholder presentation: "Turn the top 5 findings into a slide outline with one key quote per finding."
- Draft opportunity briefs: "For each opportunity area, write a one-page brief with the problem, evidence, and a proposed solution direction."
- Build a quote bank: "Organize every usable quote by theme in a format I can paste into PRDs and presentations."
Your research goes from sitting in a folder to driving product decisions in an afternoon. That's the real payoff. Learn how to take those opportunity briefs and turn them into full PRDs with the AI for PMs guide.
Get Started Today
If you've got research data collecting dust, you can have a structured findings document by end of day. Set up the project folder, format a few transcripts, write your CLAUDE.md, and run the synthesis prompt. The whole setup takes 30 minutes. The synthesis itself takes 2.
New to Claude Code? Start with the step-by-step tutorial to get set up in 10 minutes.
Want to go deeper? ClaudeFluent is our premium training program where I teach PMs, marketers, and operators how to use Claude Code for real workflows: research synthesis, PRD writing, competitive analysis, and dozens of other use cases. Join us for the next cohort.