GUIDE
Build a Win/Loss Analysis Tool with Claude Code
Build a win/loss analysis tool in one afternoon with Claude Code. Pull from CRM notes and call transcripts, identify patterns, and generate structured reports without waiting on engineering.
Product marketing managers live in a frustrating loop. You know win/loss analysis matters. You know it would sharpen positioning, fix objection handling, and close the feedback loop between sales and product. But every time you ask engineering to build the tooling, it lands at the bottom of the backlog. Six months later, you're still exporting CSVs and highlighting cells in a spreadsheet.
I've trained over 100 people on Claude Code, and this is one of the projects that gets PMMs most excited. You can build a working win/loss analysis tool in an afternoon, no engineering team required. It pulls from your CRM notes, call transcripts, or plain CSVs, spots the patterns (pricing objections, feature gaps, competitor mentions), and spits out a structured report you can hand to leadership.
What You're Building
The tool has three pieces:
- A data ingestion layer that reads your deal data from CSV files (or JSON exports from your CRM)
- An analysis engine that uses Claude's API to categorize reasons, extract themes, and flag patterns
- A report generator that outputs a clean HTML or Markdown report with win rates by reason, competitor breakdowns, and feature gap rankings
The whole thing runs locally on your machine. No deployment, no infrastructure, no waiting for DevOps.
Step 1: Structure Your Data
First, get your deal data into a format Claude Code can work with. If you're pulling from Salesforce, HubSpot, or any CRM, export as CSV. Here's what the file should look like:
deal_name,outcome,deal_size,competitor,sales_rep,close_date,notes
"Acme Corp Expansion",won,48000,"None","Sarah Chen",2026-01-15,"Champion loved the API flexibility. Technical eval went smooth. Procurement pushed on annual discount but closed at 10% off."
"Bolt Industries",lost,72000,"Competitor X","James Park",2026-01-22,"Lost on price. They went with Competitor X at roughly 60% of our cost. Liked our product better but couldn't justify the premium to CFO."
"Cedar Health",lost,35000,"Competitor Y","Sarah Chen",2026-02-03,"Needed HIPAA compliance docs we didn't have ready. Competitor Y had them on day one. Product fit was strong otherwise."
"Delta Manufacturing",won,91000,"Competitor X","Maria Lopez",2026-02-10,"Won on integrations. They needed SAP and Netsuite connectors. We had both, Competitor X only had SAP. Long sales cycle (4 months)."
"Echo Retail",lost,28000,"None","James Park",2026-02-18,"Went dark after demo. Follow-up revealed they decided to build in-house. Budget was approved but engineering team wanted control."The notes column is where the gold lives. That's free-text from your sales reps, call summaries, or CRM activity logs. The messier and more detailed, the better. Claude is exceptional at pulling signal from unstructured text.
Step 2: Scaffold the Project with Claude Code
Open your terminal, create a project folder, and fire up Claude Code. Start with this prompt:
Create a TypeScript project that analyzes win/loss deal data from a CSV file.
The CSV has columns: deal_name, outcome (won/lost), deal_size, competitor,
sales_rep, close_date, and notes (free-text from sales reps).
Build a script called analyze.ts that:
1. Reads the CSV file passed as a command-line argument
2. Parses each deal and groups them by outcome
3. Uses the Anthropic API to analyze the notes field for each deal,
categorizing the primary reason into buckets like: pricing, features,
competition, compliance, timing, champion, technical_fit
4. Generates a summary report as an HTML file with:
- Overall win rate and average deal size won vs lost
- Top 5 loss reasons ranked by frequency
- Top 5 win reasons ranked by frequency
- Competitor breakdown (wins vs losses per competitor)
- Rep performance comparison
- Feature gaps mentioned across lost deals
Use papaparse for CSV parsing. Store the Anthropic API key in a .env file.

Claude Code will scaffold the entire project: package.json, TypeScript config, the analysis script, and a .env.example file. It handles the boring plumbing so you can focus on what the analysis should surface.
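The heart of what Claude Code generates is the categorization call: build a prompt per deal, send it to the Anthropic API, parse the reply. The exact code will vary, but the testable core looks something like this sketch. The JSON-reply convention and both function names are assumptions for illustration; the category list is the one from the prompt above.

```typescript
const CATEGORIES = [
  "pricing", "features", "competition", "compliance",
  "timing", "champion", "technical_fit",
] as const;
type Category = (typeof CATEGORIES)[number];

// Prompt sent to the Anthropic API for one deal's notes field.
function buildCategorizationPrompt(notes: string): string {
  return [
    "Categorize the primary win/loss reason in these sales notes.",
    `Respond with JSON only, e.g. {"category": "pricing"}.`,
    `Allowed categories: ${CATEGORIES.join(", ")}.`,
    `Notes: ${notes}`,
  ].join("\n");
}

// Parse the model's reply defensively: anything unexpected becomes null
// instead of crashing the run.
function parseCategory(reply: string): Category | null {
  try {
    const parsed = JSON.parse(reply);
    if ((CATEGORIES as readonly string[]).includes(parsed?.category)) {
      return parsed.category as Category;
    }
    return null;
  } catch {
    return null;
  }
}
```

Constraining the model to a fixed category list and a JSON-only reply is what makes the results aggregatable later: free-form answers would give you 47 unique reasons for 47 deals.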
If you haven't used Claude Code before, check out the setup guide to get running in about 2 minutes.
Step 3: Refine the Analysis Categories
The first pass will work, but you'll want to tune it. After running the script on your data, look at how Claude categorized each deal. You'll probably notice some categories that should be split or merged. Tell Claude Code:
The analysis is grouping "pricing" and "budget constraints" separately.
Merge those into a single "pricing" category. Also add a new category
called "internal_build" for deals where the prospect decided to build
their own solution instead of buying.
Also, for each categorized deal, include a confidence score and a
one-sentence justification so I can audit the categorization.This iterative refinement is where the best practices pay off. Build incrementally, test, adjust. Each round takes Claude a few minutes. Compare that to filing a ticket, waiting for a sprint, and hoping the implementation matches what you wanted.
Step 4: Generate the Report
The output is an HTML report you can open in any browser or email to stakeholders. Here's a plain-text view of the key sections:
=== WIN/LOSS ANALYSIS REPORT ===
Period: January - February 2026
Total Deals: 47 | Won: 28 (59.6%) | Lost: 19 (40.4%)
Avg Deal Size Won: $62,400 | Avg Deal Size Lost: $44,100
--- TOP LOSS REASONS ---
1. Pricing (7 deals, $410K pipeline) - 36.8% of losses
2. Missing Features (4 deals, $195K pipeline) - 21.1% of losses
3. Compliance Gaps (3 deals, $128K pipeline) - 15.8% of losses
4. Internal Build (3 deals, $89K pipeline) - 15.8% of losses
5. Timing (2 deals, $54K pipeline) - 10.5% of losses
--- COMPETITOR BREAKDOWN ---
Competitor X: Won 5, Lost 4 (55.6% win rate)
Competitor Y: Won 2, Lost 6 (25.0% win rate) ⚠️
No Competitor: Won 21, Lost 9 (70.0% win rate)
--- FEATURE GAPS (from lost deals) ---
1. HIPAA compliance documentation (3 mentions)
2. Netsuite integration (2 mentions)
3. SSO with Okta (2 mentions)
4. Custom reporting API (1 mention)

That competitor breakdown alone is worth the hour it took to build this. A 25% win rate against Competitor Y screams that you need a targeted battlecard. The feature gaps section feeds directly into your product roadmap conversations.
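The aggregation behind those report sections is plain bookkeeping once every deal has a category. A sketch of the two core rollups, assuming the typed deal records from earlier steps (function names are illustrative):

```typescript
interface AnalyzedDeal {
  outcome: "won" | "lost";
  deal_size: number;
  category: string;
}

// Overall win rate as a fraction (0..1).
function winRate(deals: AnalyzedDeal[]): number {
  if (deals.length === 0) return 0;
  return deals.filter((d) => d.outcome === "won").length / deals.length;
}

// Loss reasons ranked by frequency, with lost pipeline per reason --
// the numbers behind the "TOP LOSS REASONS" section.
function lossReasons(
  deals: AnalyzedDeal[],
): { category: string; count: number; pipeline: number }[] {
  const buckets = new Map<string, { count: number; pipeline: number }>();
  for (const d of deals) {
    if (d.outcome !== "lost") continue;
    const b = buckets.get(d.category) ?? { count: 0, pipeline: 0 };
    b.count += 1;
    b.pipeline += d.deal_size;
    buckets.set(d.category, b);
  }
  return [...buckets]
    .map(([category, b]) => ({ category, ...b }))
    .sort((a, b) => b.count - a.count);
}
```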
Step 5: Bolt On Call Transcript Analysis
CSV notes are good. Actual call transcripts are better. If you use Gong, Chorus, or any call recording tool, you can export transcripts and feed them in. Tell Claude Code:
Add a second input mode that accepts a folder of .txt transcript files.
Each filename should follow the pattern: dealname_date_outcome.txt
(e.g., "acme-corp_2026-01-15_won.txt").
For transcripts, extract:
- Direct quotes where the prospect states their decision reasoning
- Objections raised during the call
- Competitor mentions with context
- Sentiment shifts (positive to negative or vice versa)
Include the best direct quotes in the report under each category.

Direct quotes from prospects are the most persuasive data you can bring to a product review or sales enablement session. "The CFO said, 'I love the product but I can't justify 40% more than Competitor X'" hits different than "pricing was a factor in several deals."
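The filename convention in that prompt does real work: it carries the deal name, date, and outcome so the transcript can be joined to the CSV data without any extra lookup. A sketch of the parser Claude Code might generate for it (the function name is illustrative):

```typescript
interface TranscriptMeta {
  deal: string;
  date: string;            // ISO date from the filename
  outcome: "won" | "lost";
}

// Parse "acme-corp_2026-01-15_won.txt" into its parts;
// returns null for files that don't match the convention.
function parseTranscriptFilename(name: string): TranscriptMeta | null {
  const m = name.match(/^(.+)_(\d{4}-\d{2}-\d{2})_(won|lost)\.txt$/);
  if (!m) return null;
  return { deal: m[1], date: m[2], outcome: m[3] as "won" | "lost" };
}
```

Skipping non-matching files (rather than erroring) means stray notes or exports in the transcript folder won't break a run.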
Step 6: Automate the Recurring Run
A one-time analysis is useful. A monthly automated report is a system. Ask Claude Code to add a scheduling wrapper:
Add a mode that:
1. Pulls the latest CSV export from a specific folder (~/exports/deals/)
2. Runs the analysis
3. Saves the report to ~/reports/ with the date in the filename
4. Compares this month's results to last month's and flags changes:
- Win rate trending up or down
- New competitors appearing
- Loss reasons shifting
   - Reps improving or declining

Now you have a monthly win/loss report that generates itself. You export the CSV, run one command, and hand the report to your VP of Sales. The trend data is what makes this sticky: "Pricing losses dropped from 37% to 22% after we introduced the starter tier" is the kind of insight that gets PMMs promoted.
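The month-over-month comparison boils down to diffing two summary snapshots and flagging the deltas that cross a threshold. A sketch under stated assumptions: the metric shape, the function name, and the 5-point / 10-point thresholds are all illustrative choices, not something the article's prompt pins down.

```typescript
interface MonthlyMetrics {
  winRate: number;                          // 0..1
  lossReasonShare: Record<string, number>;  // fraction of losses per category
  competitors: string[];
}

// Flag the month-over-month changes worth calling out in the report.
function diffMonths(prev: MonthlyMetrics, curr: MonthlyMetrics): string[] {
  const flags: string[] = [];
  const delta = curr.winRate - prev.winRate;
  if (Math.abs(delta) >= 0.05) {
    flags.push(
      `Win rate ${delta > 0 ? "up" : "down"} ${(Math.abs(delta) * 100).toFixed(1)} pts`,
    );
  }
  for (const c of curr.competitors) {
    if (!prev.competitors.includes(c)) flags.push(`New competitor: ${c}`);
  }
  for (const [reason, share] of Object.entries(curr.lossReasonShare)) {
    const before = prev.lossReasonShare[reason] ?? 0;
    if (Math.abs(share - before) >= 0.1) {
      flags.push(
        `Loss reason "${reason}" shifted from ${(before * 100).toFixed(0)}% to ${(share * 100).toFixed(0)}%`,
      );
    }
  }
  return flags;
}
```

The thresholds are the part you'll want to tune with Claude Code: too low and every report cries wolf, too high and real shifts slip by.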
Why This Works Better Than Off-the-Shelf Tools
Clozd, Klue, and other win/loss platforms cost $30K to $100K per year. They're fantastic for enterprise programs. But if you're a PMM at a Series A or B company, you don't have that budget. You have a spreadsheet and good intentions.
The tool you build with Claude Code costs roughly $5 in API calls per analysis run. You own the code. You can customize the categories, the report format, and the data sources to match exactly how your sales org works. And you built it in one afternoon instead of running a 3-month procurement process.
That's the pattern I keep seeing in my ClaudeFluent training sessions. People who couldn't get engineering resources for months are building their own tools in hours. The skillset to do this is learnable. You don't need to become a developer. You need to learn how to describe what you want and iterate with Claude Code until it's right.
Get Started
If you're a PMM or PM who's tired of waiting on engineering for tools like this, ClaudeFluent teaches you how to build them yourself. We run live cohorts where you build real projects (not toy demos) and walk away with skills you'll use every week. Check the homepage for the next session.