GUIDE
Build a Feature Prioritization Tool with Claude Code
PMs spend 2 days every quarter clustering and scoring feature requests in spreadsheets. Claude Code does the deduplication, clustering, and weighted scoring in 30 minutes.
Every quarter, the same ritual. You pull feature requests from Intercom, Zendesk, Slack threads, and that Google Sheet someone started 18 months ago. You copy them into another spreadsheet. You squint at duplicates, try to cluster them into themes, and then spend a full day arguing about scoring criteria with your PM counterpart.
Two days later, you've got a prioritized list. It's already out of date. I've trained over 100 PMs and operators through ClaudeFluent, and the ones managing multiple squads all describe the same pain: the prioritization process itself eats the time they should spend on the actual product decisions.
Claude Code can crunch through hundreds of feature requests in 30 minutes. Not by replacing your judgment, but by handling the sorting, clustering, and scoring grunt work so you can focus on the strategic calls.
Why Spreadsheets Fall Apart at Scale
If you're managing fewer than 20 feature requests, a spreadsheet works fine. But most PMs managing two squads are sitting on 150 to 300 requests per quarter from support tickets, sales calls, customer interviews, and internal stakeholders. At that volume, three things break down:
- Duplicate detection becomes guesswork. "Add dark mode" and "night theme option" and "reduce eye strain at night" are all the same request. You're eyeballing it across 200 rows.
- Clustering is subjective. Does "faster search" belong in the Performance theme or the Search UX theme? You waste time on taxonomy debates instead of priority debates.
- Scoring is inconsistent. You score the first 50 requests carefully, then rush through the last 150 because you're running out of time. The tail end of your list is basically random.
Claude Code fixes all three by reading your raw data, applying consistent logic across every request, and producing a scored, clustered output in minutes.
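Claude does the semantic matching itself, but if you want to sanity-check its merge counts, here's a rough stand-in using Python's standard library. Note the caveat: pure string similarity catches rewordings, not true paraphrases like "dark mode" vs. "reduce eye strain" — that's exactly the gap Claude's semantic clustering closes.

```python
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.6) -> bool:
    """Fuzzy string similarity; a crude stand-in for semantic matching."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def dedupe(requests: list[str], threshold: float = 0.6) -> dict[str, int]:
    """Merge near-duplicate requests, keeping a count per unique need."""
    merged: dict[str, int] = {}
    for text in requests:
        for canonical in merged:
            if similar(text, canonical, threshold):
                merged[canonical] += 1
                break
        else:
            # No existing request was close enough; treat as a new need.
            merged[text] = 1
    return merged

counts = dedupe([
    "Add dark mode",
    "add dark mode please",
    "Salesforce sync for contacts",
])
```

Tune the threshold to taste; too low and distinct requests collapse together, too high and rewordings slip through as "new" requests.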
The Input: Your Feature Request CSV
Start by exporting your feature requests into a CSV. Every tool (Intercom, Zendesk, Productboard, a Google Sheet) can export to CSV. Here's the format that works best:
id,source,customer_name,customer_plan,mrr,request_date,request_text,votes
1,intercom,Acme Corp,Enterprise,4500,2026-01-15,"Need bulk import for contacts from CSV",12
2,zendesk,StartupXYZ,Growth,800,2026-01-18,"Dark mode / night theme for the dashboard",8
3,slack,Internal - Sales,,,2026-01-20,"Customers keep asking for Salesforce integration",15
4,intercom,BigRetail Inc,Enterprise,6200,2026-01-22,"API rate limits too low for our volume",3
5,zendesk,MidCo,Growth,1200,2026-02-01,"Reduce eye strain - dark theme please",5
6,intercom,Acme Corp,Enterprise,4500,2026-02-03,"Bulk upload contacts, ideally drag and drop CSV",2
7,zendesk,SmallBiz,Starter,200,2026-02-10,"Would love a mobile app",22
8,intercom,DataHouse,Enterprise,3800,2026-02-14,"Salesforce sync for contact records",9
9,slack,Internal - CS,,,2026-02-15,"Top churn reason: no reporting customization",0
10,zendesk,GrowthCo,Growth,1500,2026-02-20,"Custom report builder with drag and drop",7
The key columns: request_text (what the customer actually said), customer_plan (for weighting by segment), mrr (for revenue impact scoring), and votes (if your tool tracks them). If you don't have MRR data, that's fine. Claude can score without it and you can bolt on revenue weighting later.
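If you want to spot-check an export before handing it to Claude, a small parsing sketch (assuming the exact column names above; the blank-plan-means-Internal default is my assumption) looks like this:

```python
import csv
import io

# Two sample rows in the format above; normally you'd open the real file.
SAMPLE = """id,source,customer_name,customer_plan,mrr,request_date,request_text,votes
1,intercom,Acme Corp,Enterprise,4500,2026-01-15,"Need bulk import for contacts from CSV",12
3,slack,Internal - Sales,,,2026-01-20,"Customers keep asking for Salesforce integration",15
"""

def load_requests(fh) -> list[dict]:
    """Parse the export, coercing numbers and tolerating blank mrr/plan."""
    rows = []
    for row in csv.DictReader(fh):
        row["mrr"] = float(row["mrr"]) if row["mrr"] else 0.0
        row["votes"] = int(row["votes"] or 0)
        # Assumption: rows without a plan (Slack, internal) count as Internal.
        row["customer_plan"] = row["customer_plan"] or "Internal"
        rows.append(row)
    return rows

requests = load_requests(io.StringIO(SAMPLE))
```

Quoted request_text fields with commas parse fine; the main thing to verify is that blank mrr and customer_plan cells don't crash downstream scoring.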
The Project Setup
Create a simple folder structure. You'll reuse this every quarter:
feature-prioritization/
├── CLAUDE.md
├── data/
│   └── feature-requests-q1-2026.csv
├── config/
│   └── scoring-weights.md
└── output/
    └── (generated reports go here)
Your CLAUDE.md
This file gives Claude permanent context about your product so it clusters and scores intelligently:
# Feature Prioritization Tool
## Our Product
- B2B SaaS analytics platform
- 3 plans: Starter ($49/mo), Growth ($149/mo), Enterprise ($499+/mo)
- ICP: VP of Product and Head of Data at 100-500 person SaaS companies
- Current strategic priorities: reduce churn, expand Enterprise tier, improve activation
## Scoring Framework
Use a weighted RICE variant:
- Reach: how many customers requested this (or similar requests)
- Revenue Impact: total MRR of requesting customers + plan tier weighting
- Confidence: how specific and actionable the requests are
- Effort: estimate as S/M/L/XL based on request complexity
## Clustering Rules
- Group semantically similar requests (not just keyword matches)
- A cluster needs at least 2 requests to be valid
- Name each cluster with a clear, action-oriented label
- Flag clusters that align with our strategic priorities
The Scoring Weights Config
Drop this into config/scoring-weights.md. This is the file you tweak each quarter based on what your leadership team cares about:
# Scoring Weights (must sum to 100)
- Reach: 25 (number of unique customers requesting)
- Revenue Impact: 30 (total MRR of requesting customers)
- Strategic Alignment: 25 (alignment with current company priorities)
- Effort Efficiency: 20 (impact relative to estimated effort)
## Plan Tier Multipliers
- Enterprise: 3x weight
- Growth: 2x weight
- Starter: 1x weight
- Internal: 1.5x weight (internal requests from Sales/CS carry signal)
These weights are opinionated. If your company is in growth mode, crank up Reach and drop Revenue Impact. If you're fighting churn, flip those. The point is that you define it once, and Claude applies it consistently to every request.
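To make the combination concrete, here's a sketch of how weights and tier multipliers might compose. The normalization (each factor scaled to 0-1 before weighting) is my assumption; Claude works out its own math from the config, but the shape is the same: weights sum to 100, so scores land on a 0-100 scale.

```python
# Mirrors config/scoring-weights.md; normalization to 0-1 is an assumption.
WEIGHTS = {
    "reach": 25,
    "revenue_impact": 30,
    "strategic_alignment": 25,
    "effort_efficiency": 20,
}
TIER_MULTIPLIER = {"Enterprise": 3.0, "Growth": 2.0, "Starter": 1.0, "Internal": 1.5}

def cluster_score(reach: float, revenue: float, alignment: float, effort: float) -> float:
    """Each factor pre-normalized to 0-1; weights sum to 100 => score is 0-100."""
    return (WEIGHTS["reach"] * reach
            + WEIGHTS["revenue_impact"] * revenue
            + WEIGHTS["strategic_alignment"] * alignment
            + WEIGHTS["effort_efficiency"] * effort)

def weighted_mrr(cluster_requests: list[dict]) -> float:
    """Tier-weighted MRR feeding the revenue-impact factor."""
    return sum(r["mrr"] * TIER_MULTIPLIER.get(r["customer_plan"], 1.0)
               for r in cluster_requests)

score = cluster_score(0.9, 1.0, 1.0, 0.5)
```

Because the multipliers apply to MRR before normalization, a single Enterprise request can outweigh three Starter requests — which is exactly the behavior the config encodes.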
The Prompt That Runs the Analysis
Open Claude Code in the feature-prioritization/ directory and run:
Read data/feature-requests-q1-2026.csv and config/scoring-weights.md.
Do the following:
1. Deduplicate requests - find semantically similar requests and merge them,
keeping a count of how many raw requests mapped to each unique need.
2. Cluster the deduplicated requests into themes.
3. Score each cluster using the weighted RICE framework from CLAUDE.md
and the weights from scoring-weights.md.
4. Rank clusters by total weighted score, highest first.
5. For each cluster, list the top 3 specific requests with customer names and MRR.
Save the output to output/q1-2026-prioritized.md as a formatted report.
Also save output/q1-2026-prioritized.csv with columns:
rank, cluster_name, score, reach, revenue_impact, strategic_alignment,
effort_estimate, top_requests
Claude reads the CSV, applies your weights, and produces both a human-readable report and a CSV you can drop into your roadmap tool or share with leadership.
What the Output Looks Like
Here's a condensed version of what Claude generates:
# Q1 2026 Feature Prioritization Report
## Summary
- 247 raw requests analyzed
- 189 unique requests after deduplication (58 duplicates merged)
- 23 clusters identified
- 5 clusters align with current strategic priorities
---
## Rank 1: CRM Integration (Salesforce + HubSpot)
**Score: 92/100** | Reach: 34 customers | Revenue Impact: $127,400 MRR
| Strategic Alignment: High (Enterprise expansion) | Effort: L
Customers across all tiers are asking for native CRM sync. Enterprise
customers represent 78% of the revenue weight here.
Top requests:
- "Salesforce sync for contact records" — DataHouse (Enterprise, $3,800 MRR)
- "Customers keep asking for Salesforce integration" — Internal Sales
- "HubSpot two-way sync" — GrowthCo (Growth, $1,500 MRR)
---
## Rank 2: Custom Reporting / Report Builder
**Score: 84/100** | Reach: 28 customers | Revenue Impact: $89,200 MRR
| Strategic Alignment: High (reduce churn) | Effort: XL
Top churn driver per CS. 9 of the 28 requests came from customers
who churned in the last 90 days.
Top requests:
- "Top churn reason: no reporting customization" — Internal CS
- "Custom report builder with drag and drop" — GrowthCo (Growth, $1,500)
- "Need to build reports without asking our data team" — BigRetail (Enterprise, $6,200)
---
## Rank 3: Dark Mode / Theming
**Score: 71/100** | Reach: 22 customers | Revenue Impact: $31,600 MRR
| Strategic Alignment: Low | Effort: M
High request volume but low revenue concentration. Mostly Starter
and Growth tier. Quick win if eng capacity allows.
That's your quarterly prioritization, ready to walk into a planning meeting. The revenue-weighted scoring surfaces exactly what spreadsheet eyeballing misses: CRM integration isn't the most-requested feature, but it's the highest-value one because Enterprise customers are driving it.
Customizing the Weights
The real power here is in the scoring-weights.md file. Before each quarterly cycle, spend 10 minutes updating it based on leadership priorities. Some examples:
- Churn-fighting quarter: Bump Revenue Impact to 40, add a "churn signal" bonus for requests tied to churned customers. Tell Claude to flag any cluster where more than 20% of requests came from churned accounts.
- Growth quarter: Increase Reach to 35, lower Revenue Impact to 20. You want features that attract new customers, not just retain existing ones.
- Platform quarter: Add an "API/Integration" category bonus of +10 points for any cluster involving integrations or API improvements.
You change a text file and re-run the prompt. The entire re-scoring takes 2 minutes. Try doing that in a spreadsheet with 250 rows of VLOOKUP formulas.
Pulling Data from APIs Instead of CSV
If you want to skip the CSV export step, Claude Code can pull directly from your tools using MCP servers. Intercom, Zendesk, and Linear all have APIs that Claude can hit. The prompt changes slightly:
Connect to our Intercom API (key is in .env) and pull all
conversations tagged "feature-request" from the last 90 days.
Extract the customer name, plan, MRR, and the request text.
Combine with data/manual-requests.csv (requests from Slack and
internal sources that aren't in Intercom).
Then run the full prioritization analysis using the scoring
weights in config/scoring-weights.md.
Now your quarterly process is: update scoring weights, run one prompt, review the output. That's it.
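If you'd rather script the pull yourself instead of letting Claude handle it, a rough standard-library sketch looks like this. The endpoint path and the payload field names are my assumptions, not Intercom's exact schema — verify both against Intercom's current API reference before relying on them.

```python
import json
import urllib.request

INTERCOM_BASE = "https://api.intercom.io"

def fetch_conversations(token: str) -> dict:
    """One-page pull sketch; endpoint path and pagination are assumptions -
    check Intercom's API docs for the real query and tag-filter syntax."""
    req = urllib.request.Request(
        f"{INTERCOM_BASE}/conversations",
        headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def to_rows(payload: dict) -> list[dict]:
    """Flatten an Intercom-style payload into the CSV columns used above.
    The field paths here are illustrative, not Intercom's exact schema."""
    rows = []
    for conv in payload.get("conversations", []):
        rows.append({
            "source": "intercom",
            "customer_name": conv.get("contact", {}).get("name", ""),
            "request_text": conv.get("source", {}).get("body", ""),
        })
    return rows

# Demonstrated on a hand-built payload so no network call is needed here.
rows = to_rows({"conversations": [
    {"contact": {"name": "Acme Corp"}, "source": {"body": "Need bulk import"}}
]})
```

Keep the API key in .env as the prompt suggests; never hardcode it in the script.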
Common Mistakes
Skipping the deduplication step. If you don't have Claude merge duplicates first, your reach counts are inflated and your rankings are wrong. Always deduplicate before scoring.
Not including MRR data. Without revenue weighting, you'll optimize for the loudest customers instead of the most valuable ones. Even approximate MRR (by plan tier) is better than nothing.
Treating the output as final. Claude's scoring is a starting point for the conversation, not the end of it. You still need to apply product judgment: "CRM integration scores highest, but we don't have the engineering capacity for an L-sized project this quarter. What's the highest-scoring S or M?" Ask Claude that follow-up and it'll re-filter for you.
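The missing-MRR mistake above is cheap to patch. A minimal fallback — using the list prices from CLAUDE.md as tier estimates; the helper name is mine — looks like:

```python
# Fallback MRR by plan tier, using the list prices from CLAUDE.md.
PLAN_MRR = {"Starter": 49.0, "Growth": 149.0, "Enterprise": 499.0}

def mrr_or_estimate(row: dict) -> float:
    """Use real MRR when present, else the plan's list price, else 0."""
    if row.get("mrr"):
        return float(row["mrr"])
    return PLAN_MRR.get(row.get("customer_plan", ""), 0.0)

estimate = mrr_or_estimate({"customer_plan": "Growth", "mrr": ""})
```

It understates Enterprise accounts on custom pricing, but it keeps the revenue weighting directionally right, which is the point.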
The Workflow That Sticks
- Export feature requests from your tools into CSV (or wire up the API).
- Update scoring weights based on this quarter's priorities (10 minutes).
- Run Claude Code with the prioritization prompt (5 minutes).
- Review and iterate. Ask follow-ups like "Show me only Enterprise-weighted clusters" or "What if we remove effort from the scoring?" (15 minutes).
- Share the report with leadership and engineering. Walk into planning with data, not opinions.
Total time: 30 minutes. You were spending two days on this. That's 15 hours back every quarter to spend on the work that actually moves the product forward.
Get Started
If you haven't set up Claude Code yet, the step-by-step tutorial will get you running in 10 minutes. If you want to learn how to set up the skills files that make these workflows repeatable, that guide covers it in detail.
Want hands-on training with workflows like this? ClaudeFluent is the premium program where I teach PMs, PMMs, and operators how to build real tools with Claude Code. Feature prioritization, competitive intel, sprint automation, and more. Join the next cohort and stop burning weekends on spreadsheet busywork.