I’ve been writing about the Claude Code skills I use in real life in my GitHub repo. Recently, as I’ve continued developing product ideas, I’ve been running into a challenge that’s probably familiar to most solo builders.
The Problem
Building alone means wearing every hat — including product manager. And I keep hitting the same walls:
Market research is exhausting. Who are the big players? What about indirect competitors? How long have they been around? What’s their pricing across different markets?
Feature gaps are hard to identify objectively. Every competitor does things slightly differently. What actually matters vs. what’s noise?
Positioning decisions feel like guesswork. Where should my product stand — especially against indirect competitors who solve the same problem differently?
Price elasticity is a black box. What are users actually willing to pay? How do competitors price across regions?
I’m not an industry expert. I don’t have a PM team. But I still need to make informed product decisions.
What I wanted: something that could act as my expert product advisor — research competitors autonomously, identify gaps objectively, and help me prioritize what to build based on evidence, not gut feeling.
Exploring Solutions
I considered different approaches:
Manual research — Spreadsheets, bookmarks, notes everywhere. Gets stale fast.
Product analytics tools — Great for existing users, useless for competitive intel.
Hiring a consultant — Expensive for a side project.
Claude Code Skill — Teach Claude the PM workflow once, reuse everywhere.
I went with the skill approach. It fits how I already work — in the terminal, with Claude, on my codebase.
What I Built: Product Management Skill
The skill transforms Claude into an AI-native PM that:
Researches competitors autonomously (pricing, features, positioning, reviews)
Analyzes your codebase to understand what you’ve already built
Identifies gaps with objective scoring
Creates GitHub Issues directly for prioritized features
Generates full PRDs when you need detailed specs
The core philosophy:
WINNING = Pain × Timing × Execution Capability

Filter aggressively from 50 potential features down to 3–5 high-conviction priorities.
Commands Reference
| Command | What It Does | When to Use |
|---|---|---|
| /pm:analyze | Scans codebase + interviews you for a product inventory | First command on a new project |
| /pm:landscape | Research the competitor landscape or deep-dive a specific competitor | Before gap analysis |
| /pm:gaps | Identify gaps with WINNING filter scoring | To prioritize what to build |
| /pm:file | Batch create GitHub Issues for approved gaps | Quick filing of multiple gaps |
| /pm:prd | Generate a full PRD + create a GitHub Issue | Detailed spec for priority features |
| | Sync local cache with GitHub Issues | Prevent duplicate issues |
/pm:analyze — Know Your Product First
Before researching competitors, you need to understand what you’ve already built.
/pm:analyze

Claude will:
Scan your codebase (routes, components, APIs, models)
Interview you about positioning, target users, differentiators
Generate a product inventory with technical moats and debt flags
Save everything to .pm/product/
/pm:landscape — Research Competitors
/pm:landscape # Market overview, identify top 5 competitors
/pm:landscape CompetitorX # Deep-dive on a specific competitor

For each competitor, Claude researches:
Pricing tiers and regional differences
Feature categorization (Tablestakes vs. Differentiators vs. Emerging)
User sentiment from reviews
Company velocity from changelogs and job postings
/pm:gaps — The WINNING Filter
This is where it gets interesting.
/pm:gaps

Claude identifies all the gaps between your product and competitors, then scores each with the WINNING filter:
| Criterion | Who Scores | Source |
|---|---|---|
| Pain Intensity (1–10) | Claude | Review sentiment, support data |
| Market Timing (1–10) | Claude | Search trends, competitor velocity |
| Execution Capability (1–10) | You | Architecture fit, team skills |
| Strategic Fit (1–10) | You | Positioning alignment |
| Revenue Potential (1–10) | You | Conversion/retention impact |
| Competitive Moat (1–10) | You | Defensibility once built |
Total: X/60 → Decision:
40+ → FILE (high conviction, build it)
25–39 → WAIT (monitor, not urgent)
<25 → SKIP (not worth it)
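The scoring arithmetic above is simple enough to sketch in a few lines. Here's a minimal, hypothetical Python version — the function name and score keys are my own illustration for this post, not the skill's actual internals:

```python
# Hypothetical sketch of the WINNING filter scoring described above.
# The six criteria and the 40/25 thresholds come from the tables in this
# post; the code itself is illustrative, not the skill's implementation.

CRITERIA = (
    "pain_intensity",        # scored by Claude
    "market_timing",         # scored by Claude
    "execution_capability",  # scored by you
    "strategic_fit",         # scored by you
    "revenue_potential",     # scored by you
    "competitive_moat",      # scored by you
)

def winning_decision(scores):
    """Sum six 1-10 criteria (max 60) and map the total to a decision."""
    total = sum(scores[c] for c in CRITERIA)
    if total >= 40:
        return total, "FILE"   # high conviction, build it
    if total >= 25:
        return total, "WAIT"   # monitor, not urgent
    return total, "SKIP"       # not worth it

# e.g. a gap scoring 9+8+8+8+7+7 = 47/60 comes back as FILE
```

So a gap totaling 47/60 gets filed, while one at 38/60 goes on the watch list.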
Output looks like:
| Gap | WINNING | Status | Match |
|----------------|---------|----------|-----------|
| SSO login | 47/60 | EXISTING | #12 (95%) |
| Multi-language | 38/60 | NEW | - |
| PDF export | 52/60 | FILE | - |

/pm:file vs /pm:prd — When to Use Which
| Aspect | /pm:file | /pm:prd |
|---|---|---|
| Input | Gap IDs from /pm:gaps | Any feature name |
| Output | Lightweight GitHub Issue | Full PRD + GitHub Issue |
| Detail | Brief (score + description) | Comprehensive (10+ sections) |
| Use for | Batch filing many gaps | Deep spec for a priority feature |
Typical workflow:
/pm:gaps # Identify 15 gaps
/pm:file all # Quick: batch file all high-score gaps
/pm:prd <top priority> # Deep: full PRD for #1 priority

The Magic: Integration with spec-kit
I’ve been an active user of spec-kit for a while now. In my LLM dev workflow, I’m constantly thinking about how to be more efficient without overcomplicating the engineering process. Simplicity is hard to get right — especially when everyone’s chasing the latest LLM tooling.

That’s what makes this pairing work. The PM skill handles WHAT to build and WHY. For HOW to build it, it hands off to spec-kit — GitHub’s open-source spec-driven development tool. No extra abstractions, just a clean handoff.
PM Skill → GitHub Issue → spec-kit
/pm:prd → Creates issue → /speckit.specify

When you run /pm:prd, it:
Generates a comprehensive PRD
Creates a GitHub Issue with all the details
Automatically suggests next steps with spec-kit
The handoff is seamless. No extra commands needed.
Real Example: Adding a New Feature Module
I was working on a product and wanted to add a new feature module that could reuse existing infrastructure. Here’s how the workflow played out:
Step 1: /pm:prd <New Feature>
Claude analyzed my existing codebase, identified reusable components (approval workflows, audit trails, role-based access), and generated a full PRD — complete with user stories by priority, functional requirements, and quality validation.

PRD output showing user stories by priority, functional requirements, and quality validation
Step 2: PRD Complete with GitHub Issue
The PRD was saved locally AND created as a GitHub Issue automatically. It included:
Product pillars with unique differentiators identified
MVP scope with clear P0/P1/P2 priorities
Architecture reuse analysis (what I could leverage vs. build new)

PRD complete showing GitHub issue link, priority, and architecture reuse percentages
Step 3: Automatic spec-kit Handoff
Right after the PRD was created, Claude suggested next steps:

From “I want to add this feature” to a structured GitHub Issue with technical spec ready to generate — in one session.
Data Storage
Everything stays in your repo under .pm/:
.pm/
├── config.md # Positioning, scoring weights
├── product/ # Product inventory, architecture
├── competitors/ # Competitor profiles
├── gaps/ # Gap analyses with scores
├── requests/ # Synced GitHub Issues (dedup)
├── prds/ # Generated PRDs
└── cache/last-updated.json # Staleness tracking

Version-controllable. Searchable. No external dependencies.
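As a rough illustration, a staleness cache like last-updated.json can be as simple as a map from research areas to refresh timestamps — this exact shape is an assumption for the sake of the example, not the skill's documented format:

```json
{
  "product": "2025-01-10T08:30:00Z",
  "competitors/CompetitorX": "2024-12-02T14:00:00Z",
  "gaps": "2025-01-12T09:15:00Z"
}
```

Anything older than a configured threshold triggers a prompt to re-run the relevant research command.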
Installation
# Add the marketplace
/plugin marketplace add ooiyeefei/ccc
# Install the plugin
/plugin install product-management@ccc

How to Use
Start with product analysis, then research, then gaps:
/pm:analyze # Understand your product first
/pm:landscape # Research the competitive landscape
/pm:landscape Competitor # Deep-dive specific competitors
/pm:gaps # Identify and score gaps
/pm:file all # Create GitHub Issues for high-score gaps
/pm:prd <feature> # Generate full PRD for priority features

What I Learned
Objective scoring beats gut feeling. The WINNING filter forces you to think about timing, execution capability, and moat — not just “this would be cool.”
Competitor research compounds. Once you have profiles saved, every future gap analysis gets richer context. The skill checks for staleness and prompts you to refresh when data is old.
The GitHub Issue IS the handoff. No separate documentation step. The PRD lives in the issue, ready for spec-kit or any other implementation workflow.
I’m not a PM, but I have a PM now. For someone who lacks industry expertise, having Claude research competitors, identify trends, and surface patterns I’d miss is genuinely useful. It doesn’t replace domain knowledge, but it accelerates getting there.
Try It Out
Got a product idea that needs competitive analysis? A side project where you’re not sure what to prioritize?
# Install
/plugin marketplace add ooiyeefei/ccc
/plugin install product-management@ccc
# Start with
/pm:analyze

Let me know what works and what breaks. I’m particularly curious how the WINNING filter holds up for different product domains.