# Decision Matrix

Install via:

```
npx skills add https://github.com/lyndonkl/claude --skill decision-matrix
```
## What Is It?
A decision matrix is a structured tool for comparing multiple alternatives against weighted criteria to make transparent, defensible choices. It forces explicit trade-off analysis by scoring each option on each criterion, making subjective factors visible and comparable.
Quick example:
| Option | Cost (30%) | Speed (25%) | Quality (45%) | Weighted Score |
|---|---|---|---|---|
| Option A | 8 (2.4) | 6 (1.5) | 9 (4.05) | 7.95 ← Winner |
| Option B | 6 (1.8) | 9 (2.25) | 7 (3.15) | 7.20 |
| Option C | 9 (2.7) | 4 (1.0) | 6 (2.7) | 6.40 |
The numbers in parentheses show criterion score × weight. Option A wins despite not being fastest or cheapest because quality matters most (45% weight).
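The weighted-score arithmetic above can be reproduced in a few lines of Python (using the example's weights and scores):

```python
# Weights and 1-10 criterion scores from the example matrix above.
weights = {"cost": 0.30, "speed": 0.25, "quality": 0.45}

scores = {
    "Option A": {"cost": 8, "speed": 6, "quality": 9},
    "Option B": {"cost": 6, "speed": 9, "quality": 7},
    "Option C": {"cost": 9, "speed": 4, "quality": 6},
}

def weighted_score(option_scores, weights):
    """Sum of criterion score x weight for one option."""
    return sum(option_scores[c] * w for c, w in weights.items())

# Rank options from highest to lowest weighted score.
ranked = sorted(scores, key=lambda o: weighted_score(scores[o], weights), reverse=True)
for option in ranked:
    print(f"{option}: {weighted_score(scores[option], weights):.2f}")
```

Running this prints Option A first at 7.95, matching the table.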
## Workflow
Copy this checklist and track your progress:
Decision Matrix Progress:
- [ ] Step 1: Frame the decision and list alternatives
- [ ] Step 2: Identify and weight criteria
- [ ] Step 3: Score each alternative on each criterion
- [ ] Step 4: Calculate weighted scores and analyze results
- [ ] Step 5: Validate quality and deliver recommendation
### Step 1: Frame the decision and list alternatives
Ask user for decision context (what are we choosing and why), list of alternatives (specific named options, not generic categories), constraints or dealbreakers (must-have requirements), and stakeholders (who needs to agree). Understanding must-haves helps filter options before scoring. See Framing Questions for clarification prompts.
### Step 2: Identify and weight criteria
Collaborate with user to identify criteria (what factors matter for this decision), determine weights (which criteria matter most, as percentages summing to 100%), and validate coverage (do criteria capture all important trade-offs). If user is unsure about weighting → Use resources/template.md for weighting techniques. See Criterion Types for common patterns.
### Step 3: Score each alternative on each criterion
For each option, score on each criterion using consistent scale (typically 1-10 where 10 = best). Ask user for scores or research objective data (cost, speed metrics) where available. Document assumptions and data sources. For complex scoring → See resources/methodology.md for calibration techniques.
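Where objective data exists, one standard calibration approach (a generic technique, not necessarily the one described in resources/methodology.md) is to map raw measurements linearly onto the 1-10 scale, inverting for criteria where lower is better:

```python
def to_scale(value, worst, best, lo=1.0, hi=10.0):
    """Linearly map a raw measurement onto the 1-10 scoring scale.

    `worst` and `best` are the raw values that should earn the lowest
    and highest scores. For cost-like criteria where lower is better,
    pass worst > best (e.g. worst=500, best=100) and the mapping
    inverts automatically.
    """
    fraction = (value - worst) / (best - worst)
    return lo + fraction * (hi - lo)

# Monthly cost in dollars (illustrative numbers): $500/mo scores 1, $100/mo scores 10.
print(to_scale(300, worst=500, best=100))  # midpoint -> 5.5
```

Document the `worst`/`best` anchors you choose alongside the scores, since they are assumptions that affect the outcome.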
### Step 4: Calculate weighted scores and analyze results
Calculate weighted score for each option (sum of criterion score × weight). Rank options by total score. Identify close calls (options within 5% of each other). Check for sensitivity (would changing one weight flip the decision?). See Sensitivity Analysis for interpretation guidance.
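A minimal sketch of the sensitivity check, reusing the quick example's numbers: shift weight between two criteria in steps and watch whether the winner changes.

```python
weights = {"cost": 0.30, "speed": 0.25, "quality": 0.45}
scores = {
    "Option A": {"cost": 8, "speed": 6, "quality": 9},
    "Option B": {"cost": 6, "speed": 9, "quality": 7},
    "Option C": {"cost": 9, "speed": 4, "quality": 6},
}

def weighted_score(option_scores, weights):
    return sum(option_scores[c] * w for c, w in weights.items())

def winner(scores, weights):
    return max(scores, key=lambda o: weighted_score(scores[o], weights))

# Move weight from quality to speed and see if the decision flips.
for shift in (0.00, 0.10, 0.20):
    w = dict(weights,
             speed=weights["speed"] + shift,
             quality=weights["quality"] - shift)
    print(f"quality={w['quality']:.2f} speed={w['speed']:.2f} -> {winner(scores, w)}")
```

Here Option A holds at a 10-point shift but loses to Option B at 20 points, so the recommendation should note how stable (or fragile) the winner is under plausible weight changes.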
### Step 5: Validate quality and deliver recommendation
Self-assess using resources/evaluators/rubric_decision_matrix.json (minimum score ≥ 3.5). Present decision-matrix.md file with clear recommendation, highlight key trade-offs revealed by analysis, note sensitivity to assumptions, and suggest next steps (gather more data on close calls, validate with stakeholders).
## Framing Questions
To clarify the decision:
- What specific decision are we making? (Choose X from Y alternatives)
- What happens if we don't decide or choose wrong?
- When do we need to decide by?
- Can we choose multiple options or only one?
To identify alternatives:
- What are all the named options we're considering?
- Are there other alternatives we're ruling out immediately? Why?
- What's the "do nothing" or status quo option?
To surface must-haves:
- Are there absolute dealbreakers? (Budget cap, timeline requirement, compliance need)
- Which constraints are flexible vs rigid?
## Criterion Types
Common categories for criteria (adapt to your decision):
Financial Criteria:
- Upfront cost, ongoing cost, ROI, payback period, budget impact
- Typical weight: 20-40% (higher for cost-sensitive decisions)
Performance Criteria:
- Speed, quality, reliability, scalability, capacity, throughput
- Typical weight: 30-50% (higher for technical decisions)
Risk Criteria:
- Implementation risk, reversibility, vendor lock-in, technical debt, compliance risk
- Typical weight: 10-25% (higher for enterprise/regulated environments)
Strategic Criteria:
- Alignment with goals, future flexibility, competitive advantage, market positioning
- Typical weight: 15-30% (higher for long-term decisions)
Operational Criteria:
- Ease of use, maintenance burden, training required, integration complexity
- Typical weight: 10-20% (higher for internal tools)
Stakeholder Criteria:
- Team preference, user satisfaction, executive alignment, customer impact
- Typical weight: 5-15% (higher for change management contexts)
## Weighting Approaches
**Method 1: Direct Allocation** (simplest). Stakeholders assign percentages totaling 100%. Quick but can be arbitrary.
...