# Fact Checker

Verify factual claims in documents and propose corrections backed by authoritative sources.

From daymade/claude-code-skills, a marketplace of production-ready Claude Code skills for enhanced development workflows. Install with: `npx skills add https://github.com/daymade/claude-code-skills --skill fact-checker`
## When to use
Trigger when users request:
- "Fact-check this document"
- "Verify these AI model specifications"
- "Check if this information is still accurate"
- "Update outdated data in this file"
- "Validate the claims in this section"
## Workflow
Copy this checklist to track progress:

```markdown
Fact-checking Progress:
- [ ] Step 1: Identify factual claims
- [ ] Step 2: Search authoritative sources
- [ ] Step 3: Compare claims against sources
- [ ] Step 4: Generate correction report
- [ ] Step 5: Apply corrections with user approval
```
### Step 1: Identify factual claims
Scan the document for verifiable statements:
Target claim types:
- Technical specifications (context windows, pricing, features)
- Version numbers and release dates
- Statistical data and metrics
- API capabilities and limitations
- Benchmark scores and performance data
Skip subjective content:
- Opinions and recommendations
- Explanatory prose
- Tutorial instructions
- Architectural discussions
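As a sketch of this triage, a simple heuristic can separate number-bearing candidate claims from opinion prose. The regex, marker list, and function name below are illustrative assumptions, not part of the skill itself:

```python
import re

# Illustrative heuristic: lines containing numbers (optionally with units)
# are candidate factual claims; lines with opinion markers are skipped.
CLAIM_PATTERN = re.compile(r"\d[\d,.]*\s*(?:K|M|B|%|tokens|GB|ms)?", re.IGNORECASE)
OPINION_MARKERS = ("we recommend", "in my opinion", "you should", "arguably")

def extract_claims(lines):
    """Return (line_number, text) pairs worth fact-checking."""
    claims = []
    for number, line in enumerate(lines, start=1):
        lowered = line.lower()
        if any(marker in lowered for marker in OPINION_MARKERS):
            continue  # subjective content: skip
        if CLAIM_PATTERN.search(line):
            claims.append((number, line.strip()))
    return claims
```

A real pass would also catch dates and version strings without digits-adjacent units, but the shape of the filter is the same.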
### Step 2: Search authoritative sources
For each claim, search official sources:
AI models:
- Official announcement pages (anthropic.com/news, openai.com/index, blog.google)
- API documentation (platform.claude.com/docs, platform.openai.com/docs)
- Developer guides and release notes
Technical libraries:
- Official documentation sites
- GitHub repositories (releases, README)
- Package registries (npm, PyPI, crates.io)
General claims:
- Academic papers and research
- Government statistics
- Industry standards bodies
Search strategy:
- Use model names + specification (e.g., "Claude Opus 4.5 context window")
- Include current year for recent information
- Verify from multiple sources when possible
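The query strategy above can be sketched as a small helper; the function name and exact query shapes are illustrative assumptions:

```python
from datetime import date

def build_queries(model: str, spec: str) -> list[str]:
    """Turn a model name plus a specification into specific search
    queries, biasing one query toward the current year for recency."""
    year = date.today().year
    return [
        f"{model} {spec}",                        # e.g. "Claude Opus 4.5 context window"
        f"{model} {spec} {year}",                 # prefer current-year results
        f"{model} official documentation {spec}", # steer toward official sources
    ]
```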
### Step 3: Compare claims against sources
Create a comparison table:
| Claim in Document | Source Information | Status | Authoritative Source |
|---|---|---|---|
| Claude 3.5 Sonnet: 200K tokens | Claude Sonnet 4.5: 200K tokens | ❌ Outdated model name | platform.claude.com/docs |
| GPT-4o: 128K tokens | GPT-5.2: 400K tokens | ❌ Incorrect version & spec | openai.com/index/gpt-5-2 |
Status codes:
- ✅ Accurate - claim matches sources
- ❌ Incorrect - claim contradicts sources
- ⚠️ Outdated - claim was true but superseded
- ❓ Unverifiable - no authoritative source found
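The status assignment can be expressed as a tiny classifier. The `superseded` flag, which the checker would set when a newer official spec replaces an older one, is an illustrative assumption:

```python
def classify(claim_value, source_value, superseded=False):
    """Map a (claim, source) pair to one of the four status codes."""
    if source_value is None:
        return "❓ Unverifiable"   # no authoritative source found
    if claim_value == source_value:
        return "✅ Accurate"       # claim matches sources
    return "⚠️ Outdated" if superseded else "❌ Incorrect"
```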
### Step 4: Generate correction report
Present findings in a structured format:

```markdown
## Fact-Check Report

### Summary
- Total claims checked: X
- Accurate: Y
- Issues found: Z

### Issues Requiring Correction

#### Issue 1: Outdated AI Model Reference
**Location:** Lines 77-80 in docs/file.md
**Current claim:** "Claude 3.5 Sonnet: 200K tokens"
**Correction:** "Claude Sonnet 4.5: 200K tokens"
**Source:** https://platform.claude.com/docs/en/build-with-claude/context-windows
**Rationale:** Claude 3.5 Sonnet has been superseded by Claude Sonnet 4.5 (released Sept 2025).

#### Issue 2: Incorrect Context Window
**Location:** Line 79 in docs/file.md
**Current claim:** "GPT-4o: 128K tokens"
**Correction:** "GPT-5.2: 400K tokens"
**Source:** https://openai.com/index/introducing-gpt-5-2/
**Rationale:** 128K was the output limit; the context window is 400K. The model has also been updated to GPT-5.2.
```
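A report skeleton like the one above can be generated mechanically. This sketch assumes each checked claim is a dict with illustrative keys (`claim`, `correction`, `source`, `rationale`, `location`); none of these names come from the skill itself:

```python
def render_report(results):
    """Render the fact-check report; entries without a 'correction'
    key are counted as accurate."""
    issues = [r for r in results if r.get("correction")]
    lines = [
        "## Fact-Check Report",
        "### Summary",
        f"- Total claims checked: {len(results)}",
        f"- Accurate: {len(results) - len(issues)}",
        f"- Issues found: {len(issues)}",
    ]
    if issues:
        lines.append("### Issues Requiring Correction")
    for number, issue in enumerate(issues, start=1):
        lines += [
            f"#### Issue {number}",
            f"**Location:** {issue['location']}",
            f"**Current claim:** \"{issue['claim']}\"",
            f"**Correction:** \"{issue['correction']}\"",
            f"**Source:** {issue['source']}",
            f"**Rationale:** {issue['rationale']}",
        ]
    return "\n".join(lines)
```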
### Step 5: Apply corrections with user approval
Before making changes:
- Show the correction report to the user
- Wait for explicit approval: "Should I apply these corrections?"
- Only proceed after confirmation
When applying corrections:
```python
# Use the Edit tool to update the document. Example:
Edit(
    file_path="docs/03-写作规范/AI辅助写书方法论.md",
    old_string="- Claude 3.5 Sonnet: 200K tokens(约 15 万汉字)",
    new_string="- Claude Sonnet 4.5: 200K tokens(约 15 万汉字)"
)
```
After corrections:
- Verify all edits were applied successfully
- Note the correction summary (e.g., "Updated 4 claims in section 2.1")
- Remind user to commit changes
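The post-edit verification can be sketched as a check that each old string is gone and each new string is present; the helper name and the `(old, new)` pair shape are illustrative assumptions:

```python
def verify_corrections(document_text, corrections):
    """corrections: list of (old_string, new_string) pairs already
    applied. Return the pairs whose edit did not take effect."""
    failures = []
    for old_string, new_string in corrections:
        if old_string in document_text or new_string not in document_text:
            failures.append((old_string, new_string))
    return failures
```

An empty return value means every correction landed; anything else should be surfaced to the user before summarizing.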
## Search best practices
### Query construction
Good queries (specific, current):
- "Claude Opus 4.5 context window 2026"
- "GPT-5.2 official release announcement"
- "Gemini 3 Pro token limit specifications"
Poor queries (vague, generic):
- "Claude context"
- "AI models"
- "Latest version"
### Source evaluation
Prefer official sources:
- Product official pages (highest authority)
- API documentation
- Official blog announcements
- GitHub releases (for open source)
Use with caution:
- Third-party aggregators (llm-stats.com, etc.) - verify against official sources
- Blog posts and articles - cross-reference claims
- Social media - only for announcements, verify elsewhere
Avoid:
- Outdated documentation
- Unofficial wikis without citations
- Speculation and rumors
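One way to encode this preference order is a simple authority ranking; the tier names and scores below are illustrative assumptions:

```python
# Illustrative authority tiers: higher scores are preferred.
AUTHORITY = {
    "official_page": 4,
    "api_docs": 3,
    "official_blog": 2,
    "github_release": 2,
    "aggregator": 1,
    "social_media": 0,
}

def best_source(sources):
    """sources: list of (url, kind) pairs; pick the most authoritative.
    Unknown kinds rank below everything listed."""
    return max(sources, key=lambda source: AUTHORITY.get(source[1], -1))
```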
### Handling ambiguity
When sources conflict:
- Prioritize most recent official documentation
- Note the discrepancy in the report
- Present both sources to the user
- Re
...