fact-checker

from daymade/claude-code-skills

Professional Claude Code skills marketplace featuring production-ready skills for enhanced development workflows.

499 stars · 53 forks · Updated Jan 25, 2026
npx skills add https://github.com/daymade/claude-code-skills --skill fact-checker

SKILL.md

Fact Checker

Verify factual claims in documents and propose corrections backed by authoritative sources.

When to use

Trigger when users request:

  • "Fact-check this document"
  • "Verify these AI model specifications"
  • "Check if this information is still accurate"
  • "Update outdated data in this file"
  • "Validate the claims in this section"

Workflow

Copy this checklist to track progress:

Fact-checking Progress:
- [ ] Step 1: Identify factual claims
- [ ] Step 2: Search authoritative sources
- [ ] Step 3: Compare claims against sources
- [ ] Step 4: Generate correction report
- [ ] Step 5: Apply corrections with user approval

Step 1: Identify factual claims

Scan the document for verifiable statements:

Target claim types:

  • Technical specifications (context windows, pricing, features)
  • Version numbers and release dates
  • Statistical data and metrics
  • API capabilities and limitations
  • Benchmark scores and performance data

Skip subjective content:

  • Opinions and recommendations
  • Explanatory prose
  • Tutorial instructions
  • Architectural discussions
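The include/skip heuristics above can be sketched as a quick first pass over the document. A minimal illustration in Python (the patterns and function name are assumptions for demonstration, not the skill's actual implementation):

```python
import re

# Illustrative patterns for claim-like lines; extend as needed.
CLAIM_PATTERNS = [
    re.compile(r"\b\d+(?:\.\d+)?\s*[KMB]?\s*(?:tokens|parameters|requests)\b", re.I),
    re.compile(r"\bv?\d+\.\d+(?:\.\d+)?\b"),  # version numbers like 4.5 or v1.2.3
    re.compile(r"\b(?:19|20)\d{2}\b"),        # years / release dates
    re.compile(r"\$\d"),                      # pricing figures
]

def find_candidate_claims(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that likely contain verifiable claims."""
    return [
        (n, line.strip())
        for n, line in enumerate(text.splitlines(), start=1)
        if any(p.search(line) for p in CLAIM_PATTERNS)
    ]
```

Subjective lines without numbers, versions, or dates simply never match, which approximates the skip list above.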

Step 2: Search authoritative sources

For each claim, search official sources:

AI models:

  • Official announcement pages (anthropic.com/news, openai.com/index, blog.google)
  • API documentation (platform.claude.com/docs, platform.openai.com/docs)
  • Developer guides and release notes

Technical libraries:

  • Official documentation sites
  • GitHub repositories (releases, README)
  • Package registries (npm, PyPI, crates.io)

General claims:

  • Academic papers and research
  • Government statistics
  • Industry standards bodies

Search strategy:

  • Use model names + specification (e.g., "Claude Opus 4.5 context window")
  • Include current year for recent information
  • Verify from multiple sources when possible

Step 3: Compare claims against sources

Create a comparison table:

| Claim in Document | Source Information | Status | Authoritative Source |
|---|---|---|---|
| Claude 3.5 Sonnet: 200K tokens | Claude Sonnet 4.5: 200K tokens | ⚠️ Outdated model name | platform.claude.com/docs |
| GPT-4o: 128K tokens | GPT-5.2: 400K tokens | ❌ Incorrect version & spec | openai.com/index/gpt-5-2 |

Status codes:

  • ✅ Accurate - claim matches sources
  • ❌ Incorrect - claim contradicts sources
  • ⚠️ Outdated - claim was true but superseded
  • ❓ Unverifiable - no authoritative source found
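One way to keep each claim, its status, and its source together while building the comparison table is a small record type. A sketch (the type and function names are hypothetical, not part of the skill):

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    ACCURATE = "✅ Accurate"
    INCORRECT = "❌ Incorrect"
    OUTDATED = "⚠️ Outdated"
    UNVERIFIABLE = "❓ Unverifiable"

@dataclass
class CheckedClaim:
    claim: str         # text as it appears in the document
    source_info: str   # what the authoritative source says
    status: Status
    source_url: str

def as_table_row(c: CheckedClaim) -> str:
    """Render one Markdown row for the comparison table."""
    return f"| {c.claim} | {c.source_info} | {c.status.value} | {c.source_url} |"
```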

Step 4: Generate correction report

Present findings in structured format:

## Fact-Check Report

### Summary
- Total claims checked: X
- Accurate: Y
- Issues found: Z

### Issues Requiring Correction

#### Issue 1: Outdated AI Model Reference
**Location:** Line 77-80 in docs/file.md
**Current claim:** "Claude 3.5 Sonnet: 200K tokens"
**Correction:** "Claude Sonnet 4.5: 200K tokens"
**Source:** https://platform.claude.com/docs/en/build-with-claude/context-windows
**Rationale:** Claude 3.5 Sonnet has been superseded by Claude Sonnet 4.5 (released Sept 2025)

#### Issue 2: Incorrect Context Window
**Location:** Line 79 in docs/file.md
**Current claim:** "GPT-4o: 128K tokens"
**Correction:** "GPT-5.2: 400K tokens"
**Source:** https://openai.com/index/introducing-gpt-5-2/
**Rationale:** 128K was output limit; context window is 400K. Model also updated to GPT-5.2
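The Summary block of the report can be derived mechanically from the checked claims. A minimal sketch, assuming each result is a (claim, status) pair with statuses mirroring Step 3:

```python
from collections import Counter

def report_summary(results: list[tuple[str, str]]) -> str:
    """Build the '### Summary' section from (claim_text, status) pairs."""
    counts = Counter(status for _, status in results)
    issues = sum(v for k, v in counts.items() if k != "accurate")
    return (
        "### Summary\n"
        f"- Total claims checked: {len(results)}\n"
        f"- Accurate: {counts.get('accurate', 0)}\n"
        f"- Issues found: {issues}"
    )
```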

Step 5: Apply corrections with user approval

Before making changes:

  1. Show the correction report to the user
  2. Ask "Should I apply these corrections?" and wait for explicit approval
  3. Only proceed after confirmation

When applying corrections:

# Use the Edit tool to update the document.
# old_string must match the file exactly; this example edits a Chinese-language
# doc ("约 15 万汉字" ≈ "about 150,000 Chinese characters").
Edit(
    file_path="docs/03-写作规范/AI辅助写书方法论.md",
    old_string="- Claude 3.5 Sonnet: 200K tokens(约 15 万汉字)",
    new_string="- Claude Sonnet 4.5: 200K tokens(约 15 万汉字)"
)

After corrections:

  1. Verify all edits were applied successfully
  2. Note the correction summary (e.g., "Updated 4 claims in section 2.1")
  3. Remind user to commit changes
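Step 1 of the post-correction checklist can be automated: re-read the file and confirm each old string is gone and each new string is present. A sketch (the helper name is hypothetical):

```python
from pathlib import Path

def verify_corrections(path: str, corrections: list[tuple[str, str]]) -> list[str]:
    """Return failure messages; an empty list means every edit applied cleanly."""
    text = Path(path).read_text(encoding="utf-8")
    failures = []
    for old, new in corrections:
        if old in text:
            failures.append(f"stale text still present: {old!r}")
        if new not in text:
            failures.append(f"replacement missing: {new!r}")
    return failures
```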

Search best practices

Query construction

Good queries (specific, current):

  • "Claude Opus 4.5 context window 2026"
  • "GPT-5.2 official release announcement"
  • "Gemini 3 Pro token limit specifications"

Poor queries (vague, generic):

  • "Claude context"
  • "AI models"
  • "Latest version"
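The good-query pattern above — subject, then attribute, then the current year — is simple enough to template. An illustrative helper (an assumption for demonstration, not part of the skill):

```python
from datetime import date

def build_query(subject: str, attribute: str, recent: bool = True) -> str:
    """Compose a specific search query, appending the current year for freshness."""
    parts = [subject, attribute]
    if recent:
        parts.append(str(date.today().year))
    return " ".join(parts)
```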

Source evaluation

Prefer official sources:

  1. Product official pages (highest authority)
  2. API documentation
  3. Official blog announcements
  4. GitHub releases (for open source)

Use with caution:

  • Third-party aggregators (llm-stats.com, etc.) - verify against official sources
  • Blog posts and articles - cross-reference claims
  • Social media - only for announcements, verify elsewhere

Avoid:

  • Outdated documentation
  • Unofficial wikis without citations
  • Speculation and rumors
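The preference order above can be expressed as a ranking function for sorting candidate sources. A sketch with illustrative domain sets (these are examples, not a complete policy):

```python
from urllib.parse import urlparse

OFFICIAL = {"anthropic.com", "openai.com", "blog.google"}  # product pages
DOCS = {"platform.claude.com", "platform.openai.com"}      # API documentation

def authority_rank(url: str) -> int:
    """Lower rank = more authoritative; usable as a sort key for sources."""
    host = urlparse(url).netloc.removeprefix("www.")
    if host in OFFICIAL:
        return 0  # official product pages and announcements
    if host in DOCS:
        return 1  # API documentation
    if host == "github.com" and "/releases" in url:
        return 2  # GitHub releases (for open source)
    return 3      # aggregators, blogs, social media: cross-reference first
```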

Handling ambiguity

When sources conflict:

  1. Prioritize most recent official documentation
  2. Note the discrepancy in the report
  3. Present both sources to the user
  4. Re

...


Repository Stats

License: MIT