agent-audit
from philoserf/claude-code-setup
Comprehensive Claude Code configuration with agents, skills, hooks, and automation
9 stars · 0 forks · MIT License · Updated Jan 23, 2026
npx skills add https://github.com/philoserf/claude-code-setup --skill agent-audit
SKILL.md
Reference Files
Advanced agent validation guidance:
- model-selection.md - Model choice decision matrix, use cases, and appropriateness criteria
- tool-restrictions.md - Tool permission patterns, security implications, and restriction fit
- focus-area-quality.md - Focus area specificity assessment, quality scoring, and criteria
- approach-methodology.md - Approach completeness, required components, and methodology patterns
- resource-organization.md - Resource directory validation and progressive disclosure
- examples.md - Good vs poor agent comparisons and full audit reports
- report-format.md - Standardized audit report template and structure
- common-issues.md - Frequent problems, fixes, and troubleshooting patterns
Agent Auditor
Validates agent configurations for model selection, tool restrictions, focus areas, and approach methodology.
Quick Start
Basic audit workflow:
- Read agent file
- Check model selection appropriateness
- Validate tool restrictions
- Assess focus area quality
- Review approach methodology
- Generate audit report
Example usage:
User: "Audit my evaluator skill"
→ Reads skills/evaluator/SKILL.md
→ Validates model (Sonnet), tools, focus areas, approach
→ Generates report with findings and recommendations
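The first step of this workflow, reading the agent file and its configuration, can be sketched in Python. This is a minimal sketch assuming the configuration lives in YAML frontmatter delimited by `---` lines; the helper name and example path are illustrative, not part of the skill.

```python
# Minimal sketch: load an agent file's YAML frontmatter for the later checks.
# Assumes the file starts with a "---"-delimited frontmatter block.
from pathlib import Path

import yaml  # PyYAML


def load_frontmatter(path: str) -> dict:
    """Return the YAML frontmatter of an agent or skill file as a dict."""
    text = Path(path).read_text(encoding="utf-8")
    if not text.startswith("---"):
        raise ValueError(f"{path}: no YAML frontmatter found")
    # Everything between the first two "---" delimiters is frontmatter.
    _, frontmatter, _body = text.split("---", 2)
    return yaml.safe_load(frontmatter) or {}


if __name__ == "__main__":
    config = load_frontmatter("skills/evaluator/SKILL.md")
    print(config.get("name"), config.get("model"), config.get("allowed_tools"))
```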
Agent Audit Checklist
Critical Issues
Must be fixed for the agent to function correctly (a code sketch of these checks follows the list):
- Valid YAML frontmatter - Proper syntax, required fields present
- name field matches filename - Name consistency
- model field present and valid - Sonnet, Haiku, or Opus only
- At least 3 focus areas - Minimum viable expertise definition
- Tool restrictions present - allowed_tools or allowed-patterns specified
- No security vulnerabilities - Tools don't expose dangerous capabilities
- Hooks valid (if present) - Valid event types, proper matcher syntax
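A minimal sketch of the critical checks, run against parsed frontmatter (for example, the dict returned by `load_frontmatter` above). The name-versus-filename matching rule and the `focus_areas` key are assumptions; in agents that list focus areas in the markdown body, the count would come from bullets under a Focus Areas heading instead.

```python
# Sketch of the critical-issue checks; returns human-readable findings.
from pathlib import Path

VALID_MODELS = {"sonnet", "haiku", "opus"}


def critical_issues(path: str, config: dict) -> list[str]:
    issues = []
    # name should line up with agents/<name>.md or skills/<name>/SKILL.md
    expected_names = {Path(path).stem, Path(path).parent.name}
    if config.get("name") not in expected_names:
        issues.append("name field does not match the file or directory name")
    if str(config.get("model", "")).lower() not in VALID_MODELS:
        issues.append("model field missing or not one of sonnet/haiku/opus")
    if len(config.get("focus_areas") or []) < 3:
        issues.append("fewer than 3 focus areas")
    if not (config.get("allowed_tools") or config.get("allowed-patterns")):
        issues.append("no tool restrictions (allowed_tools or allowed-patterns)")
    return issues
```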
Important Issues
Should be fixed for optimal agent performance (the mechanically checkable items are sketched after the list):
- Model matches complexity - Haiku for simple, Sonnet default, Opus rare
- 5-15 focus areas - Not too few (vague) or too many (unfocused)
- Focus areas specific - Concrete, not generic statements
- Tools match usage - No missing or excessive permissions
- Approach section complete - Methodology defined, output format specified
- File size reasonable - <500 lines or uses progressive disclosure
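Two of these items are mechanically checkable. A short sketch, with thresholds taken from the checklist and the helper name as an assumption:

```python
# Sketch: flag oversized files and focus-area counts outside the 5-15 range.
from pathlib import Path


def important_issues(path: str, focus_areas: list[str]) -> list[str]:
    issues = []
    line_count = len(Path(path).read_text(encoding="utf-8").splitlines())
    if line_count >= 500:
        issues.append(f"{line_count} lines; consider progressive disclosure via references/")
    if not 5 <= len(focus_areas) <= 15:
        issues.append(f"{len(focus_areas)} focus areas; aim for 5-15")
    return issues
```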
Nice-to-Have Improvements
Polish for excellent agent quality:
- Model choice justified - Clear reason for non-default model
- Focus areas have examples - Technology/framework specificity
- Approach has decision frameworks - If/then logic for complex tasks
- Tool restrictions documented - Why specific tools are allowed/restricted
- Resource organization - Uses references/ when needed, proper structure
- Context economy - Concise without sacrificing clarity
Audit Workflow
Step 1: Read Agent File
Identify the agent file to audit:
```
# Single agent
Read skills/evaluator/SKILL.md

# Find all agents
Glob agents/*.md
```
Step 2: Validate Model Selection
Check model field:
```yaml
model: sonnet  # Good - default choice
model: haiku   # Check: Is agent simple enough?
model: opus    # Check: Is complexity justified?
```
Decision criteria:
- Haiku (`haiku`): Simple read-only analysis, fast response needed, low cost priority
- Sonnet (`sonnet`): Default for most agents, balanced cost/capability
- Opus (`opus`): Complex reasoning required, highest capability needed
Common issues:
- Opus overuse: Using expensive model when Sonnet sufficient
- Haiku underperformance: Too simple for task complexity
- Missing model: No model field specified (defaults to Sonnet)
See model-selection.md for detailed decision matrix.
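The decision criteria reduce to a small rule, sketched below along with the two common-issue checks. The boolean inputs are illustrative stand-ins for whatever signals the audit extracts from the agent description.

```python
# Sketch of the model decision matrix plus the common misuse checks.
def recommend_model(simple_read_only: bool, complex_reasoning: bool) -> str:
    if complex_reasoning:
        return "opus"   # rare: highest capability justified
    if simple_read_only:
        return "haiku"  # fast, low cost, simple analysis
    return "sonnet"     # default: balanced cost/capability


def model_findings(declared: str | None, recommended: str) -> list[str]:
    if declared is None:
        return ["no model field specified (defaults to Sonnet)"]
    if declared == "opus" and recommended != "opus":
        return ["possible Opus overuse: Sonnet is likely sufficient"]
    if declared == "haiku" and recommended != "haiku":
        return ["possible Haiku underperformance: task may exceed its capability"]
    return []
```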
Step 3: Validate Tool Restrictions
Check allowed_tools or allowed-patterns:
```yaml
allowed_tools:
  - Read
  - Grep
  - Glob
  - Bash
```
Validation checklist:
- Tools specified: Has allowed_tools field (not unrestricted)
- Tools match usage: All mentioned tools are allowed
- No missing tools: All needed tools are included
- No excessive tools: No unnecessary permissions
- Security implications: No dangerous tool combinations
Common patterns:
- Read-only analyzer: [Read, Grep, Glob, Bash (read commands)]
- Code generator: [Read, Write, Edit, Grep, Glob, Bash]
- Orchestrator: [Task, Skill, Read, AskUserQuestion]
See tool-restrictions.md for security analysis.
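A sketch of the validation checklist above. The set of tools the agent actually needs would come from reading its body; here it is passed in, and the write-capable tool set is an assumption.

```python
# Sketch: compare declared tools against needed tools and flag risky grants.
WRITE_CAPABLE = {"Write", "Edit", "Bash"}  # Bash can write unless limited to read commands


def tool_findings(allowed: set[str], needed: set[str], read_only_agent: bool) -> list[str]:
    findings = []
    if missing := needed - allowed:
        findings.append(f"missing tools: {sorted(missing)}")
    if excessive := allowed - needed:
        findings.append(f"excessive permissions: {sorted(excessive)}")
    if read_only_agent and (risky := allowed & WRITE_CAPABLE):
        findings.append(f"read-only agent allows write-capable tools: {sorted(risky)}")
    return findings


# Example: a read-only analyzer that declares Write it never uses.
print(tool_findings({"Read", "Grep", "Glob", "Write"}, {"Read", "Grep", "Glob"}, True))
```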
Step 3.5: Validate Hooks Configuration (if present)
Check hooks field (optional):
```yaml
hooks:
  PreToolUse:
    - matcher: "Bash"
      hooks:
        - type: command
          command: "..."
```
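Structural hook validation can be sketched as below. The event-name set is an assumption based on Claude Code's documented hook events; treat the official list as authoritative.

```python
# Sketch: check hook event names and that command-type hooks carry a command.
KNOWN_EVENTS = {"PreToolUse", "PostToolUse", "Notification", "Stop", "SubagentStop"}


def hook_findings(hooks: dict | None) -> list[str]:
    findings = []
    for event, entries in (hooks or {}).items():
        if event not in KNOWN_EVENTS:
            findings.append(f"unknown hook event: {event}")
        for entry in entries or []:
            for hook in entry.get("hooks", []):
                if hook.get("type") == "command" and not hook.get("command"):
                    findings.append(f"{event}: command hook with no command")
    return findings
```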