qa-expert
from daymade/claude-code-skills
Professional Claude Code skills marketplace featuring production-ready skills for enhanced development workflows.
npx skills add https://github.com/daymade/claude-code-skills --skill qa-expert
SKILL.md
QA Expert
Establish world-class QA testing processes for any software project using proven methodologies from Google Testing Standards and OWASP security best practices.
When to Use This Skill
Trigger this skill when:
- Setting up QA infrastructure for a new or existing project
- Writing standardized test cases (AAA pattern compliance)
- Executing comprehensive test plans with progress tracking
- Implementing security testing (OWASP Top 10)
- Filing bugs with proper severity classification (P0-P4)
- Generating QA reports (daily summaries, weekly progress)
- Calculating quality metrics (pass rate, coverage, gates)
- Preparing QA documentation for third-party team handoffs
- Enabling autonomous LLM-driven test execution
Quick Start
One-command initialization:
python scripts/init_qa_project.py <project-name> [output-directory]
What gets created:
- Directory structure (`tests/docs/`, `tests/e2e/`, `tests/fixtures/`)
- Tracking CSVs (`TEST-EXECUTION-TRACKING.csv`, `BUG-TRACKING-TEMPLATE.csv`)
- Documentation templates (`BASELINE-METRICS.md`, `WEEKLY-PROGRESS-REPORT.md`)
- Master QA Prompt for autonomous execution
- README with complete quickstart guide
For autonomous execution (recommended): see `references/master_qa_prompt.md`, a single copy-paste prompt for roughly 100x speedup.
Core Capabilities
1. QA Project Initialization
Initialize complete QA infrastructure with all templates:
python scripts/init_qa_project.py <project-name> [output-directory]
Creates directory structure, tracking CSVs, documentation templates, and master prompt for autonomous execution.
Use when: Starting QA from scratch or migrating to structured QA process.
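As a rough sketch of what such an init step produces (the directory names come from the docs above; the CSV header columns are assumptions, since the real script ships its own templates):

```python
from pathlib import Path
import csv

def init_qa_project(project: str, out: str = ".") -> Path:
    """Illustrative sketch of QA project initialization."""
    root = Path(out) / project
    # Directory structure named in the skill docs
    for sub in ("tests/docs", "tests/e2e", "tests/fixtures"):
        (root / sub).mkdir(parents=True, exist_ok=True)
    # Tracking CSVs; these header columns are assumptions
    with open(root / "TEST-EXECUTION-TRACKING.csv", "w", newline="") as f:
        csv.writer(f).writerow(["Test ID", "Status", "Executed At", "Notes"])
    with open(root / "BUG-TRACKING-TEMPLATE.csv", "w", newline="") as f:
        csv.writer(f).writerow(
            ["Bug ID", "Severity", "Title", "Steps to Reproduce", "Environment"]
        )
    return root
```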
2. Test Case Writing
Write standardized, reproducible test cases following AAA pattern (Arrange-Act-Assert):
- Read template: `assets/templates/TEST-CASE-TEMPLATE.md`
- Follow structure: Prerequisites (Arrange) → Test Steps (Act) → Expected Results (Assert)
- Assign priority: P0 (blocker) → P4 (low)
- Include edge cases and potential bugs
Test case format: TC-[CATEGORY]-[NUMBER] (e.g., TC-CLI-001, TC-WEB-042, TC-SEC-007)
Reference: See references/google_testing_standards.md for complete AAA pattern guidelines and coverage thresholds.
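The AAA structure maps directly onto an automated test. A minimal pytest-style sketch, where `validate_password` is a hypothetical unit under test standing in for real project code:

```python
# TC-SEC-007 (illustrative): password validator rejects weak passwords.
def validate_password(pw: str) -> bool:
    """Hypothetical unit under test: length >= 12 and at least one digit."""
    return len(pw) >= 12 and any(c.isdigit() for c in pw)

def test_tc_sec_007_rejects_short_password():
    # Arrange: prepare the input fixture (prerequisites)
    weak = "abc123"
    # Act: invoke the unit under test (test steps)
    result = validate_password(weak)
    # Assert: expected result is explicit and checkable
    assert result is False
```

Keeping the test-case ID (`TC-SEC-007`) in the test name ties the automated test back to its document in the category file.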
3. Test Execution & Tracking
Ground Truth Principle (critical):
- Test case documents (e.g., `02-CLI-TEST-CASES.md`) = authoritative source for test steps
- Tracking CSV = execution status only (do NOT trust the CSV for test specifications)
- See `references/ground_truth_principle.md` for preventing doc/CSV sync issues
Manual execution:
- Read the test case from its category document (e.g., `02-CLI-TEST-CASES.md`) ← always start here
- Execute test steps exactly as documented
- Update `TEST-EXECUTION-TRACKING.csv` immediately after EACH test (never batch)
- File a bug in `BUG-TRACKING-TEMPLATE.csv` if the test fails
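The "update immediately after each test" rule is easy to enforce with a small helper that appends one row per result; the column layout below is an assumption, not the template's actual schema:

```python
import csv
from datetime import datetime, timezone

def record_result(test_id: str, status: str, notes: str = "",
                  path: str = "TEST-EXECUTION-TRACKING.csv") -> None:
    """Append one execution record right after a test finishes (never batch).
    Column order (ID, status, timestamp, notes) is an assumed schema."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [test_id, status, datetime.now(timezone.utc).isoformat(), notes]
        )
```

Appending a row per test (rather than rewriting the file) keeps the CSV consistent even if a run is interrupted mid-suite.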
Autonomous execution (recommended):
- Copy the master prompt from `references/master_qa_prompt.md`
- Paste it into an LLM session
- The LLM auto-executes tests, auto-tracks results, auto-files bugs, and auto-generates reports
Innovation: roughly 100x faster than manual execution, zero human error in tracking, and auto-resume capability.
4. Bug Reporting
File bugs with proper severity classification:
Required fields:
- Bug ID: Sequential (BUG-001, BUG-002, ...)
- Severity: P0 (24h fix) → P4 (optional)
- Steps to Reproduce: Numbered, specific
- Environment: OS, versions, configuration
Severity classification:
- P0 (Blocker): Security vulnerability, core functionality broken, data loss
- P1 (Critical): Major feature broken with workaround
- P2 (High): Minor feature issue, edge case
- P3 (Medium): Cosmetic issue
- P4 (Low): Documentation typo
Reference: See BUG-TRACKING-TEMPLATE.csv for complete template with examples.
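Sequential bug IDs are easy to get wrong by hand. A sketch that derives the next ID from the existing tracking CSV, assuming the Bug ID lives in the first column:

```python
import csv
import re

def next_bug_id(tracking_csv: str) -> str:
    """Return the next sequential ID (BUG-001, BUG-002, ...).
    Assumes the Bug ID is the first column of each row."""
    highest = 0
    try:
        with open(tracking_csv, newline="") as f:
            for row in csv.reader(f):
                m = re.fullmatch(r"BUG-(\d+)", row[0]) if row else None
                if m:
                    highest = max(highest, int(m.group(1)))
    except FileNotFoundError:
        pass  # no bugs filed yet
    return f"BUG-{highest + 1:03d}"
```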
5. Quality Metrics Calculation
Calculate comprehensive QA metrics and quality gates status:
python scripts/calculate_metrics.py <path/to/TEST-EXECUTION-TRACKING.csv>
Metrics dashboard includes:
- Test execution progress (X/Y tests, Z% complete)
- Pass rate (passed/executed %)
- Bug analysis (unique bugs, P0/P1/P2 breakdown)
- Quality gates status (✅/❌ for each gate)
Quality gates (all must pass for release):
| Gate | Target | Blocker |
|---|---|---|
| Test Execution | 100% | Yes |
| Pass Rate | ≥80% | Yes |
| P0 Bugs | 0 | Yes |
| P1 Bugs | ≤5 | Yes |
| Code Coverage | ≥80% | Yes |
| Security | 90% OWASP | Yes |
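The gate table translates into a simple all-must-pass boolean check. The gate names below mirror the table; the shape of the input metrics dictionary is an assumption:

```python
def quality_gates(m: dict) -> dict:
    """Evaluate release gates from pre-computed metrics (all must be True).
    Input keys (executed, passed, p0, coverage, ...) are an assumed shape."""
    return {
        "test_execution": m["executed"] >= m["total"],             # 100% executed
        "pass_rate": m["passed"] / max(m["executed"], 1) >= 0.80,  # >=80%
        "p0_bugs": m["p0"] == 0,                                   # zero blockers
        "p1_bugs": m["p1"] <= 5,                                   # <=5 critical
        "code_coverage": m["coverage"] >= 0.80,                    # >=80%
        "security": m["owasp_pass_rate"] >= 0.90,                  # 90% OWASP
    }

def release_ready(m: dict) -> bool:
    return all(quality_gates(m).values())
```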
6. Progress Reporting
Generate QA reports for stakeholders:
Daily summary (end-of-day):
- Tests executed, pass rate, bugs filed
- Blockers (or None)
- Tomorrow's plan
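A daily summary with those fields can be generated straight from the day's numbers; the layout below is one illustrative format, not the skill's actual template:

```python
def daily_summary(executed: int, passed: int, bugs_filed: int,
                  blockers: list, tomorrow: str) -> str:
    """Format an end-of-day QA summary (illustrative layout)."""
    pass_rate = 100 * passed / executed if executed else 0.0
    lines = [
        f"Tests executed: {executed} (pass rate {pass_rate:.1f}%)",
        f"Bugs filed: {bugs_filed}",
        "Blockers: " + ("; ".join(blockers) if blockers else "None"),
        f"Tomorrow: {tomorrow}",
    ]
    return "\n".join(lines)
```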
Weekly report (every Friday):
- Use the template: `WEEKLY-PROGRESS-REPORT.md` (created by the init script)
- Compare against the baseline: `BASELINE-METRICS.md`
- Assess quality gates and trends
Reference: See references/llm_prompts_library.md for 30+ ready-to-use reporting prompts.
7. Security Testing
...