qa-expert

from daymade/claude-code-skills

Professional Claude Code skills marketplace featuring production-ready skills for enhanced development workflows.

npx skills add https://github.com/daymade/claude-code-skills --skill qa-expert

SKILL.md

QA Expert

Establish world-class QA testing processes for any software project using proven methodologies from Google Testing Standards and OWASP security best practices.

When to Use This Skill

Trigger this skill when:

  • Setting up QA infrastructure for a new or existing project
  • Writing standardized test cases (AAA pattern compliance)
  • Executing comprehensive test plans with progress tracking
  • Implementing security testing (OWASP Top 10)
  • Filing bugs with proper severity classification (P0-P4)
  • Generating QA reports (daily summaries, weekly progress)
  • Calculating quality metrics (pass rate, coverage, gates)
  • Preparing QA documentation for third-party team handoffs
  • Enabling autonomous LLM-driven test execution

Quick Start

One-command initialization:

python scripts/init_qa_project.py <project-name> [output-directory]
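For example, `python scripts/init_qa_project.py my-app ./qa` (both arguments are placeholder names) scaffolds the assets listed below in the chosen output directory.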

What gets created:

  • Directory structure (tests/docs/, tests/e2e/, tests/fixtures/)
  • Tracking CSVs (TEST-EXECUTION-TRACKING.csv, BUG-TRACKING-TEMPLATE.csv)
  • Documentation templates (BASELINE-METRICS.md, WEEKLY-PROGRESS-REPORT.md)
  • Master QA Prompt for autonomous execution
  • README with complete quickstart guide

For autonomous execution (recommended): see references/master_qa_prompt.md, a single copy-paste prompt that drives the whole run at roughly 100x the speed of manual execution.

Core Capabilities

1. QA Project Initialization

Initialize complete QA infrastructure with all templates:

python scripts/init_qa_project.py <project-name> [output-directory]

Creates directory structure, tracking CSVs, documentation templates, and master prompt for autonomous execution.

Use when: Starting QA from scratch or migrating to structured QA process.

2. Test Case Writing

Write standardized, reproducible test cases following AAA pattern (Arrange-Act-Assert):

  1. Read template: assets/templates/TEST-CASE-TEMPLATE.md
  2. Follow structure: Prerequisites (Arrange) → Test Steps (Act) → Expected Results (Assert)
  3. Assign priority: P0 (blocker) → P4 (low)
  4. Include edge cases and potential bugs

Test case format: TC-[CATEGORY]-[NUMBER] (e.g., TC-CLI-001, TC-WEB-042, TC-SEC-007)

Reference: See references/google_testing_standards.md for complete AAA pattern guidelines and coverage thresholds.
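To make the pattern concrete, here is a minimal pytest-style sketch of an AAA test case; the `mytool` CLI, its `--version` flag, and the expected output are hypothetical stand-ins, not part of this skill:

```python
import subprocess

def test_tc_cli_001_version_flag():
    """TC-CLI-001 (P0): `--version` exits cleanly and prints a version string."""
    # Arrange (Prerequisites): hypothetical CLI entry point; substitute your own
    command = ["mytool", "--version"]

    # Act (Test Steps): run the command exactly as documented
    result = subprocess.run(command, capture_output=True, text=True)

    # Assert (Expected Results): exit code and output match the test case
    assert result.returncode == 0
    assert result.stdout.strip()  # e.g. "1.4.2"
```

Naming the function after the TC-CLI-001 ID keeps the code traceable back to the test case document.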

3. Test Execution & Tracking

Ground Truth Principle (critical):

  • Test case documents (e.g., 02-CLI-TEST-CASES.md) = authoritative source for test steps
  • Tracking CSV = execution status only (do NOT trust CSV for test specifications)
  • See references/ground_truth_principle.md for preventing doc/CSV sync issues

Manual execution:

  1. Read test case from category document (e.g., 02-CLI-TEST-CASES.md) ← always start here
  2. Execute test steps exactly as documented
  3. Update TEST-EXECUTION-TRACKING.csv immediately after EACH test (never batch; see the sketch after this list)
  4. File bug in BUG-TRACKING-TEMPLATE.csv if test fails
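A minimal sketch of step 3, assuming the tracking CSV uses `Test ID`, `Status`, and `Executed At` headers (the generated TEST-EXECUTION-TRACKING.csv is the authoritative column list):

```python
import csv
from datetime import datetime, timezone

def record_result(csv_path: str, test_id: str, status: str) -> None:
    """Update one test's row immediately after execution (never batch)."""
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        rows = list(reader)

    # "Test ID" / "Status" / "Executed At" are assumed column names;
    # check the generated tracking CSV for the real headers.
    for row in rows:
        if row["Test ID"] == test_id:
            row["Status"] = status  # e.g. "PASS" or "FAIL"
            row["Executed At"] = datetime.now(timezone.utc).isoformat()

    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
```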

Autonomous execution (recommended):

  1. Copy master prompt from references/master_qa_prompt.md
  2. Paste to LLM session
  3. LLM auto-executes, auto-tracks, auto-files bugs, auto-generates reports

Innovation: roughly 100x faster than manual execution, zero transcription errors in tracking, and automatic resume after interruption.

4. Bug Reporting

File bugs with proper severity classification:

Required fields:

  • Bug ID: Sequential (BUG-001, BUG-002, ...)
  • Severity: P0 (24h fix) → P4 (optional)
  • Steps to Reproduce: Numbered, specific
  • Environment: OS, versions, configuration

Severity classification:

  • P0 (Blocker): Security vulnerability, core functionality broken, data loss
  • P1 (Critical): Major feature broken with workaround
  • P2 (High): Minor feature issue, edge case
  • P3 (Medium): Cosmetic issue
  • P4 (Low): Documentation typo

Reference: See BUG-TRACKING-TEMPLATE.csv for complete template with examples.
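As a hedged sketch of how a bug row with a sequential ID might be appended (the column names below mirror the required fields above but are assumptions; the template is authoritative):

```python
import csv

def file_bug(csv_path: str, severity: str, steps: str, environment: str) -> str:
    """Append a bug row with the next sequential BUG-NNN identifier."""
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        existing = list(reader)

    # Assumed headers; BUG-TRACKING-TEMPLATE.csv is the authoritative column list.
    bug_id = f"BUG-{len(existing) + 1:03d}"
    with open(csv_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writerow({
            "Bug ID": bug_id,
            "Severity": severity,         # P0 (24h fix) through P4 (optional)
            "Steps to Reproduce": steps,  # numbered, specific
            "Environment": environment,   # OS, versions, configuration
        })
    return bug_id
```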

5. Quality Metrics Calculation

Calculate comprehensive QA metrics and quality gates status:

python scripts/calculate_metrics.py <path/to/TEST-EXECUTION-TRACKING.csv>

Metrics dashboard includes:

  • Test execution progress (X/Y tests, Z% complete)
  • Pass rate (passed/executed %)
  • Bug analysis (unique bugs, P0/P1/P2 breakdown)
  • Quality gates status (✅/❌ for each gate)

Quality gates (all must pass for release):

| Gate | Target | Blocker |
| --- | --- | --- |
| Test Execution | 100% | Yes |
| Pass Rate | ≥80% | Yes |
| P0 Bugs | 0 | Yes |
| P1 Bugs | ≤5 | Yes |
| Code Coverage | ≥80% | Yes |
| Security | 90% OWASP | Yes |
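calculate_metrics.py does all of this for you; as a rough illustration of the underlying logic, a pass-rate and gate check over the tracking CSV might look like this (the status vocabulary is assumed):

```python
import csv

def gate_status(csv_path: str) -> dict:
    """Compute execution progress, pass rate, and two of the quality gates."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))

    # "Status" values are assumed to be PASS / FAIL / NOT RUN; the generated
    # tracking CSV defines the real vocabulary.
    executed = [r for r in rows if r["Status"] in ("PASS", "FAIL")]
    passed = [r for r in executed if r["Status"] == "PASS"]

    execution_pct = 100 * len(executed) / len(rows) if rows else 0.0
    pass_rate = 100 * len(passed) / len(executed) if executed else 0.0

    return {
        "execution_pct": execution_pct,
        "pass_rate": pass_rate,
        "gate_execution": execution_pct == 100,  # target: 100%
        "gate_pass_rate": pass_rate >= 80,       # target: >=80%
    }
```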

6. Progress Reporting

Generate QA reports for stakeholders:

Daily summary (end-of-day):

  • Tests executed, pass rate, bugs filed
  • Blockers (or None)
  • Tomorrow's plan
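A minimal sketch of rendering those fields as plain text (the field set mirrors the bullets above; the formatting is illustrative):

```python
from datetime import date

def daily_summary(executed: int, pass_rate: float, bugs_filed: int,
                  blockers: list[str], tomorrow: str) -> str:
    """Render the end-of-day summary fields as plain text."""
    lines = [
        f"QA Daily Summary - {date.today().isoformat()}",
        f"Tests executed: {executed}",
        f"Pass rate: {pass_rate:.1f}%",
        f"Bugs filed: {bugs_filed}",
        f"Blockers: {', '.join(blockers) if blockers else 'None'}",
        f"Tomorrow: {tomorrow}",
    ]
    return "\n".join(lines)
```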

Weekly report (every Friday):

  • Use template: WEEKLY-PROGRESS-REPORT.md (created by init script)
  • Compare against baseline: BASELINE-METRICS.md
  • Assess quality gates and trends

Reference: See references/llm_prompts_library.md for 30+ ready-to-use reporting prompts.

7. Security Testing

...

