build-quality-gates

from zpankz/mcp-skillset

MCP Skillset - Claude Code skills, references, and knowledge base

1 star · 0 forks · Updated Jan 15, 2026
npx skills add https://github.com/zpankz/mcp-skillset --skill build-quality-gates

SKILL.md

Build Quality Gates Implementation

Overview & Scope

This skill provides a systematic methodology for implementing comprehensive build quality gates using the BAIME (Bootstrapped AI Methodology Engineering) framework. It transforms chaotic build processes into predictable, high-quality delivery systems through quantitative, evidence-based optimization.

What You'll Achieve

  • 98% Error Coverage: Prevent nearly all common build and commit errors
  • 17.4s Detection: Find issues locally before CI (vs 8+ minutes in CI)
  • 87.5% CI Failure Reduction: From 40% failure rate to 5%
  • Standardized Workflows: Consistent quality checks across all team members
  • Measurable Improvement: Quantitative metrics track your progress

Scope

In Scope:

  • Pre-commit quality gates
  • CI/CD pipeline integration
  • Multi-language build systems (Go, Python, JavaScript, etc.)
  • Automated error detection and prevention
  • Performance optimization and monitoring

Out of Scope:

  • Application-level testing strategies
  • Deployment automation
  • Infrastructure monitoring
  • Security scanning (can be added as extensions)

Prerequisites & Dependencies

System Requirements

  • Build System: Any project with Make, CMake, npm, or similar build tool
  • CI/CD: GitHub Actions, GitLab CI, Jenkins, or similar
  • Version Control: Git (for commit hooks and integration)
  • Shell Access: Bash or similar shell environment

Optional Tools

  • Language-Specific Linters: golangci-lint, pylint, eslint, etc.
  • Static Analysis Tools: shellcheck, gosec, sonarqube, etc.
  • Dependency Management: go mod, npm, pip, etc.

Team Requirements

  • Development Workflow: Standard Git-based development process
  • Quality Standards: Willingness to enforce quality standards
  • Continuous Improvement: Commitment to iterative improvement

Implementation Phases

This skill follows the validated BAIME 3-iteration approach: P0 (Critical) → P1 (Enhanced) → P2 (Optimization).

Phase 1: Baseline Analysis (Iteration 0)

Duration: 30-60 minutes
Objective: Quantify your current build quality problems

Step 1: Collect Historical Error Data

# Analyze recent CI failures (last 20-50 runs)
# For GitHub Actions:
gh run list --limit 50 --json status,conclusion,databaseId,displayTitle,workflowName

# For GitLab CI:
# Check pipeline history in GitLab UI

# For Jenkins:
# Check build history in Jenkins UI
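For GitHub Actions, the run list above can be reduced to a failure rate directly. A minimal sketch, assuming `jq` and `bc` are available and that the `gh` export has been saved to `runs.json` (the sample data below is illustrative, standing in for a real export):

```shell
# Export run conclusions once (requires an authenticated gh CLI):
#   gh run list --limit 50 --json conclusion > runs.json
# Illustrative sample standing in for real export data:
cat > runs.json <<'EOF'
[{"conclusion":"success"},{"conclusion":"failure"},
 {"conclusion":"success"},{"conclusion":"failure"}]
EOF

# Count total and failed runs, then compute the failure rate.
total=$(jq 'length' runs.json)
failed=$(jq '[.[] | select(.conclusion == "failure")] | length' runs.json)
echo "failure_rate=$(echo "scale=2; $failed / $total" | bc)"
# → failure_rate=.50 (for the sample data)
```

The resulting rate feeds directly into the baseline V_instance calculation in Step 3.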

Step 2: Categorize Error Types

Create a spreadsheet with these categories:

  • Temporary Files: Debug scripts, test files left in repo
  • Missing Dependencies: go.mod/package.json inconsistencies
  • Import/Module Issues: Unused imports, incorrect paths
  • Test Infrastructure: Missing fixtures, broken test setup
  • Code Quality: Linting failures, formatting issues
  • Build Configuration: Makefile, Dockerfile issues
  • Environment: Version mismatches, missing tools

Step 3: Calculate Baseline Metrics

# Calculate your baseline V_instance
# failed_builds and total_builds come from your Step 1 error analysis
failed_builds=20   # example values; replace with your own counts
total_builds=50
baseline_ci_failure_rate=$(echo "scale=2; $failed_builds / $total_builds" | bc)
baseline_avg_iterations="3.5"  # Typical: 3-4 iterations per successful build
baseline_detection_time="480"   # Typical: 5-10 minutes in CI (seconds)
baseline_error_coverage="0.3"   # Typical: 30% with basic linting

V_instance_baseline=$(echo "scale=3; \
  0.4 * (1 - $baseline_ci_failure_rate) + \
  0.3 * (1 - $baseline_avg_iterations/4) + \
  0.2 * (600/$baseline_detection_time) + \
  0.1 * $baseline_error_coverage" | bc)

echo "Baseline V_instance: $V_instance_baseline"

Expected Baseline: V_instance ≈ 0.4-0.6

Deliverables

  • Error analysis spreadsheet
  • Baseline metrics calculation
  • Problem prioritization matrix

Phase 2: P0 Critical Checks (Iteration 1)

Duration: 2-3 hours
Objective: Implement checks that prevent the most common errors

Step 1: Create P0 Check Scripts

Script Template:

#!/bin/bash
# check-[category].sh - [Purpose]
#
# Part of: Build Quality Gates
# Iteration: P0 (Critical Checks)
# Purpose: [What this check prevents]
# Historical Impact: [X% of historical errors]

set -euo pipefail

# Colors
RED='\033[0;31m'
YELLOW='\033[1;33m'
GREEN='\033[0;32m'
NC='\033[0m'

echo "Checking [category]..."

ERRORS=0

# ============================================================================
# Check [N]: [Specific check name]
# ============================================================================
echo "  [N/total] Checking [specific pattern]..."

# Your check logic here
if [ condition ]; then
    echo -e "${RED}❌ ERROR: [Description]${NC}"
    echo "[Found items]"
    echo ""
    echo "Fix instructions:"
    echo "  1. [Step 1]"
    echo "  2. [Step 2]"
    echo ""
    ((ERRORS++)) || true
fi

# ============================================================================
# Summary
# ============================================================================
if [ $ERRORS -eq 0 ]; then
    echo -e "${GREEN}✅ All [category] checks passed${NC}"
    exit 0
else
    echo -e "${RED}Found $ERRORS error(s). Fix them before committing.${NC}"
    exit 1
fi
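As a concrete instance of the template above, here is a sketch of a P0 check that blocks stray temporary files; the file-name patterns (`debug_`, `tmp_`, `scratch_`, `.tmp`, `.bak`) are assumptions to adapt to your own error history:

```shell
#!/bin/bash
# check-temp-files.sh - block stray debug/scratch files (illustrative example;
# patterns below are assumptions, tune them to your Step 2 error categories)
set -euo pipefail

TEMP_PATTERN='(^|/)(debug_|tmp_|scratch_)|\.(tmp|bak)$'
ERRORS=0

echo "  [1/1] Checking for temporary files in the staged tree..."
# Only inspect files staged for commit, so the check suits a pre-commit hook.
matches=$(git diff --cached --name-only | grep -E "$TEMP_PATTERN" || true)
if [ -n "$matches" ]; then
    echo "❌ ERROR: temporary files staged for commit:"
    echo "$matches"
    echo "Fix: git rm --cached <file>, or add the pattern to .gitignore"
    ((ERRORS++)) || true
fi

if [ $ERRORS -eq 0 ]; then
    echo "✅ No temporary files found"
    exit 0
fi
exit 1
```

Wired into a pre-commit hook, a check like this runs in well under a second and catches the "temporary files" category before CI ever sees it.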
