fluxwing-screenshot-importer

from jackspace/claudeskillz

ClaudeSkillz: For when you need skills, but lazier

8 stars · 2 forks · Updated Nov 20, 2025
npx skills add https://github.com/jackspace/claudeskillz --skill fluxwing-screenshot-importer

SKILL.md

Fluxwing Screenshot Importer

Import UI screenshots and convert them to the uxscii standard by orchestrating specialized vision agents.

Data Location Rules

READ from (bundled templates - reference only):

  • {SKILL_ROOT}/../uxscii-component-creator/templates/ - 11 component templates (for reference)
  • {SKILL_ROOT}/docs/ - Screenshot processing documentation

WRITE to (project workspace):

  • ./fluxwing/components/ - Extracted components (.uxm + .md)
  • ./fluxwing/screens/ - Screen composition (.uxm + .md + .rendered.md)

NEVER write to skill directories - they are read-only!

Your Task

Import a screenshot of a UI design and automatically generate uxscii components and screens by orchestrating specialized agents:

  1. Vision Coordinator Agent - Spawns 3 parallel vision agents (layout + components + properties)
  2. Component Generator Agents - Generate files in parallel (atomic + composite + screen)

Workflow

Phase 1: Get Screenshot Path

Ask the user for the screenshot path if not provided:

  • "Which screenshot would you like to import?"
  • Validate file exists and is a supported format (PNG, JPG, JPEG, WebP, GIF)
// Example
const screenshotPath = "/path/to/screenshot.png";

Phase 2: Spawn Vision Coordinator Agent

CRITICAL: Spawn the screenshot-vision-coordinator agent to orchestrate parallel vision analysis.

This agent will:

  • Spawn 3 vision agents in parallel (layout discovery + component detection + visual properties)
  • Wait for all agents to complete
  • Merge results into unified component data structure
  • Return JSON with screen metadata, components array, and composition
Task({
  subagent_type: "general-purpose",
  description: "Analyze screenshot with vision analysis",
  prompt: `You are a UI screenshot analyzer extracting component structure for uxscii.

Screenshot path: ${screenshotPath}

Your task:
1. Read the screenshot image file
2. Analyze the UI layout structure (vertical, horizontal, grid, sidebar+main)
3. Detect all UI components (buttons, inputs, navigation, cards, etc.)
4. Extract visual properties (colors, spacing, borders, typography)
5. Identify component hierarchy (atomic vs composite)
6. Merge all findings into a unified data structure
7. Return valid JSON output

CRITICAL detection requirements:
- Do NOT miss navigation elements (check all edges - top, left, right, bottom)
- Do NOT miss small elements (icons, badges, close buttons, status indicators)
- Identify composite components (forms, cards with multiple elements)
- Note spatial relationships between components

Expected output format (valid JSON only, no markdown):
{
  "success": true,
  "screen": {
    "id": "screen-name",
    "type": "dashboard|login|profile|settings",
    "name": "Screen Name",
    "description": "What this screen does",
    "layout": "vertical|horizontal|grid|sidebar-main"
  },
  "components": [
    {
      "id": "component-id",
      "type": "button|input|navigation|etc",
      "name": "Component Name",
      "description": "What it does",
      "visualProperties": {...},
      "isComposite": false
    }
  ],
  "composition": {
    "atomicComponents": ["id1", "id2"],
    "compositeComponents": ["id3"],
    "screenComponents": ["screen-id"]
  }
}

Use your vision capabilities to analyze the screenshot carefully.`
})

Wait for the vision coordinator to complete and return results.

Phase 3: Validate Vision Data

Check the returned data structure:

const visionData = visionCoordinatorResult;

// Required fields
if (!visionData.success) {
  throw new Error(`Vision analysis failed: ${visionData.error}`);
}

if (!visionData.components || visionData.components.length === 0) {
  throw new Error("No components detected in screenshot");
}

// Navigation check (CRITICAL)
const hasNavigation = visionData.components.some(c =>
  c.type === 'navigation' || c.id.includes('nav') || c.id.includes('header')
);

if (visionData.screen.type === 'dashboard' && !hasNavigation) {
  console.warn("⚠️ Dashboard detected but no navigation found - verify all nav elements were detected");
}
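Beyond the checks above, the `composition` block returned in Phase 2 can be cross-checked against the `components` array. A hypothetical extra validation (not part of the skill's own checks), based on the JSON shape shown in Phase 2:

```javascript
// Hypothetical cross-check: every id referenced in composition must exist
// in the components array returned by the vision coordinator.
function checkComposition(visionData) {
  const known = new Set(visionData.components.map((c) => c.id));
  const referenced = [
    ...(visionData.composition.atomicComponents || []),
    ...(visionData.composition.compositeComponents || []),
  ];
  const missing = referenced.filter((id) => !known.has(id));
  if (missing.length > 0) {
    throw new Error(`Composition references unknown components: ${missing.join(", ")}`);
  }
}
```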

Phase 4: Spawn Component Generator Agents (Parallel)

CRITICAL: YOU MUST spawn ALL component generator agents in a SINGLE message with multiple Task tool calls. This is the ONLY way to achieve true parallel execution.

DO THIS: Send ONE message containing ALL Task calls for all components.

DON'T DO THIS: Send separate messages for each component (this runs them sequentially).

For each atomic component, create a Task call in the SAME message:

Task({
  subagent_type: "general-purpose",
  description: "Generate email-input component",
  prompt: `You are a uxscii component generator. Generate component files from vision data.

Component data: {id: 'email-input', type: 'input', visualProperties: {...}}

Your task:
1. Load schema from {SKILL_ROOT}/../uxscii-component-creator/schemas/uxm-component.schema.json
2. Load d

...
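Since the generator prompt above is truncated, here is a hypothetical sketch of how the per-component Task specs could be assembled from the Phase 2 vision data so they can all be dispatched in a single message. The function name `buildGeneratorTasks` and the prompt wording are illustrative:

```javascript
// Hypothetical: build one task spec per atomic component so all of them
// can be issued in ONE message (true parallel execution, per Phase 4).
function buildGeneratorTasks(visionData) {
  return visionData.components
    .filter((c) => !c.isComposite)
    .map((c) => ({
      subagent_type: "general-purpose",
      description: `Generate ${c.id} component`,
      prompt: `You are a uxscii component generator. Generate component files from vision data.\n\nComponent data: ${JSON.stringify(c)}`,
    }));
}
```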

Repository Stats

  • Stars: 8
  • Forks: 2
  • License: MIT License