ai-wrapper-product

from sickn33/antigravity-awesome-skills

The Ultimate Collection of 200+ Agentic Skills for Claude Code/Antigravity/Cursor. Battle-tested, high-performance skills for AI agents, including official skills from Anthropic and Vercel.

npx skills add https://github.com/sickn33/antigravity-awesome-skills --skill ai-wrapper-product

SKILL.md

# AI Wrapper Product

Role: AI Product Architect

You know AI wrappers get a bad rap, but the good ones solve real problems. You build products where AI is the engine, not the gimmick. You understand prompt engineering is product development. You balance costs with user experience. You create AI products people actually pay for and use daily.

## Capabilities

  • AI product architecture
  • Prompt engineering for products
  • API cost management
  • AI usage metering
  • Model selection
  • AI UX patterns
  • Output quality control
  • AI product differentiation

## Patterns

### AI Product Architecture

Building products around AI APIs

**When to use**: When designing an AI-powered product

## AI Product Architecture

### The Wrapper Stack

User Input → Input Validation + Sanitization → Prompt Template + Context → AI API (OpenAI/Anthropic/etc.) → Output Parsing + Validation → User-Friendly Response


### Basic Implementation
```javascript
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();

async function generateContent(userInput, context) {
  // 1. Validate input
  if (!userInput || userInput.length > 5000) {
    throw new Error('Invalid input');
  }

  // 2. Build prompt
  const systemPrompt = `You are a ${context.role}.
    Always respond in ${context.format}.
    Tone: ${context.tone}`;

  // 3. Call API
  const response = await anthropic.messages.create({
    model: 'claude-3-haiku-20240307',
    max_tokens: 1000,
    system: systemPrompt,
    messages: [{
      role: 'user',
      content: userInput
    }]
  });

  // 4. Parse and validate output
  const output = response.content[0].text;
  return parseOutput(output);
}
```

### Model Selection

| Model | Cost | Speed | Quality | Use Case |
| --- | --- | --- | --- | --- |
| GPT-4o | $$$ | Fast | Best | Complex tasks |
| GPT-4o-mini | $ | Fastest | Good | Most tasks |
| Claude 3.5 Sonnet | $$ | Fast | Excellent | Balanced |
| Claude 3 Haiku | $ | Fastest | Good | High volume |
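
A small routing helper can encode this table in code. The tier names, thresholds, and model IDs below are illustrative assumptions, not part of the skill:

```javascript
// Hypothetical model router: map task complexity to a model tier.
// Model IDs are assumptions for illustration; check your provider's current IDs.
const MODEL_TIERS = {
  cheap: 'claude-3-haiku-20240307',      // high volume, simple tasks
  balanced: 'claude-3-5-sonnet-latest',  // balanced cost/quality
  premium: 'gpt-4o',                     // complex reasoning (via the OpenAI SDK)
};

function pickModel({ complexity = 'low', latencySensitive = false } = {}) {
  if (complexity === 'high' && !latencySensitive) return MODEL_TIERS.premium;
  if (complexity === 'medium') return MODEL_TIERS.balanced;
  return MODEL_TIERS.cheap;
}

// Example: route a short summarization request to the cheap tier
const model = pickModel({ complexity: 'low', latencySensitive: true });
```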

### Prompt Engineering for Products

Production-grade prompt design

**When to use**: When building AI product prompts

## Prompt Engineering for Products

### Prompt Template Pattern
```javascript
const promptTemplates = {
  emailWriter: {
    system: `You are an expert email writer.
      Write professional, concise emails.
      Match the requested tone.
      Never include placeholder text.`,
    user: (input) => `Write an email:
      Purpose: ${input.purpose}
      Recipient: ${input.recipient}
      Tone: ${input.tone}
      Key points: ${input.points.join(', ')}
      Length: ${input.length} sentences`,
  },
};
```
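
In practice a template like this is rendered and passed straight into the messages call from the Basic Implementation above; the input values here are hypothetical:

```javascript
// Illustrative usage of the emailWriter template (all field values are made up)
const input = {
  purpose: 'follow up after a product demo',
  recipient: 'prospective customer',
  tone: 'friendly but professional',
  points: ['thank them for their time', 'share the pricing page', 'propose next steps'],
  length: 5,
};

const response = await anthropic.messages.create({
  model: 'claude-3-haiku-20240307',
  max_tokens: 500,
  system: promptTemplates.emailWriter.system,
  messages: [{ role: 'user', content: promptTemplates.emailWriter.user(input) }],
});
```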
### Output Control

```javascript
// Force structured output
const systemPrompt = `
  Always respond with valid JSON in this format:
  {
    "title": "string",
    "content": "string",
    "suggestions": ["string"]
  }
  Never include any text outside the JSON.
`;

// Parse with fallback
function parseAIOutput(text) {
  try {
    return JSON.parse(text);
  } catch {
    // Fallback: extract JSON from response
    const match = text.match(/\{[\s\S]*\}/);
    if (match) return JSON.parse(match[0]);
    throw new Error('Invalid AI output');
  }
}
```

### Quality Control

| Technique | Purpose |
| --- | --- |
| Examples in prompt | Guide output style |
| Output format spec | Consistent structure |
| Validation | Catch malformed responses |
| Retry logic | Handle failures |
| Fallback models | Reliability |
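
Retry logic and fallback models can be combined in one wrapper around the API call. The model order, retry count, and backoff below are illustrative assumptions:

```javascript
// Sketch: try the primary model with retries, then fall back to a cheaper model.
async function callWithFallback(
  params,
  models = ['claude-3-5-sonnet-latest', 'claude-3-haiku-20240307'] // assumed order
) {
  let lastError;
  for (const model of models) {
    for (let attempt = 0; attempt < 2; attempt++) {
      try {
        const response = await anthropic.messages.create({ ...params, model });
        return parseAIOutput(response.content[0].text); // validate before returning
      } catch (err) {
        lastError = err;
        await new Promise((r) => setTimeout(r, 500 * (attempt + 1))); // simple backoff
      }
    }
  }
  throw lastError;
}
```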

### Cost Management

Controlling AI API costs

**When to use**: When building profitable AI products

## AI Cost Management

### Token Economics
```javascript
// Track usage
async function callWithCostTracking(userId, params) {
  const response = await anthropic.messages.create(params);

  // Log usage
  await db.usage.create({
    userId,
    inputTokens: response.usage.input_tokens,
    outputTokens: response.usage.output_tokens,
    cost: calculateCost(response.usage),
    model: params.model,
  });

  return response;
}

function calculateCost(usage) {
  const rates = {
    'claude-3-haiku': { input: 0.25, output: 1.25 }, // per 1M tokens
  };
  const rate = rates['claude-3-haiku'];
  return (usage.input_tokens * rate.input +
          usage.output_tokens * rate.output) / 1_000_000;
}
```

### Cost Reduction Strategies

| Strategy | Savings |
| --- | --- |
| Use cheaper models | 10-50x |
| Limit output tokens | Variable |
| Cache common queries | High |
| Batch similar requests | Medium |
| Truncate input | Variable |
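
Caching common queries is often the highest-leverage row in that table. A minimal sketch, assuming responses can be reused verbatim and keyed on a hash of the request (a production version would likely use Redis with a TTL):

```javascript
import crypto from 'node:crypto';

// In-memory cache keyed by a hash of model + system prompt + messages.
const cache = new Map();

async function cachedCompletion(params) {
  const key = crypto
    .createHash('sha256')
    .update(JSON.stringify({
      model: params.model,
      system: params.system,
      messages: params.messages,
    }))
    .digest('hex');

  if (cache.has(key)) return cache.get(key); // cache hit: zero API cost

  const response = await anthropic.messages.create(params);
  cache.set(key, response);
  return response;
}
```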

### Usage Limits

```javascript
async function checkUsageLimits(userId) {
  const usage = await db.usage.sum({
    where: {
      userId,
      createdAt: { gte: startOfMonth() }
    }
  });

  const limits = await getUserLimits(userId);
  if (usage.cost >= limits.monthlyCost) {
    throw new Error('Monthly limit reached');
  }
  return true;
}
```
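
The check runs before the model call so over-quota users never incur API cost. A minimal sketch of wiring it into an Express-style route, assuming the generateContent example above and an authenticated req.user:

```javascript
// Hypothetical route: enforce the limit, then call the model.
app.post('/api/generate', async (req, res) => {
  try {
    await checkUsageLimits(req.user.id);  // throws if over quota
    const result = await generateContent(req.body.input, req.body.context);
    res.json({ result });
  } catch (err) {
    const status = err.message === 'Monthly limit reached' ? 429 : 400;
    res.status(status).json({ error: err.message });
  }
});
```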
## Anti-Patterns

### ❌ Thin Wrapper Syndrome

**Why bad**: No differentiation

...
