voice-agents

from sickn33/antigravity-awesome-skills

The Ultimate Collection of 200+ Agentic Skills for Claude Code/Antigravity/Cursor. Battle-tested, high-performance skills for AI agents including official skills from Anthropic and Vercel.

npx skills add https://github.com/sickn33/antigravity-awesome-skills --skill voice-agents

SKILL.md

Voice Agents

You are a voice AI architect who has shipped production voice agents handling millions of calls. You understand the physics of latency: every component adds milliseconds, and the sum determines whether conversations feel natural or awkward.

Your core insight: two architectures exist. Speech-to-speech (S2S) models such as the OpenAI Realtime API preserve emotion and achieve the lowest latency but are less controllable. Pipeline architectures (STT → LLM → TTS) give you control at each step but add latency.
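
As a back-of-the-envelope illustration, a pipeline agent's time-to-first-audio is roughly the sum of its stages. The figures in the sketch below are illustrative assumptions, not vendor benchmarks; substitute your own measurements.

```python
# Rough latency budget for a pipeline voice agent (all numbers are
# illustrative assumptions, not measured vendor figures).
budget_ms = {
    "vad_endpointing": 200,    # waiting to confirm the user stopped speaking
    "stt_final": 150,          # final transcript after end of speech
    "llm_first_token": 300,    # time to first token from the LLM
    "tts_first_audio": 150,    # time to first audio byte from TTS
    "network_and_playout": 100,
}

total = sum(budget_ms.values())
print(f"Estimated time-to-first-audio: {total} ms")

# Rule of thumb: replies that start within ~800 ms feel conversational;
# beyond ~1.5 s the exchange starts to feel awkward.
assert total <= 1500, "Latency budget exceeded -- trim a component"
```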

Capabilities

  • voice-agents
  • speech-to-speech
  • speech-to-text
  • text-to-speech
  • conversational-ai
  • voice-activity-detection
  • turn-taking
  • barge-in-detection
  • voice-interfaces

Patterns

Speech-to-Speech Architecture

Direct audio-to-audio processing for lowest latency
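
A minimal sketch of the S2S pattern against the OpenAI Realtime API over WebSocket. The endpoint, headers, and event names reflect the public docs at the time of writing and should be verified before shipping; play() is a hypothetical audio sink.

```python
# Speech-to-speech sketch: stream microphone PCM up, play model audio back.
# Endpoint, headers, and event names follow OpenAI's Realtime API docs at the
# time of writing -- verify against current documentation before relying on them.
import base64
import json
import os

import websocket  # pip install websocket-client

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"

ws = websocket.create_connection(
    URL,
    header=[
        f"Authorization: Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta: realtime=v1",
    ],
)

# Let the server handle turn-taking with its built-in VAD.
ws.send(json.dumps({
    "type": "session.update",
    "session": {"turn_detection": {"type": "server_vad"}},
}))


def play(pcm16: bytes) -> None:
    """Hypothetical audio sink -- replace with your speaker/telephony output."""
    ...


def send_audio(pcm16: bytes) -> None:
    """Append a chunk of 16-bit PCM microphone audio to the input buffer."""
    ws.send(json.dumps({
        "type": "input_audio_buffer.append",
        "audio": base64.b64encode(pcm16).decode("ascii"),
    }))


while True:
    event = json.loads(ws.recv())
    if event.get("type") == "response.audio.delta":
        play(base64.b64decode(event["delta"]))
    elif event.get("type") == "response.done":
        pass  # agent finished its turn; keep streaming mic audio
```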

Pipeline Architecture

Separate STT → LLM → TTS for maximum control
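
A skeleton of the pipeline pattern. transcribe(), generate(), synthesize(), and play() are hypothetical placeholders for your chosen STT, LLM, TTS, and audio-output vendors; the point is the shape of the loop, not any specific API.

```python
# Pipeline sketch: STT -> LLM -> TTS. All four helpers below are hypothetical
# placeholders -- wire them to your chosen vendors.
from typing import Iterator


def transcribe(audio_turn: bytes) -> str:
    """STT: one user turn of audio in, final transcript out."""
    raise NotImplementedError


def generate(history: list[dict], user_text: str) -> Iterator[str]:
    """LLM: stream the reply sentence by sentence so TTS can start early."""
    raise NotImplementedError


def synthesize(sentence: str) -> bytes:
    """TTS: one sentence of text in, audio out."""
    raise NotImplementedError


def play(audio: bytes) -> None:
    """Audio sink: speaker, WebRTC track, or telephony stream."""
    raise NotImplementedError


def handle_turn(history: list[dict], audio_turn: bytes) -> None:
    user_text = transcribe(audio_turn)
    history.append({"role": "user", "content": user_text})

    reply_parts = []
    # Stream sentence-sized chunks into TTS instead of waiting for the full
    # reply -- this is where most of the pipeline's latency budget is won.
    for sentence in generate(history, user_text):
        play(synthesize(sentence))
        reply_parts.append(sentence)

    history.append({"role": "assistant", "content": " ".join(reply_parts)})
```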

Voice Activity Detection Pattern

Detect when user starts/stops speaking
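
One way to implement this is frame-level VAD with a short hangover, sketched here with the webrtcvad package (which expects 10/20/30 ms frames of 16-bit mono PCM at 8/16/32/48 kHz). The frame size and hangover thresholds are tunable assumptions.

```python
# Voice activity detection sketch using the webrtcvad package.
# Assumes 16 kHz, 16-bit mono PCM in 30 ms frames; thresholds are tunable.
import webrtcvad  # pip install webrtcvad

SAMPLE_RATE = 16000
FRAME_MS = 30
FRAME_BYTES = SAMPLE_RATE * FRAME_MS // 1000 * 2  # 960 bytes per frame

vad = webrtcvad.Vad(2)  # aggressiveness 0 (loose) .. 3 (strict)


def detect_turns(frames, start_frames=3, end_frames=20):
    """Yield 'start'/'end' events given an iterable of 30 ms PCM frames.

    start_frames consecutive voiced frames open a turn; end_frames
    consecutive unvoiced frames (~600 ms here) close it.
    """
    voiced_run = unvoiced_run = 0
    speaking = False
    for frame in frames:
        if vad.is_speech(frame, SAMPLE_RATE):
            voiced_run += 1
            unvoiced_run = 0
            if not speaking and voiced_run >= start_frames:
                speaking = True
                yield "start"
        else:
            unvoiced_run += 1
            voiced_run = 0
            if speaking and unvoiced_run >= end_frames:
                speaking = False
                yield "end"
```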

Anti-Patterns

❌ Ignoring Latency Budget

❌ Silence-Only Turn Detection

❌ Long Responses
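
The Long Responses anti-pattern is usually fixed in the prompt rather than in code. An illustrative sketch of the kind of constraints that help; the wording and limits are assumptions to tune for your own agent.

```python
# Illustrative system-prompt constraints for spoken output; adjust the wording
# and limits to your own agent and domain.
VOICE_SYSTEM_PROMPT = """You are a voice assistant. Your replies are read aloud.
- Answer in one to three short sentences, then stop.
- Never use lists, markdown, code, or URLs; say things in plain spoken words.
- Spell out numbers, dates, and abbreviations the way a person would say them.
- Ask one clarifying question at a time instead of covering every case."""
```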

⚠️ Sharp Edges

  • critical: Measure and budget latency for each component
  • high: Target jitter metrics
  • high: Use semantic VAD
  • high: Implement barge-in detection
  • medium: Constrain response length in prompts
  • medium: Prompt for spoken format
  • medium: Implement noise handling
  • medium: Mitigate STT errors
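
Barge-in detection (high severity above) can be sketched as: keep VAD running while the agent is speaking, and on sustained user speech, stop playback and cancel the in-flight response. is_speech(), stop_playback(), and cancel_response() are hypothetical hooks into your VAD, audio output, and LLM/TTS clients.

```python
# Barge-in sketch: while the agent is speaking, sustained user speech stops
# playback and cancels the in-flight response. All callables passed in are
# hypothetical hooks into your own VAD, audio output, and LLM/TTS clients.
from typing import Callable, Iterable

BARGE_IN_FRAMES = 5  # ~150 ms of continuous user speech at 30 ms frames


def monitor_barge_in(
    frames: Iterable[bytes],
    agent_is_speaking: Callable[[], bool],
    is_speech: Callable[[bytes], bool],
    stop_playback: Callable[[], None],
    cancel_response: Callable[[], None],
) -> None:
    voiced_run = 0
    for frame in frames:
        if not agent_is_speaking():
            voiced_run = 0
            continue
        voiced_run = voiced_run + 1 if is_speech(frame) else 0
        if voiced_run >= BARGE_IN_FRAMES:
            stop_playback()     # cut TTS audio immediately
            cancel_response()   # abort the in-flight LLM/TTS generation
            voiced_run = 0
```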

Related Skills

Works well with: agent-tool-builder, multi-agent-orchestration, llm-architect, backend

Repository Stats

  • Stars: 3.5K
  • Forks: 817
  • License: MIT
  • Updated: Jan 26, 2026