llm-application-dev-langchain-agent

from rmyndharis/antigravity-skills

A curated collection of Agent Skills for Google Antigravity

140 stars · 18 forks · Updated Jan 18, 2026
npx skills add https://github.com/rmyndharis/antigravity-skills --skill llm-application-dev-langchain-agent

SKILL.md

LangChain/LangGraph Agent Development Expert

You are an expert LangChain agent developer specializing in production-grade AI systems using LangChain 0.1+ and LangGraph.

Use this skill when

  • Working on LangChain/LangGraph agent development tasks or workflows
  • Needing guidance, best practices, or checklists for LangChain or LangGraph agents

Do not use this skill when

  • The task is unrelated to LangChain/LangGraph agent development
  • You need a different domain or tool outside this scope

Instructions

  • Clarify goals, constraints, and required inputs.
  • Apply relevant best practices and validate outcomes.
  • Provide actionable steps and verification.
  • If detailed examples are required, open resources/implementation-playbook.md.

Context

Build sophisticated AI agent system for: $ARGUMENTS

Core Requirements

  • Use latest LangChain 0.1+ and LangGraph APIs
  • Implement async patterns throughout
  • Include comprehensive error handling and fallbacks
  • Integrate LangSmith for observability
  • Design for scalability and production deployment
  • Implement security best practices
  • Optimize for cost efficiency
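The error-handling and fallback requirement can be sketched framework-free. In the sketch below, the provider callables are hypothetical stand-ins for real LLM clients; the pattern is retry-with-backoff per provider, then fall back to the next:

```python
import asyncio

async def invoke_with_fallback(prompt, providers, retries=2, timeout=5.0):
    """Try each provider in order; retry transient failures with
    exponential backoff before falling back. Raises the last error
    if every provider fails."""
    last_error = None
    for call in providers:
        for attempt in range(retries):
            try:
                return await asyncio.wait_for(call(prompt), timeout=timeout)
            except Exception as e:  # in production, catch specific error types
                last_error = e
                await asyncio.sleep(0.01 * 2 ** attempt)  # exponential backoff
    raise last_error

# Usage: a primary that always fails, and a working fallback.
async def primary(prompt):
    raise RuntimeError("rate limited")

async def fallback(prompt):
    return f"answer to: {prompt}"

result = asyncio.run(invoke_with_fallback("hi", [primary, fallback]))
```

With LangChain itself, the equivalent is typically `runnable.with_retry()` combined with `.with_fallbacks([...])`; the sketch shows the control flow those helpers encapsulate.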

Essential Architecture

LangGraph State Management

from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import create_react_agent
from langchain_anthropic import ChatAnthropic

class AgentState(TypedDict):
    messages: Annotated[list, "conversation history"]
    context: Annotated[dict, "retrieved context"]

Model & Embeddings

  • Primary LLM: Claude Sonnet 4.5 (claude-sonnet-4-5)
  • Embeddings: Voyage AI (voyage-3-large) - officially recommended by Anthropic for Claude
  • Specialized: voyage-code-3 (code), voyage-finance-2 (finance), voyage-law-2 (legal)

Agent Types

  1. ReAct Agents: Multi-step reasoning with tool usage

    • Use create_react_agent(llm, tools, state_modifier)
    • Best for general-purpose tasks
  2. Plan-and-Execute: Complex tasks requiring upfront planning

    • Separate planning and execution nodes
    • Track progress through state
  3. Multi-Agent Orchestration: Specialized agents with supervisor routing

    • Use Command[Literal["agent1", "agent2", END]] for routing
    • Supervisor decides next agent based on context
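The supervisor pattern can be sketched without the framework. In LangGraph the supervisor is a node returning `Command[Literal[...]]`; here it is a plain function returning the next agent's name until it returns END (agent names and state fields are illustrative):

```python
END = "__end__"

def research_agent(state):
    state["notes"].append("researched: " + state["task"])
    return state

def writer_agent(state):
    state["draft"] = "report on " + state["task"]
    return state

def supervisor(state):
    # Route based on what the state still lacks.
    if not state["notes"]:
        return "research"
    if state["draft"] is None:
        return "writer"
    return END

def run(task):
    agents = {"research": research_agent, "writer": writer_agent}
    state = {"task": task, "notes": [], "draft": None}
    while (nxt := supervisor(state)) != END:
        state = agents[nxt](state)
    return state

final = run("quarterly sales")
```

The key design point survives the translation to LangGraph: routing decisions live in one node, and each specialist agent only mutates its slice of the shared state.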

Memory Systems

  • Short-term: ConversationTokenBufferMemory (token-based windowing)
  • Summarization: ConversationSummaryMemory (compress long histories)
  • Entity Tracking: ConversationEntityMemory (track people, places, facts)
  • Vector Memory: VectorStoreRetrieverMemory with semantic search
  • Hybrid: Combine multiple memory types for comprehensive context
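Token-based windowing, the idea behind `ConversationTokenBufferMemory`, reduces to keeping the newest messages that fit a token budget. A minimal sketch; the whitespace counter is a naive stand-in for a real tokenizer:

```python
def trim_to_token_budget(messages, max_tokens,
                         count_tokens=lambda m: len(m.split())):
    """Keep the most recent messages whose combined token count fits
    the budget, dropping the oldest first."""
    kept, total = [], 0
    for message in reversed(messages):  # newest first
        tokens = count_tokens(message)
        if total + tokens > max_tokens:
            break
        kept.append(message)
        total += tokens
    return list(reversed(kept))  # restore chronological order

history = ["a b c d", "e f", "g h i", "j"]
window = trim_to_token_budget(history, max_tokens=6)
```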

RAG Pipeline

from langchain_voyageai import VoyageAIEmbeddings
from langchain_pinecone import PineconeVectorStore

# Setup embeddings (voyage-3-large recommended for Claude)
embeddings = VoyageAIEmbeddings(model="voyage-3-large")

# Vector store (`index` is an existing pinecone.Index handle)
vectorstore = PineconeVectorStore(
    index=index,
    embedding=embeddings
)

# Retriever (MMR for diverse results; true dense+sparse hybrid search
# with an alpha weight uses PineconeHybridSearchRetriever instead)
base_retriever = vectorstore.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 20}
)

Advanced RAG Patterns

  • HyDE: Generate hypothetical documents for better retrieval
  • RAG Fusion: Multiple query perspectives for comprehensive results
  • Reranking: Use Cohere Rerank for relevance optimization
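The fusion step behind RAG Fusion is usually reciprocal rank fusion: each document is scored by summing 1/(k + rank) over the ranked lists produced by the query rewrites. A minimal sketch (document ids are illustrative):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked result lists into one by reciprocal rank
    fusion. `rankings` is a list of lists of document ids; documents
    ranked highly in many lists score highest."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Three query rewrites each retrieved a different ordering:
fused = reciprocal_rank_fusion([
    ["d1", "d2", "d3"],
    ["d2", "d1", "d4"],
    ["d2", "d3", "d1"],
])
```

The constant k=60 is the value commonly used in the RRF literature; it damps the influence of top-1 placement so broad agreement across lists beats a single high rank.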

Tools & Integration

from langchain_core.tools import StructuredTool
from pydantic import BaseModel, Field

class ToolInput(BaseModel):
    query: str = Field(description="Query to process")

async def tool_function(query: str) -> str:
    # external_call is a placeholder for your async API client
    try:
        result = await external_call(query)
        return result
    except Exception as e:
        return f"Error: {str(e)}"

tool = StructuredTool.from_function(
    coroutine=tool_function,  # async-only tool; pass func= for a sync variant
    name="tool_name",
    description="What this tool does",
    args_schema=ToolInput
)

Production Deployment

FastAPI Server with Streaming

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.post("/agent/invoke")
async def invoke_agent(request: AgentRequest):  # AgentRequest: your Pydantic request model
    if request.stream:
        return StreamingResponse(
            stream_response(request),
            media_type="text/event-stream"
        )
    return await agent.ainvoke({"messages": [...]})
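A `text/event-stream` body is a sequence of Server-Sent Events frames. A minimal formatter for such frames, of the kind a `stream_response` generator would yield (names are illustrative):

```python
def format_sse(data, event=None):
    """Frame a payload as a Server-Sent Events message: optional
    'event:' line, a 'data:' line, and a blank-line terminator."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    lines.append(f"data: {data}")
    return "\n".join(lines) + "\n\n"

chunk = format_sse('{"token": "Hello"}', event="token")
```

In practice each LLM token (from `agent.astream_events(...)`) is serialized to JSON and framed this way before being yielded to `StreamingResponse`.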

Monitoring & Observability

  • LangSmith: Trace all agent executions
  • Prometheus: Track metrics (requests, latency, errors)
  • Structured Logging: Use structlog for consistent logs
  • Health Checks: Validate LLM, tools, memory, and external services

Optimization Strategies

  • Caching: Redis for response caching with TTL
  • Connection Pooling: Reuse vector DB connections
  • Load Balancing: Multiple agent workers with round-robin routing
  • Timeout Handling: Set timeouts on all async operations

...
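The Redis caching strategy above reduces to mapping a key to a value plus an expiry time. A minimal in-process sketch of the TTL semantics (not a Redis client):

```python
import time

class TTLCache:
    """Minimal in-process stand-in for response caching with TTL:
    entries expire `ttl_seconds` after being set."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # evict lazily on read
            return None
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("prompt-hash", "cached answer")
hit = cache.get("prompt-hash")
time.sleep(0.06)
miss = cache.get("prompt-hash")
```

With Redis the same behavior is a single `SET key value EX ttl`; keying on a hash of the normalized prompt keeps cache hits deterministic across workers.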


Repository Stats

Stars: 140
Forks: 18
License: MIT License