effect-ai-language-model

from front-depiction/claude-setup

Reusable Claude Code configuration for Effect TypeScript projects with specialized agents and skills

npx skills add https://github.com/front-depiction/claude-setup --skill effect-ai-language-model

SKILL.md

Effect AI Language Model

Pattern guide for the LanguageModel service from @effect/ai: type-safe LLM interactions built on Effect's functional patterns.

Import Patterns

CRITICAL: Always use namespace imports:

import * as LanguageModel from "@effect/ai/LanguageModel"
import * as Prompt from "@effect/ai/Prompt"
import * as Response from "@effect/ai/Response"
import * as Toolkit from "@effect/ai/Toolkit"
import * as Tool from "@effect/ai/Tool"
import * as Effect from "effect/Effect"
import * as Stream from "effect/Stream"
import * as Schema from "effect/Schema"

When to Use This Skill

  • Generating text completions from language models
  • Extracting structured data with schema validation
  • Real-time streaming responses for chat interfaces
  • Tool calling and function execution
  • Multi-turn conversations with history
  • Switching between different AI providers

Service Interface

LanguageModel :: Service

-- Core operations
generateText   :: Options → Effect GenerateTextResponse E R
generateObject :: Options → Schema A → Effect (GenerateObjectResponse A) E R
streamText     :: Options → Stream StreamPart E R

-- Service as dependency
LanguageModel ∈ R → Effect.gen(function*() {
  const model = yield* LanguageModel
  const response = yield* model.generateText(options)
})
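
For reference, a minimal runnable sketch of the dependency pattern above, assuming the service tag is exposed as LanguageModel.LanguageModel and using an illustrative summarize helper:

import * as LanguageModel from "@effect/ai/LanguageModel"
import * as Effect from "effect/Effect"

// The LanguageModel requirement appears in R and is satisfied later
// by a concrete provider layer (OpenAI, Anthropic, ...)
const summarize = (text: string) =>
  Effect.gen(function* () {
    const model = yield* LanguageModel.LanguageModel
    const response = yield* model.generateText({
      prompt: `Summarize in one sentence: ${text}`
    })
    return response.text
  })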

generateText Pattern

Basic text generation with optional tool calling:

import * as LanguageModel from "@effect/ai/LanguageModel"
import * as Effect from "effect/Effect"

// Simple text generation
const simple = LanguageModel.generateText({
  prompt: "Explain quantum computing"
})

// With system prompt and conversation history
const withHistory = LanguageModel.generateText({
  prompt: [
    { role: "system", content: "You are a helpful assistant" },
    { role: "user", content: [{ type: "text", text: "Hello!" }] }
  ]
})

// With toolkit for tool calling
const withTools = LanguageModel.generateText({
  prompt: "What's the weather in SF?",
  toolkit: weatherToolkit,
  toolChoice: "auto"  // "none" | "required" | { tool: "name" } | { oneOf: [...] }
})

// Parallel tool call execution
const withConcurrency = LanguageModel.generateText({
  prompt: "Search multiple sources",
  toolkit: searchToolkit,
  concurrency: "unbounded"  // or number for limited parallelism
})

// Disable automatic tool call resolution
const manualTools = LanguageModel.generateText({
  prompt: "Search for X",
  toolkit: searchToolkit,
  disableToolCallResolution: true  // Get tool calls without executing
})

Response Accessors

const response = yield* LanguageModel.generateText({ prompt: "..." })

response.text          // string - concatenated text content
response.toolCalls     // Array<ToolCallParts> - tool invocations
response.toolResults   // Array<ToolResultParts> - tool outputs
response.finishReason  // "stop" | "length" | "tool-calls" | "content-filter" | "unknown"
response.usage         // { inputTokens, outputTokens, totalTokens, reasoningTokens?, cachedInputTokens? }
response.reasoning     // Array<ReasoningPart> - reasoning steps (when model provides extended thinking)
response.reasoningText // string | undefined - concatenated reasoning content
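
For example, a small sketch that reads these accessors inside a program (the prompt and log messages are illustrative):

import * as LanguageModel from "@effect/ai/LanguageModel"
import * as Effect from "effect/Effect"
import * as Console from "effect/Console"

const checkUsage = Effect.gen(function* () {
  const response = yield* LanguageModel.generateText({
    prompt: "Summarize the Effect documentation in one sentence"
  })

  // Inspect why generation stopped and how many tokens were consumed
  if (response.finishReason === "length") {
    yield* Console.log("Response was truncated by the token limit")
  }
  yield* Console.log(`Tokens used: ${response.usage.totalTokens}`)

  return response.text
})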

generateObject Pattern (Structured Output)

Force schema-validated output from the model:

import * as LanguageModel from "@effect/ai/LanguageModel"
import * as Schema from "effect/Schema"
import * as Effect from "effect/Effect"

// Define output schema
const ContactSchema = Schema.Struct({
  name: Schema.String,
  email: Schema.String,
  phone: Schema.optional(Schema.String)
})

// Generate structured output
const extractContact = LanguageModel.generateObject({
  prompt: "Extract: John Doe, john@example.com, 555-1234",
  schema: ContactSchema,
  objectName: "contact"  // Optional, aids model understanding
})

// Usage
const program = Effect.gen(function* () {
  const response = yield* extractContact

  response.value  // { name: "John Doe", email: "john@example.com", phone: "555-1234" }
  response.text   // Raw generated text (JSON)
  response.usage  // Token usage stats

  return response.value
})
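
Extraction can fail when the model output does not match the schema, so the failure channel can be handled like any other Effect error (a sketch; the fallback behavior is illustrative):

import * as Console from "effect/Console"
import * as Effect from "effect/Effect"

// Recover from a failed extraction by logging and returning undefined
const safeExtract = extractContact.pipe(
  Effect.map((response) => response.value),
  Effect.catchAll((error) =>
    Console.log(`Extraction failed: ${String(error)}`).pipe(
      Effect.as(undefined)
    )
  )
)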

Schema-driven ADT extraction

const EventType = Schema.Struct({
  _tag: Schema.Literal("meeting", "deadline", "reminder"),
  title: Schema.String,
  date: Schema.String
})

const extractEvent = LanguageModel.generateObject({
  prompt: "Parse: Team meeting on March 15th",
  schema: EventType
})
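
The extracted value is then narrowed by its tag, so downstream handling can branch exhaustively (a sketch; the formatting logic is illustrative):

const handleEvent = Effect.gen(function* () {
  const response = yield* extractEvent

  // response.value._tag is one of the literals declared in the schema
  switch (response.value._tag) {
    case "meeting":
      return `Meeting: ${response.value.title} on ${response.value.date}`
    case "deadline":
      return `Deadline: ${response.value.title} due ${response.value.date}`
    case "reminder":
      return `Reminder: ${response.value.title} on ${response.value.date}`
  }
})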

streamText Pattern

Real-time streaming text generation:

import * as LanguageModel from "@effect/ai/LanguageModel"
import * as Stream from "effect/Stream"
import * as Effect from "effect/Effect"
import * as Console from "effect/Console"

// Basic streaming
const streamStory = LanguageModel.streamText({
  prompt: "Write a story about space exploration"
})

// Process stream parts as they arrive (assumes "text-delta" parts carry
// incremental text; other part types are ignored here)
const program = streamStory.pipe(
  Stream.runForEach((part) => {
    if (part.type === "text-delta") {
      return Console.log(part.delta)
    }
    return Effect.void
  })
)

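A related sketch accumulates the streamed deltas into one string, again assuming "text-delta" parts carry the incremental text:

// Fold over the stream to collect the full generated text
const collected = streamStory.pipe(
  Stream.runFold("", (acc, part) =>
    part.type === "text-delta" ? acc + part.delta : acc
  )
)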
