
Tavily Web Skill


English | 简体中文

๐ŸŒ Powerful web search, extraction, crawling, and research capabilities for Claude Code using Tavily API!

Introduction

Tavily Web Skill is a comprehensive Claude Code skill that provides advanced web interaction capabilities through the Tavily API. It enables intelligent web search, content extraction, site crawling, URL discovery, and structured research tasks.

Key Features

  • 🔍 Web Search: Perform intelligent web searches with customizable parameters
  • 📄 Content Extraction: Extract clean content from specific URLs
  • 🕷️ Web Crawling: Crawl websites to discover and extract content
  • 🗺️ Site Mapping: Discover all URLs within a website
  • 📊 Research Tasks: Create structured research reports on any topic
  • 🎯 Smart Triggers: Automatically activates when web research is needed
  • 🌍 Bilingual Support: Supports both English and Chinese trigger keywords

Quick Start

Set up in 5 minutes

Installation

Option 1: Install via skills CLI (Recommended)

The easiest way to install this skill is using the skills CLI tool:

# Install globally to all detected agents (Claude Code, Cursor, Codex, etc.)
npx skills add -g BenedictKing/tavily-web

# Or install to current project only
npx skills add BenedictKing/tavily-web

The skill will be automatically installed to ~/.claude/skills/tavily-web and loaded by Claude Code.

Option 2: Manual Installation via Git Clone

If you prefer manual installation or want to customize the setup:

1. Clone the Repository

# Clone to Claude Code's skills directory
git clone https://github.com/BenedictKing/tavily-web.git ~/.claude/skills/tavily-web

# Or clone to your preferred location
git clone https://github.com/BenedictKing/tavily-web.git
cd tavily-web

2. Get API Key

Visit tavily.com to register and get your API key.

3. Configure API Key

Create a .env file in the skill directory:

cd ~/.claude/skills/tavily-web   # or wherever you cloned the repository
cp .env.example .env

Edit the .env file and add your API key:

TAVILY_API_KEY=your_actual_api_key_here
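If you prefer not to keep a .env file, you can likely export the variable in your shell instead; this assumes tavily-api.js falls back to the TAVILY_API_KEY environment variable, which this README does not state explicitly:

export TAVILY_API_KEY=your_actual_api_key_here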

4. Test the Script

Verify your configuration:

# Search the web
node ~/.claude/skills/tavily-web/tavily-api.js search "latest AI developments"

# Extract content from a URL
node ~/.claude/skills/tavily-web/tavily-api.js extract "https://example.com"

If you see JSON responses, your setup is successful!
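Because the script prints JSON, you can pipe its output through a tool such as jq to pull out individual fields. The .results[].url path below assumes the script returns Tavily's standard search response shape; treat it as a sketch rather than a documented interface:

node ~/.claude/skills/tavily-web/tavily-api.js search "latest AI developments" --max-results 3 | jq '.results[].url'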

Usage

The skill can be invoked manually, or it activates automatically when web research is needed.

Manual Invocation

Use the skill name directly:

You: /tavily-web search "React 19 new features"

Auto-Trigger

The skill activates automatically when it detects any of these keywords:

Search Queries

  • Chinese: 搜索、查找、网页搜索
  • English: search, find, web search, look up

Content Extraction

  • Chinese: 提取、抓取内容
  • English: extract, fetch content, get content

Web Crawling

  • Chinese: 爬取、遍历网站
  • English: crawl, spider, scrape

Research Tasks

  • Chinese: 研究、调研、分析
  • English: research, investigate, analyze

Available Commands

1. Search (search)

Perform intelligent web searches:

node tavily-api.js search "query" [options]

Options:
  --max-results <n>     Maximum number of results (default: 5)
  --include-domains     Comma-separated domains to include
  --exclude-domains     Comma-separated domains to exclude
  --search-depth        Search depth: basic or advanced

Example:

You: Search for "Next.js 15 middleware examples"
Claude: [Automatically calls Tavily search API]
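The same search can also be run directly from the shell by combining the documented options; the domain values here are placeholders:

node tavily-api.js search "Next.js 15 middleware examples" --max-results 10 --search-depth advanced --include-domains nextjs.org,vercel.com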

2. Extract (extract)

Extract clean content from URLs:

node tavily-api.js extract "url1,url2,url3"

Example:

You: Extract content from https://example.com/article
Claude: [Automatically calls Tavily extract API]
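Multiple URLs go into a single comma-separated argument, for example (placeholder URLs):

node tavily-api.js extract "https://example.com/article,https://example.com/changelog"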

3. Crawl (crawl)

Crawl websites to discover and extract content:

node tavily-api.js crawl "url" [options]

Options:
  --max-pages <n>       Maximum pages to crawl (default: 10)
  --include-patterns    URL patterns to include
  --exclude-patterns    URL patterns to exclude

Example:

You: Crawl https://docs.example.com for documentation
Claude: [Automatically calls Tavily crawl API]
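A direct CLI call might look like the following; the glob-style value for --include-patterns is an assumption, since the README does not document the expected pattern syntax:

node tavily-api.js crawl "https://docs.example.com" --max-pages 20 --include-patterns "/docs/*"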

4. Map (map)

Discover all URLs within a website:

node tavily-api.js map "url" [options]

Options:
  --max-urls <n>        Maximum URLs to discover (default: 100)
  --filter-pattern      Pattern to filter URLs

Example:

You: Map all URLs on https://example.com
Claude: [Automatically calls Tavily map API]
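A direct CLI call might look like this; the value passed to --filter-pattern is illustrative, as the README does not specify the pattern format:

node tavily-api.js map "https://example.com" --max-urls 50 --filter-pattern "/blog/"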

5. Research (research)

Create structured research reports on any topic.
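By analogy with the other commands, the invocation presumably takes the following form; the available options for research are not shown in this excerpt:

node tavily-api.js research "topic" [options]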

...
