# 🔥 Firecrawl CLI
Command-line interface for Firecrawl. Scrape, crawl, and extract data from any website directly from your terminal.
## Installation

```bash
npm install -g firecrawl-cli
```

If you are using an AI agent such as Claude Code, you can install the skill with:

```bash
npx skills add firecrawl/cli
```
## Quick Start

Just run a command - the CLI will prompt you to authenticate if needed:

```bash
firecrawl https://example.com
```
## Authentication

On first run, you'll be prompted to authenticate:

```
🔥 firecrawl cli
Turn websites into LLM-ready data

Welcome! To get started, authenticate with your Firecrawl account.

1. Login with browser (recommended)
2. Enter API key manually

Tip: You can also set FIRECRAWL_API_KEY environment variable

Enter choice [1/2]:
```
### Authentication Methods

```bash
# Interactive (prompts automatically when needed)
firecrawl

# Browser login
firecrawl login

# Direct API key
firecrawl login --api-key fc-your-api-key

# Environment variable
export FIRECRAWL_API_KEY=fc-your-api-key

# Per-command API key
firecrawl scrape https://example.com --api-key fc-your-api-key
```
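For scripts and CI jobs, the environment-variable form avoids the interactive prompt. A minimal sketch (the key value, URL, and output path are placeholders):

```bash
# Non-interactive usage, e.g. in a CI step; the CLI reads FIRECRAWL_API_KEY
export FIRECRAWL_API_KEY=fc-your-api-key
firecrawl scrape https://example.com -o page.md
```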
## Commands
### `scrape` - Scrape a single URL
Extract content from any webpage in various formats.
```bash
# Basic usage (outputs markdown)
firecrawl https://example.com
firecrawl scrape https://example.com

# Get raw HTML
firecrawl https://example.com --html
firecrawl https://example.com -H

# Multiple formats (outputs JSON)
firecrawl https://example.com --format markdown,links,images

# Save to file
firecrawl https://example.com -o output.md
firecrawl https://example.com --format json -o data.json --pretty
```
#### Scrape Options

| Option | Description |
|---|---|
| `-f, --format <formats>` | Output format(s), comma-separated |
| `-H, --html` | Shortcut for `--format html` |
| `--only-main-content` | Extract only main content (removes navs, footers, etc.) |
| `--wait-for <ms>` | Wait time before scraping (for JS-rendered content) |
| `--screenshot` | Take a screenshot |
| `--include-tags <tags>` | Only include specific HTML tags |
| `--exclude-tags <tags>` | Exclude specific HTML tags |
| `-o, --output <path>` | Save output to file |
| `--pretty` | Pretty-print JSON output |
| `--timing` | Show request timing info |
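These options can be combined in a single invocation. A sketch using only the flags from the table above (URL, wait time, and output path are placeholders):

```bash
# Main content only, wait 2s for JS rendering, report timing, save to a file
firecrawl https://example.com --only-main-content --wait-for 2000 --timing -o page.md
```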
#### Available Formats

| Format | Description |
|---|---|
| `markdown` | Clean markdown (default) |
| `html` | Cleaned HTML |
| `rawHtml` | Original HTML |
| `links` | All links on the page |
| `screenshot` | Screenshot as base64 |
| `json` | Structured JSON extraction |
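If you need the unmodified markup rather than the cleaned HTML, request the `rawHtml` format. A sketch, assuming a single requested format is written out directly (as with the default markdown output):

```bash
# Save the original, uncleaned HTML of the page
firecrawl https://example.com --format rawHtml -o raw.html
```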
#### Examples

```bash
# Extract only main content as markdown
firecrawl https://blog.example.com --only-main-content

# Wait for JS to render, then scrape
firecrawl https://spa-app.com --wait-for 3000

# Get all links from a page
firecrawl https://example.com --format links

# Screenshot + markdown
firecrawl https://example.com --format markdown --screenshot

# Extract specific elements only
firecrawl https://example.com --include-tags article,main

# Exclude navigation and ads
firecrawl https://example.com --exclude-tags nav,aside,.ad
```
### `crawl` - Crawl an entire website
Crawl multiple pages from a website.
```bash
# Start a crawl (returns job ID)
firecrawl crawl https://example.com

# Wait for crawl to complete
firecrawl crawl https://example.com --wait

# With progress indicator
firecrawl crawl https://example.com --wait --progress

# Check crawl status
firecrawl crawl <job-id>

# Limit pages
firecrawl crawl https://example.com --limit 100 --max-depth 3
```
#### Crawl Options

| Option | Description |
|---|---|
| `--wait` | Wait for crawl to complete |
| `--progress` | Show progress while waiting |
| `--limit <n>` | Maximum pages to crawl |
| `--max-depth <n>` | Maximum crawl depth |
| `--include-paths <paths>` | Only crawl matching paths |
| `--exclude-paths <paths>` | Skip matching paths |
| `--sitemap <mode>` | `include`, `skip`, or `only` |
| `--allow-subdomains` | Include subdomains |
| `--allow-external-links` | Follow external links |
| `--crawl-entire-domain` | |

...
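Crawl options can be combined to scope a crawl to part of a site. A sketch using the flags documented above; the URL, path patterns, and limit are placeholders:

```bash
# Crawl only the /docs section, skip archived pages, and cap the job at 50 pages
firecrawl crawl https://example.com --wait --progress \
  --include-paths /docs --exclude-paths /docs/archive \
  --sitemap include --limit 50
```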