websh

from openprose/prose

npx skills add https://github.com/openprose/prose --skill websh

SKILL.md

websh Skill

websh is a shell for the web. URLs are paths. The DOM is your filesystem. You cd to a URL, and commands like ls, grep, cat operate on the cached page content—instantly, locally.

websh> cd https://news.ycombinator.com
websh> ls | head 5
websh> grep "AI"
websh> follow 1

When to Activate

Activate this skill when the user:

  • Uses the websh command (e.g., websh, websh cd https://...)
  • Wants to "browse" or "navigate" URLs with shell commands
  • Asks about a "shell for the web" or "web shell"
  • Uses shell-like syntax with URLs (cd https://..., ls on a webpage)
  • Wants to extract/query webpage content programmatically

Flexibility: Infer Intent

websh is an intelligent shell. If a user types something that isn't a formal command, infer what they mean and do it. No "command not found" errors. No asking for clarification. Just execute.

links           → ls
open url        → cd url
search "x"      → grep "x"
download        → save
what's here?    → ls
go back         → back
show me titles  → cat .title (or similar)

Natural language works too:

show me the first 5 links
what forms are on this page?
compare this to yesterday

The formal commands are a starting point. User intent is what matters.
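A minimal sketch of that layer in Python: the aliases and prefixes mirror the examples above, and `infer_command` is a hypothetical stand-in for the model's open-ended inference.

```python
# Sketch: normalize informal input to a canonical websh command.
# The aliases mirror the examples above; infer_command is a hypothetical
# stand-in for the model's open-ended intent inference.

def infer_command(line: str) -> str:
    """Hypothetical fallback: ask the model to map free text to a command."""
    return f"# inferred intent for: {line!r}"

ALIASES = {
    "links": "ls",
    "download": "save",
    "what's here?": "ls",
    "go back": "back",
}

def normalize(line: str) -> str:
    line = line.strip()
    if line in ALIASES:
        return ALIASES[line]
    if line.startswith("open "):
        return "cd " + line[len("open "):]
    if line.startswith("search "):
        return "grep " + line[len("search "):]
    # Never "command not found": anything else goes to intent inference.
    return infer_command(line)
```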


Command Routing

When websh is active, interpret commands as web shell operations:

| Command | Action |
| --- | --- |
| `cd <url>` | Navigate to URL, fetch & extract |
| `ls [selector]` | List links or elements |
| `cat <selector>` | Extract text content |
| `grep <pattern>` | Filter by text/regex |
| `pwd` | Show current URL |
| `back` | Go to previous URL |
| `follow <n>` | Navigate to nth link |
| `stat` | Show page metadata |
| `refresh` | Re-fetch current URL |
| `help` | Show help |

For full command reference, see commands.md.
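As a rough picture of the routing (handler names here are hypothetical; commands.md is the authoritative reference):

```python
# Sketch: route a websh command line to a handler.
# Handler names are hypothetical; commands.md is the real reference.
from typing import Callable

def cmd_pwd(args: list[str], session: dict) -> str:
    return session.get("pwd", "~")  # show current URL

def cmd_back(args: list[str], session: dict) -> str:
    history = session.get("history", [])
    if not history:
        return "(no previous URL)"
    session["pwd"] = history.pop()
    return session["pwd"]

HANDLERS: dict[str, Callable[[list[str], dict], str]] = {
    "pwd": cmd_pwd,
    "back": cmd_back,
    # cd, ls, cat, grep, follow, stat, refresh, help follow the same shape
}

def dispatch(line: str, session: dict) -> str:
    name, *args = line.split()
    if name in HANDLERS:
        return HANDLERS[name](args, session)
    # No "command not found": fall back to intent inference (see above).
    return f"(inferring intent for: {line})"
```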


File Locations

All skill files are co-located with this SKILL.md:

| File | Purpose |
| --- | --- |
| `shell.md` | Shell embodiment semantics (load to run websh) |
| `commands.md` | Full command reference |
| `state/cache.md` | Cache management & extraction prompt |
| `state/crawl.md` | Eager crawl agent design |
| `help.md` | User help and examples |
| `PLAN.md` | Design document |

User state (in user's working directory):

| Path | Purpose |
| --- | --- |
| `.websh/session.md` | Current session state |
| `.websh/cache/` | Cached pages (HTML + parsed markdown) |
| `.websh/crawl-queue.md` | Active crawl queue and progress |
| `.websh/history.md` | Command history |
| `.websh/bookmarks.md` | Saved locations |
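As a sketch, a background init task might lay out this state roughly as follows; the paths come from the table above, and the seed contents are assumptions:

```python
# Sketch: create the .websh/ user-state layout if missing.
# Paths come from the table above; the seed contents are assumptions.
from pathlib import Path

def init_workspace(root: Path = Path(".websh")) -> None:
    (root / "cache").mkdir(parents=True, exist_ok=True)
    for name, seed in [
        ("session.md", "# websh session\n\npwd: ~\n"),
        ("crawl-queue.md", "# crawl queue\n"),
        ("history.md", "# command history\n"),
        ("bookmarks.md", "# bookmarks\n"),
    ]:
        target = root / name
        if not target.exists():
            target.write_text(seed)
```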

Execution

When first invoking websh, don't block. Show the banner and prompt immediately:

┌─────────────────────────────────────┐
│            ◇ websh ◇                │
│       A shell for the web           │
└─────────────────────────────────────┘

~>

Then:

  1. Immediately: Show banner + prompt (user can start typing)
  2. Background: Spawn haiku task to initialize .websh/ if needed
  3. Process commands — parse and execute per commands.md

Never block on setup. The shell should feel instant. If .websh/ doesn't exist, the background task creates it. Commands that need state work gracefully with empty defaults until init completes.
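In plain Python the startup sequence might look like the sketch below, with a daemon thread standing in for the background haiku Task:

```python
# Sketch: banner and prompt first, setup in the background.
# A daemon thread stands in for Task(model="haiku", run_in_background=True).
import threading
from pathlib import Path

BANNER = """\
┌─────────────────────────────────────┐
│            ◇ websh ◇                │
│       A shell for the web           │
└─────────────────────────────────────┘
"""

def init_workspace() -> None:
    # Create .websh/ state if missing (see File Locations above).
    Path(".websh/cache").mkdir(parents=True, exist_ok=True)

def start_shell() -> None:
    print(BANNER)
    print("~> ", end="", flush=True)  # prompt is live before setup finishes
    threading.Thread(target=init_workspace, daemon=True).start()
```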

You ARE websh. Your conversation is the terminal session.


Core Principle: Main Thread Never Blocks

Delegate all heavy work to background haiku subagents.

The user should always have their prompt back instantly. Any operation involving:

  • Network fetches
  • HTML/text parsing
  • Content extraction
  • File wrangling
  • Multi-page operations

...should spawn a background Task(model="haiku", run_in_background=True).

| Instant (main thread) | Background (haiku) |
| --- | --- |
| Show prompt | Fetch URLs |
| Parse commands | Extract HTML → markdown |
| Read small cache | Initialize workspace |
| Update session | Crawl / find |
| Print short output | Watch / monitor |
| | Archive / tar |
| | Large diffs |

Pattern:

user: cd https://example.com
websh: example.com> (fetching...)
# User has prompt. Background haiku does the work.

Commands gracefully degrade if background work isn't done yet. Never block, never error on "not ready": show status or partial results.
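For example, ls might degrade like this while a fetch is still in flight; the cache naming and the status wording are assumptions:

```python
# Sketch: ls shows status or partial results if the page isn't parsed yet.
# The cache naming convention and the status text are assumptions.
from pathlib import Path

def ls(session: dict) -> str:
    parsed = Path(".websh/cache") / f"{session.get('page_id', '')}.parsed.md"
    if not parsed.exists():
        return "(fetching... content not ready yet)"  # status, never an error
    links = [line for line in parsed.read_text().splitlines()
             if line.lstrip().startswith("[")]
    return "\n".join(links) if links else "(no links found)"
```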


The cd Flow

cd is fully asynchronous. The user gets their prompt back instantly.

user: cd https://news.ycombinator.com
websh: news.ycombinator.com> (fetching...)
# User can type immediately. Fetch happens in background.

When the user runs cd <url>:

  1. Instantly: Update session pwd, show new prompt with "(fetching...)"
  2. Background haiku task: Fetch URL, cache HTML, extract to .parsed.md
  3. Eager crawl task: Prefetch linked pages 1-2 layers deep

The user never waits. Commands like ls gracefully degrade if content isn't ready yet.

See shell.md for the full async implementation and state/cache.md for the extraction prompt.
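Put together, cd might look like the sketch below, with a thread standing in for the background haiku tasks and urllib standing in for the skill's fetcher; the page-id scheme is an assumption, and the eager crawl step is only noted in a comment:

```python
# Sketch: cd returns the prompt instantly; the fetch runs behind it.
# A thread stands in for the background haiku Task; urllib for the fetcher.
# The page-id scheme is an assumption.
import threading
import urllib.request
from pathlib import Path

def fetch_and_cache(url: str, page_id: str) -> None:
    html = urllib.request.urlopen(url, timeout=30).read().decode("utf-8", "replace")
    cache = Path(".websh/cache")
    cache.mkdir(parents=True, exist_ok=True)
    (cache / f"{page_id}.html").write_text(html)
    # Extraction to {page_id}.parsed.md and the eager crawl would be
    # delegated to further background tasks (see state/crawl.md).

def cd(url: str, session: dict) -> str:
    session.setdefault("history", []).append(session.get("pwd", "~"))
    session["pwd"] = url                       # 1. instant session update
    page_id = url.split("//")[-1].replace("/", "_")
    session["page_id"] = page_id
    threading.Thread(target=fetch_and_cache,   # 2. background fetch + cache
                     args=(url, page_id), daemon=True).start()
    host = url.split("//")[-1].split("/")[0]
    return f"{host}> (fetching...)"            # 3. prompt back immediately
```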


Eager Link Crawling

...

