npx skills add rawwerks/rlm-cli
rlm-cli
CLI wrapper for rlm with directory-as-context, JSON-first output, and self-documenting commands.
Upstream RLM: https://github.com/alexzhang13/rlm
Install
One-liner (recommended)
curl -sSL https://raw.githubusercontent.com/rawwerks/rlm-cli/master/install.sh | bash
This clones the repo to ~/.local/share/rlm-cli and symlinks rlm to ~/.local/bin/.
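If the rlm command isn't found after installing, the usual cause is that ~/.local/bin is not on your PATH. A quick check (plain shell, nothing rlm-specific):

```shell
# Put the install location on PATH for this session, then see whether
# the rlm symlink resolves; print a hint if it does not.
export PATH="$HOME/.local/bin:$PATH"
command -v rlm || echo "rlm not found -- re-run the installer or check ~/.local/bin"
```

Add the export line to your shell profile to make it permanent.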
To uninstall:
curl -sSL https://raw.githubusercontent.com/rawwerks/rlm-cli/master/uninstall.sh | bash
uvx (no install)
Run directly without installing:
uvx --from git+https://github.com/rawwerks/rlm-cli.git rlm --help
pipx
pipx install git+https://github.com/rawwerks/rlm-cli.git
Development install
git clone --recurse-submodules https://github.com/rawwerks/rlm-cli.git
cd rlm-cli
uv venv
uv pip install -e .
Claude Code Plugin
This repo includes a Claude Code plugin with an rlm skill. The skill teaches Claude how to use the rlm CLI for code analysis, diff reviews, and codebase exploration.
Installation
Claude Code (Interactive)
/plugin marketplace add rawwerks/rlm-cli
/plugin install rlm@rlm-cli
Claude CLI
claude plugin marketplace add rawwerks/rlm-cli
claude plugin install rlm@rlm-cli
What the skill provides
The /rlm skill gives Claude knowledge of:
- All rlm commands (ask, complete, search, index, doctor)
- Input types (files, directories, URLs, stdin, literal text)
- Common workflows (diff review, codebase analysis, search + analyze)
- Configuration and environment variables
- Exit codes for error handling
Once installed, Claude can use rlm to analyze code, review diffs, and explore codebases when you ask it to.
Authentication
Authentication depends on the backend you choose:
- openrouter: OPENROUTER_API_KEY
- openai: OPENAI_API_KEY
- anthropic: ANTHROPIC_API_KEY
Export the appropriate key in your shell environment, for example:
export OPENROUTER_API_KEY=sk-or-...
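In scripts you can fail fast when the key is missing using ordinary shell parameter expansion (not an rlm feature). The placeholder value below exists only so the snippet runs standalone:

```shell
# Placeholder so this sketch is self-contained; in real use the key
# comes from your environment or secrets manager.
export OPENROUTER_API_KEY="${OPENROUTER_API_KEY:-sk-or-placeholder}"
# Abort with a clear message if the variable is unset or empty.
: "${OPENROUTER_API_KEY:?OPENROUTER_API_KEY is not set}"
echo "OPENROUTER_API_KEY is set"
```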
Usage
Ask about a repo
rlm ask . -q "Summarize this repo" --json
Ask about a URL (auto-Markdown)
rlm ask https://www.anthropic.com/constitution -q "Summarize this page" --json
Same with uvx and OpenRouter:
uvx --from git+https://github.com/rawwerks/rlm-cli.git rlm ask https://www.anthropic.com/constitution -q "Summarize Claude's constitution" --backend openrouter --model google/gemini-3-flash-preview --json
Ask about a file
rlm ask src/rlm_cli/cli.py -q "Explain the CLI flow" --json
Use stdin as context
git diff | rlm ask - -q "Review this diff" --json
No context, just a completion
rlm complete "Write a commit message" --json
OpenRouter quickstart
rlm complete "Say hello" --backend openrouter --model z-ai/glm-4.7:turbo --json
Options
- --json outputs JSON only on stdout.
- --output-format text|json sets output format.
- --backend, --model, --environment control the RLM backend.
- --max-iterations N sets max REPL iterations (default: 30).
- --max-depth N enables recursive RLM calls (default: 1, no recursion).
- --max-budget N.NN limits spending in USD (requires a cost-tracking backend like OpenRouter).
- --backend-arg/--env-arg/--rlm-arg KEY=VALUE pass extra kwargs.
- --backend-json/--env-json/--rlm-json @file.json merge JSON kwargs.
- --literal treats inputs as literal text; --path forces filesystem paths.
- --markitdown/--no-markitdown toggles URL and non-text conversion to Markdown.
- --verbose or --debug enables verbose backend logging.
- --inject-file FILE executes Python code between iterations (update variables mid-run).
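For example, --backend-json can merge extra backend kwargs from a file. The "temperature" key below is an assumed backend kwarg, shown only to illustrate the file shape; valid keys depend on your backend:

```shell
# Write a kwargs file for --backend-json (keys here are illustrative).
cat > backend.json <<'EOF'
{"temperature": 0.2}
EOF
# Then pass it through (requires a configured backend key), e.g.:
#   rlm complete "Say hello" --backend openrouter --backend-json @backend.json --json
cat backend.json
```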
Early Exit and Cancellation
Ctrl+C (Reply Now)
Pressing Ctrl+C during execution returns the best partial answer as success (exit code 0) instead of raising an error. This is useful when you want to stop waiting but keep what the LLM has produced so far.
rlm ask . -q "Analyze in detail" --max-iterations 20
# Press Ctrl+C after a few iterations
# Output: partial answer with exit_code=0, early_exit=true
In JSON mode, the result includes early_exit and early_exit_reason fields:
{"ok": true, "result": {"response": "...", "early_exit": true, "early_exit_reason": "user_cancelled"}}
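In a script you can branch on those fields. Here the JSON envelope from above is hardcoded as a stand-in for real rlm output, and parsed with python3 so no extra tools are required:

```shell
# Sample --json output (hardcoded stand-in for a real rlm run).
result='{"ok": true, "result": {"response": "...", "early_exit": true, "early_exit_reason": "user_cancelled"}}'
# Pull out the early-exit fields from the envelope.
printf '%s' "$result" | python3 -c '
import json, sys
data = json.load(sys.stdin)
if data["ok"] and data["result"].get("early_exit"):
    print("partial answer:", data["result"]["early_exit_reason"])
'
```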
SIGUSR1 (Programmatic Early Exit)
Send SIGUSR1 to request graceful early exit without using Ctrl+C:
# In another terminal
kill -SIGUSR1 <rlm_pid>
This is useful for programmatic control over long-running RLM tasks.
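The usual pattern is to background the run, record its PID, and signal it later. rlm handles SIGUSR1 itself; the subshell with a trap below only stands in for it so the sketch is self-contained and finishes quickly:

```shell
# Stand-in for a long rlm run: trap USR1 and exit cleanly when signaled.
(trap 'kill "$!" 2>/dev/null; echo "early exit requested"; exit 0' USR1; sleep 30 & wait $!) &
RLM_PID=$!
sleep 1                      # give the trap time to install
kill -SIGUSR1 "$RLM_PID"     # request graceful early exit
wait "$RLM_PID"              # collect the (successful) exit status
```

With a real run, replace the subshell with your rlm command, e.g. `rlm ask . -q "Analyze in detail" --json &`.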
--inject-file (Update Variables Mid-Run)
The --inject-file option executes Python code between iterations, allowing you to update REPL variables while the RLM is running.
# Create inject file
echo 'focus = "authentication"' > inject.py
# Start RLM with inject file
rlm ask . -q "Analyze based on the 'focus' variable" --inject-file inject.py
# In another terminal, update inject.py
...