# gpu-cli

Run ML training, LLM inference, and ComfyUI workflows on remote NVIDIA GPUs (A100, H100, RTX 4090). Cloud GPU compute with smart file sync: prefix any command with `gpu` to run it remotely.
Stars: 1,933 · Forks: 367 · Updated: March 4, 2026
## SKILL.md frontmatter

```yaml
name: gpu-cli
description: Safely run local `gpu` commands via a guarded wrapper (`runner.sh`) with preflight checks and budget/time caps.
argument-hint: runner.sh gpu [subcommand] [flags]
allowed-tools: Bash(runner.sh*), Read
```
## GPU CLI Skill (Stable)
Use this skill to run the local `gpu` binary from your agent. It only allows invoking the bundled `runner.sh` (which internally calls `gpu`) and read-only file access.
### What it does

- Runs `gpu` commands you specify (e.g., `runner.sh gpu status --json`, `runner.sh gpu run python train.py`).
- Recommends a preflight: `gpu doctor --json`, then `gpu status --json`.
- Streams results back to chat; use `--json` for structured output.
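The `--json` flag above exists so an agent can parse results instead of scraping text. As a minimal sketch, assuming the status output includes a top-level `state` field (hypothetical; the real schema may differ), an agent-side summary step could look like this. In practice you would capture the stdout of `runner.sh gpu status --json` rather than use the inline sample:

```shell
#!/bin/sh
# Sketch of summarizing `runner.sh gpu status --json` output.
# The JSON shape below is hypothetical; the real schema may differ.
sample='{"pod":"a100-1","state":"running","cost_per_hour":1.89}'

# Crude field extraction with POSIX parameter expansion (jq would be
# cleaner, but this sketch assumes no extra tools are installed).
state=${sample#*\"state\":\"}   # drop everything up to the state value
state=${state%%\"*}             # drop everything after it

echo "pod state: $state"        # prints: pod state: running
```

A real agent loop would run the command, extract the field, and decide whether a pod needs starting before submitting work.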
### Safety & scope

- Allowed tools: `Bash(runner.sh*)`, `Read`. No network access is requested by the skill; `gpu` handles its own networking.
- Avoid chaining or redirection; provide a single `runner.sh gpu …` command.
- You pay your provider directly; this may start paid pods.
### Quick prompts

- "Run `runner.sh gpu status --json` and summarize pod state."
- "Run `runner.sh gpu doctor --json` and summarize failures."
- See `templates/prompts.md` for more examples.
### Security

- Input sanitization: a character blocklist (`;` `&` `|` `(` `)` `>` `<` `$` `{` `}` plus newlines) combined with a subcommand allowlist. Commands are executed via direct `gpu` binary invocation, with no shell re-evaluation (`bash -c`/`eval`).
- See `SECURITY.md` for the full threat model, permission rationale, and version history.
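The blocklist-plus-allowlist check above can be sketched as follows. This is a minimal illustration, not the real `runner.sh` internals: the function names (`is_safe_arg`, `is_allowed_subcommand`) and the allowlist contents are assumptions.

```shell
#!/bin/sh
# Hypothetical sketch of the sanitization described above; the real
# runner.sh may differ. Names and the allowlist are illustrative.

blocklist=';&|()<>${}'   # characters rejected in any argument
nl='
'                        # literal newline, also rejected

is_safe_arg() {
  case $1 in
    *["$blocklist"]*|*"$nl"*) return 1 ;;  # contains a blocklisted char
  esac
  return 0
}

is_allowed_subcommand() {
  case $1 in
    status|doctor|run|stop) return 0 ;;    # illustrative allowlist
    *) return 1 ;;
  esac
}

is_safe_arg 'train.py' && echo "accepted: train.py"
is_safe_arg 'run; rm -rf /' || echo "rejected: injection attempt"
is_allowed_subcommand doctor && echo "allowed: doctor"
```

Because the validated arguments are then passed straight to the `gpu` binary (no `bash -c` or `eval`), a blocklisted character that slipped through would still never be re-interpreted by a shell.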
### Notes

- For image/video/LLM work, ask the agent to include the appropriate flags (e.g., `--gpu-type "RTX 4090"`, `-p 8000:8000`, or `--rebuild`).