local-first-llm
// Routes LLM requests to a local model (Ollama, LM Studio, llamafile) before falling back to cloud APIs. Tracks token savings and cost avoidance in a persistent dashboard. Use when: (1) user asks to run a task with a local model first, (2) user wants to reduce cloud API costs or keep requests private,
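The local-first routing idea in the description can be sketched as: probe a local model server, and route to it when it answers, otherwise fall back to a cloud backend. A minimal sketch, assuming the default Ollama endpoint on `localhost:11434` — the URL, the probe timeout, and the `route` helper are illustrative assumptions, not taken from this skill:

```python
# Hypothetical sketch of local-first routing with cloud fallback.
# The endpoint URL below is Ollama's default; adjust for LM Studio / llamafile.
import urllib.error
import urllib.request

LOCAL_URL = "http://localhost:11434/api/generate"  # assumed Ollama endpoint

def local_available(base: str = "http://localhost:11434/", timeout: float = 0.5) -> bool:
    """Probe the local server; any connection error means we fall back."""
    try:
        urllib.request.urlopen(base, timeout=timeout)
        return True
    except (urllib.error.URLError, OSError):
        return False

def route(prompt: str, probe=local_available) -> dict:
    """Return a routing decision: local model first, cloud as fallback."""
    if probe():
        return {"backend": "local", "url": LOCAL_URL, "prompt": prompt}
    return {"backend": "cloud", "prompt": prompt}
```

The probe is injectable so the decision logic can be tested without a running server; the same hook is where a real implementation would also record tokens served locally for the cost-savings dashboard.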
Stars: 1,933
Forks: 367
Updated: March 4, 2026
SKILL.md
This skill has no public SKILL.md file.