
prompt-assemble

// Token-safe prompt assembly with memory orchestration. Use for any agent that needs to construct LLM prompts with memory retrieval. Guarantees no API failure due to token overflow. Implements two-phase context construction, memory safety valve, and hard limits on memory injection.
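Since the skill's SKILL.md is not public, here is a minimal sketch of what two-phase construction with a memory safety valve could look like. All names (`assemble_prompt`, `memory_cap`, the ~4-chars-per-token heuristic) are assumptions for illustration, not the skill's actual API:

```python
# Hypothetical sketch of token-safe prompt assembly; not the skill's real code.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token. A real implementation
    # would use the target model's tokenizer.
    return max(1, len(text) // 4)

def assemble_prompt(system: str, query: str, memories: list[str],
                    context_limit: int = 8000,
                    memory_cap: int = 2000) -> str:
    # Phase 1: reserve budget for the non-negotiable parts of the prompt.
    fixed = estimate_tokens(system) + estimate_tokens(query)
    # Hard limit on memory injection: never exceed memory_cap, and never
    # spill past the remaining context window.
    budget = min(memory_cap, context_limit - fixed)

    # Phase 2: inject memories greedily until the budget is exhausted.
    # The "safety valve": overflowing memories are dropped, never sent,
    # so the assembled prompt cannot overflow the context window.
    picked, used = [], 0
    for m in memories:
        cost = estimate_tokens(m)
        if used + cost > budget:
            break
        picked.append(m)
        used += cost

    parts = [system]
    if picked:
        parts.append("Relevant memories:\n" + "\n".join(picked))
    parts.append(query)
    return "\n\n".join(parts)
```

Because the memory budget is computed after the fixed parts are reserved, the system prompt and user query always fit; only retrieved memories are trimmed.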

$ git log --oneline --stat
stars: 1,933
forks: 367
updated: March 4, 2026
SKILL.md

This skill has no public SKILL.md file.

View it on GitHub