Назад към всички

prompt-sanitizer

// Sanitize prompts before sending to LLMs. Detects PII, prompt injection, toxicity, and off-topic content. Returns cleaned text + risk score. Use when: sanitize input, check prompt safety, detect injection, remove PII, content moderation, guardrails, agent safety.

$ git log --oneline --stat
stars:1,933
forks:367
updated:March 4, 2026
SKILL.md

Този skill няма публичен SKILL.md файл.

Разгледайте в GitHub