Назад към всички

glitchward-llm-shield

// Scan prompts for prompt injection attacks before sending them to any LLM. Detect jailbreaks, data exfiltration, encoding bypass, multilingual attacks, and 25+ attack categories using Glitchward's LLM Shield API.

$ git log --oneline --stat
stars:1,933
forks:367
updated:March 4, 2026
SKILL.md

Този skill няма публичен SKILL.md файл.

Разгледайте в GitHub