# aimlapi-safety

> Content moderation and safety checks. Instantly classify text or images as safe or unsafe using AI guardrails.
Updated: March 4, 2026
## SKILL.md Frontmatter

```yaml
name: aimlapi-safety
description: Content moderation and safety checks. Instantly classify text or images as safe or unsafe using AI guardrails.
env: AIMLAPI_API_KEY
primaryEnv: AIMLAPI_API_KEY
```
# AIMLAPI Safety

## Overview

Use AI safety models (Guard models) to ensure content compliance. Ideal for moderating user input or chatbot responses.
## Quick start

```bash
export AIMLAPI_API_KEY="sk-..."
python scripts/check_safety.py --content "How to make a bomb"
```
## Tasks

### Check Text Safety

```bash
python scripts/check_safety.py --content "I want to learn about security" --model meta-llama/Llama-Guard-3-8B
```
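The core of such a check can be sketched as a single POST to AIMLAPI's OpenAI-compatible chat completions endpoint. This is an illustrative sketch, not the actual contents of `scripts/check_safety.py`: the endpoint URL, payload shape, and helper names (`build_payload`, `check_safety`) are assumptions.

```python
# Sketch of a text safety check against an assumed OpenAI-compatible
# chat completions endpoint on AIMLAPI. Helper names and the URL are
# illustrative, not taken from this repo's scripts.
import json
import os
import urllib.request

API_URL = "https://api.aimlapi.com/v1/chat/completions"  # assumed endpoint
DEFAULT_MODEL = "meta-llama/Llama-Guard-3-8B"


def build_payload(content: str, model: str = DEFAULT_MODEL) -> dict:
    """Wrap the text to moderate in a single-turn conversation."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }


def check_safety(content: str, model: str = DEFAULT_MODEL) -> str:
    """Send the text to the Guard model and return its raw verdict string."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(content, model)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['AIMLAPI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumes the standard chat-completions response shape.
    return body["choices"][0]["message"]["content"].strip()
```

The `--model` flag in the command above maps onto the `model` field of the payload, so swapping in another Guard variant requires no code change.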
## Supported Models

- `meta-llama/Llama-Guard-3-8B` (default)
- Other Llama-Guard variants available on AIMLAPI
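Llama Guard models conventionally reply with a first line of `safe` or `unsafe`, the latter followed by hazard category codes such as `S9` on subsequent lines. A small parser for that convention (the function name is illustrative) might look like:

```python
def parse_verdict(raw: str) -> tuple[bool, list[str]]:
    """Parse a Llama Guard style verdict string.

    Returns (is_safe, categories): 'safe' yields (True, []), while
    'unsafe' followed by codes like 'S9' yields (False, ['S9', ...]).
    """
    lines = [ln.strip() for ln in raw.strip().splitlines() if ln.strip()]
    if not lines or lines[0].lower() == "safe":
        return True, []
    return False, lines[1:]
```

This keeps the moderation decision (`is_safe`) separate from the category codes, so callers can log categories while branching only on the boolean.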