
rocm_vllm_deployment

// Production-ready vLLM deployment on AMD ROCm GPUs. Combines environment auto-check, model parameter detection, Docker Compose deployment, health verification, and functional testing with comprehensive logging and security best practices.
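The health-verification step the description mentions usually amounts to polling the server's health endpoint until the model is loaded. A minimal sketch of such a retry loop, assuming vLLM's OpenAI-compatible server on `http://localhost:8000` with its `GET /health` endpoint (the host, port, and timeout values here are illustrative, not taken from this skill):

```python
import time
import urllib.request
import urllib.error


def wait_for_health(probe, timeout_s=120.0, interval_s=2.0,
                    clock=time.monotonic, sleep=time.sleep):
    """Poll `probe` (a zero-arg callable returning bool) until it
    returns True or `timeout_s` elapses. Returns True on success."""
    deadline = clock() + timeout_s
    while clock() < deadline:
        if probe():
            return True
        sleep(interval_s)
    return False


def vllm_is_up(url="http://localhost:8000/health"):
    """True if the vLLM server answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    # After `docker compose up -d`, block until the server is serving
    # (or give up after two minutes).
    if wait_for_health(vllm_is_up):
        print("vLLM is healthy")
    else:
        print("vLLM did not become healthy in time")
```

The injected `clock` and `sleep` parameters keep the loop testable without a live server; in production the defaults suffice.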

stars:1,933
forks:367
updated:March 4, 2026
SKILL.md

This skill has no public SKILL.md file.

View on GitHub