
Video Generation

Create AI videos with optimized prompts, motion control, and platform-ready output.

Stars: 1,933 · Forks: 367 · Updated: March 4, 2026
SKILL.md Frontmatter
name: AI Video Generation
slug: video-generation
version: 1.0.1
homepage: https://clawic.com/skills/video-generation
description: Create AI videos with Sora 2, Veo 3, Seedance, Runway, and modern APIs using reliable prompt and rendering workflows.
changelog: Added current model routing and practical API playbooks for modern AI video generation workflows.

Setup

On first use, read setup.md.

When to Use

User needs to generate, edit, or scale AI videos with current models and APIs. Use this skill to choose the right model stack, write stronger motion prompts, and run reliable async video pipelines.

Architecture

User preferences persist in ~/video-generation/. See memory-template.md for setup.

~/video-generation/
├── memory.md      # Preferred providers, model routing, reusable shot recipes
└── history.md     # Optional run log for jobs, costs, and outputs

Quick Reference

| Topic | File |
| --- | --- |
| Initial setup | setup.md |
| Memory template | memory-template.md |
| Migration guide | migration.md |
| Model snapshot | benchmarks.md |
| Async API patterns | api-patterns.md |
| OpenAI Sora 2 | openai-sora.md |
| Google Veo 3.x | google-veo.md |
| Runway Gen-4 | runway.md |
| Luma Ray | luma.md |
| ByteDance Seedance | seedance.md |
| Kling | kling.md |
| Vidu | vidu.md |
| Pika via Fal | pika.md |
| MiniMax Hailuo | minimax-hailuo.md |
| Replicate routing | replicate.md |
| Open-source local models | open-source-video.md |
| Distribution playbook | promotion.md |

Core Rules

1. Resolve model aliases before API calls

Map community names to real API model IDs first. Examples: sora-2, sora-2-pro, veo-3.0-generate-001, gen4_turbo, gen4_aleph.
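A minimal alias resolver can enforce this rule before any API call. The model IDs below come from this skill's routing table; the nickname keys and the helper name are illustrative, not an exhaustive or official mapping:

```python
# Map community nicknames to real API model IDs before submitting jobs.
MODEL_ALIASES = {
    "sora 2": "sora-2",
    "sora 2 pro": "sora-2-pro",
    "veo 3": "veo-3.0-generate-001",
    "runway gen-4 turbo": "gen4_turbo",
    "runway aleph": "gen4_aleph",
}

def resolve_model(name: str) -> str:
    """Return the API model ID for a nickname; pass real IDs through unchanged."""
    key = name.strip().lower()
    if key in MODEL_ALIASES:
        return MODEL_ALIASES[key]
    if key in MODEL_ALIASES.values():
        return key
    raise ValueError(f"Unknown model alias: {name!r}")
```

Failing loudly on unknown names is deliberate: a typo should surface here, not as an opaque provider-side error.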

2. Route by task, not brand preference

| Task | First choice | Backup |
| --- | --- | --- |
| Premium prompt-only generation | sora-2-pro | veo-3.1-generate-001 |
| Fast drafts at lower cost | veo-3.1-fast-generate-001 | gen4_turbo |
| Long-form cinematic shots | gen4_aleph | ray-2 |
| Strong image-to-video control | veo-3.0-generate-001 | gen4_turbo |
| Multi-shot narrative consistency | Seedance family | hailuo-2.3 |
| Local privacy-first workflows | Wan2.2 / HunyuanVideo | CogVideoX |
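The routing table can be sketched as a plain lookup. The task keys and the lowercase stand-ins for "Seedance family" and "Wan2.2 / HunyuanVideo" are assumptions for illustration; the model IDs mirror the table:

```python
# Task -> (first choice, backup) routing, mirroring the table above.
ROUTING = {
    "premium-prompt-only": ("sora-2-pro", "veo-3.1-generate-001"),
    "fast-draft": ("veo-3.1-fast-generate-001", "gen4_turbo"),
    "long-form-cinematic": ("gen4_aleph", "ray-2"),
    "image-to-video": ("veo-3.0-generate-001", "gen4_turbo"),
    "multi-shot-narrative": ("seedance", "hailuo-2.3"),
    "local-private": ("wan2.2", "cogvideox"),
}

def pick_models(task: str) -> tuple[str, str]:
    """Return (first_choice, backup) for a task key, or fail on unknown tasks."""
    try:
        return ROUTING[task]
    except KeyError:
        raise ValueError(f"No routing rule for task {task!r}") from None
```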

3. Draft cheap, finish expensive

Start with low duration and lower tier, validate motion and composition, then rerender winners with premium models or longer durations.
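A sketch of the two-pass workflow. The parameter names (`duration_s`, `resolution`) and the `render`/`approve` callables are hypothetical placeholders for provider-specific calls and a human review step:

```python
# Two-pass rendering: cheap short drafts first, premium rerenders for winners only.
DRAFT = {"model": "veo-3.1-fast-generate-001", "duration_s": 4, "resolution": "720p"}
FINAL = {"model": "sora-2-pro", "duration_s": 10, "resolution": "1080p"}

def render_pipeline(prompts, render, approve):
    """render(prompt, **params) -> clip; approve(clip) -> bool (human review)."""
    finals = []
    for prompt in prompts:
        draft = render(prompt, **DRAFT)
        if approve(draft):            # validate motion and composition cheaply
            finals.append(render(prompt, **FINAL))
    return finals
```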

4. Design prompts as shot instructions

Always include subject, action, camera motion, lens style, lighting, and scene timing. For references and start/end frames, keep continuity constraints explicit.
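A small builder can make the required fields impossible to forget. The field labels and joining format are one possible convention, not a provider requirement:

```python
def shot_prompt(subject, action, camera, lens, lighting, timing, continuity=None):
    """Assemble a shot-instruction prompt covering every field from rule 4.

    All arguments are free text; `continuity` holds explicit constraints for
    reference images or start/end frames.
    """
    parts = [
        f"Subject: {subject}",
        f"Action: {action}",
        f"Camera: {camera}",
        f"Lens: {lens}",
        f"Lighting: {lighting}",
        f"Timing: {timing}",
    ]
    if continuity:
        parts.append(f"Continuity: {continuity}")
    return ". ".join(parts)
```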

5. Assume async and failure by default

Every provider pipeline must support queued jobs, polling/backoff, retries, cancellation, and signed-URL download before expiry.
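A generic polling loop with capped exponential backoff, assuming a hypothetical `get_status(job_id)` callable that wraps the provider's status endpoint and returns a dict with a `"state"` field:

```python
import time

def poll_job(get_status, job_id, timeout_s=600, base_delay=2.0, max_delay=30.0):
    """Poll an async video job with exponential backoff until it finishes.

    `get_status(job_id)` should return a dict like
    {"state": "queued|running|succeeded|failed", ...}.
    """
    deadline = time.monotonic() + timeout_s
    delay = base_delay
    while time.monotonic() < deadline:
        status = get_status(job_id)
        state = status.get("state")
        if state == "succeeded":
            return status            # carries the signed output URL; download before expiry
        if state == "failed":
            raise RuntimeError(f"Job {job_id} failed: {status.get('error')}")
        time.sleep(delay)
        delay = min(delay * 2, max_delay)  # exponential backoff, capped
    raise TimeoutError(f"Job {job_id} did not finish within {timeout_s}s")
```

Cancellation and retry-on-submit sit one layer above this loop; keeping polling separate makes both easier to test.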

6. Keep a fallback chain

If the preferred model is blocked or overloaded:

  1. Same provider, lower tier
  2. Equivalent cross-provider model
  3. Open model / local run
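The chain can be walked mechanically. Here `submit(model, prompt)` is a placeholder for your provider call that raises on block/overload errors; the backoff constants are illustrative:

```python
import time

def generate_with_fallback(prompt, chain, submit, max_attempts_per_model=2):
    """Try each model in the fallback chain until one accepts the job."""
    last_error = None
    for model in chain:
        for attempt in range(max_attempts_per_model):
            try:
                return model, submit(model, prompt)
            except Exception as err:  # narrow to provider errors in real code
                last_error = err
                time.sleep(2 ** attempt)  # brief backoff before retrying
    raise RuntimeError(f"All models in chain failed: {chain}") from last_error
```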

Common Traps

  • Using nickname-only model labels in code -> avoidable API failures
  • Pushing 8-10 second generations before validating a 3-5 second draft -> wasted credits
  • Cropping after generation instead of generating native ratio -> lower composition quality
  • Ignoring prompt enhancement toggles -> tone drift across providers
  • Reusing expired output URLs -> broken export workflows
  • Treating all providers as synchronous -> stalled jobs and bad timeout handling

External Endpoints

| Provider | Endpoint | Data Sent | Purpose |
| --- | --- | --- | --- |
| OpenAI | api.openai.com | Prompt text, optional input images/video refs | Sora 2 video generation |
| Google Vertex AI | aiplatform.googleapis.com | Prompt text, optional image input, generation params | Veo 3.x generation |
| Runway | api.dev.runwayml.com | Prompt text, optional input media | Gen-4 generation and image-to-video |
| Luma | api.lumalabs.ai | Prompt text, optional keyframes/start-end images | Ray generation |
| Fal | queue.fal.run | Prompt text, optional input media | Pika and Hailuo hosted APIs |
| Replicate | api.replicate.com | Prompt text, optional input media | Multi-model routing and experimentation |
| Vidu | api.vidu.com | Prompt text, optional start/end/reference images | Vidu text/image/reference video APIs |
| Tencent MPS | mps.tencentcloudapi.com | Prompt text and generation parameters | Unified AIGC video task APIs |

No other data is sent externally.

Security & Privacy

Data that leaves your machine:

  • Prompt text
  • Optional reference images or clips
  • Requested rendering parameters (duration, resolution, aspect ratio)

Data that stays local:

  • Provider preferences in ~/video-generation/memory.md
  • Optional local job history in ~/video-generation/history.md

This skill does NOT:

  • Store API keys in project files
  • Upload media outside requested provider calls
  • Delete local assets unless the user asks

Trust

This skill can send prompts and media references to third-party AI providers. Only install if you trust those providers with your content.

Related Skills

Install with clawhub install <slug> if the user confirms:

  • image-generation - Build still concepts and keyframes before video generation
  • image-edit - Prepare clean references, masks, and style frames
  • video-edit - Post-process generated clips and final exports
  • video-captions - Add subtitle and text overlay workflows
  • ffmpeg - Compose, transcode, and package production outputs

Feedback

  • If useful: clawhub star video-generation
  • Stay updated: clawhub sync