Choose a Video Generation Model
Use this guide when you need to select a video generation model based on the clip your app needs to produce.
By the end, you'll have a small model-selection helper that filters models by capability and scores them by priority before submitting a video job.
Not sure what model to use? Copy this prompt to run a model-selection process.
For reusable agent knowledge across projects, install the openrouter-video skill.
Before you start
You need:
- Node.js 20 or newer
- An OpenRouter API key available as `OPENROUTER_API_KEY` (only needed if you submit the optional generation request)
- A stable, directly downloadable image URL if you test an image-to-video request
Use the API reference pages as the source of truth for exact fields:
- Create video generation request
- List video generation models
- TypeScript SDK video generation reference
Submitting `POST /api/v1/videos` starts a real video generation job and may spend OpenRouter credits. Use the model-selection and request-preview steps first, then submit only when the request is ready.
Step 1: Fetch the video model list
Call the dedicated video model endpoint:
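A minimal sketch of that call, assuming a Node.js 20+ runtime with global `fetch`. The endpoint path and the `data` envelope below are assumptions for illustration; confirm both against the List video generation models API reference.

```typescript
// Hedged sketch: list available video generation models.
// NOTE: the endpoint path and `data` envelope are assumptions;
// verify them in the API reference before relying on this.
interface VideoModel {
  id: string;
  // Capability and pricing metadata fields vary by model; see the
  // API reference for the exact field names.
  [key: string]: unknown;
}

async function listVideoModels(apiKey: string): Promise<VideoModel[]> {
  const res = await fetch("https://openrouter.ai/api/v1/videos/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Model list request failed: ${res.status}`);
  const body = (await res.json()) as { data: VideoModel[] };
  return body.data;
}
```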
Each model in the response includes the values you need for routing decisions. Use the List video generation models API reference as the source of truth for the endpoint response and model metadata fields. If your app uses the TypeScript SDK, see the generated `listVideosModels` SDK reference for the SDK method shape.
Step 2: Filter by the job you want to run
Start by translating the product request into model requirements: clip length, output shape, generation mode, audio, deterministic retries, provider-specific controls, and cost. Use the API reference above for the exact metadata fields to inspect before filtering.
For example, a selection helper can narrow the list to models that support a 720p, vertical, image-to-video clip with first-frame input.
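A sketch of such a filter, using hypothetical capability field names (`resolutions`, `aspect_ratios`, `modes`, `first_frame`); map these onto the real metadata fields from the model list before using it:

```typescript
// Hedged sketch: filter video models by hard requirements.
// The capability field names below are assumptions for illustration;
// the real metadata fields are documented in the API reference.
interface VideoModelCaps {
  id: string;
  resolutions: string[];   // e.g. ["720p", "1080p"]
  aspect_ratios: string[]; // e.g. ["16:9", "9:16"]
  modes: string[];         // e.g. ["text-to-video", "image-to-video"]
  first_frame: boolean;    // supports a first-frame input image
}

function filterModels(models: VideoModelCaps[]): VideoModelCaps[] {
  return models.filter(
    (m) =>
      m.resolutions.includes("720p") &&
      m.aspect_ratios.includes("9:16") && // vertical output
      m.modes.includes("image-to-video") &&
      m.first_frame,
  );
}
```

Because the hard requirements live in one predicate, extending the filter (audio, seed support, provider-specific controls) means adding one condition rather than restructuring the helper.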
At this point, you have models that satisfy the hard requirements. Score the matching set before selecting one.
Step 3: Score the matching models by priority
Use weighted priorities to make the final choice. For example, a draft workflow might prioritize speed and cost, while a production render might prioritize quality and cost.
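One way to sketch the weighting, with made-up 0-to-1 scores standing in for whatever signals you derive from the model metadata:

```typescript
// Hedged sketch: score capability-matched models with weighted priorities.
// The scores and weights are illustrative; derive your own from pricing,
// latency measurements, and resolution support.
interface ScoredModel {
  id: string;
  speed: number;   // 0..1, e.g. a slug-based heuristic
  quality: number; // 0..1, e.g. max supported resolution as a proxy
  cost: number;    // 0..1, higher means cheaper
}

type Weights = { speed: number; quality: number; cost: number };

// Assumes a non-empty list of capability-matched models.
function pickModel(models: ScoredModel[], w: Weights): ScoredModel {
  const score = (m: ScoredModel) =>
    m.speed * w.speed + m.quality * w.quality + m.cost * w.cost;
  return models.reduce((best, m) => (score(m) > score(best) ? m : best));
}

// A draft workflow weights speed and cost over quality.
const draftWeights: Weights = { speed: 0.5, quality: 0.1, cost: 0.4 };
```

Swapping the weights object is all it takes to move the same candidate set from a draft profile to a production one.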
Pick the model that best fits your product needs after capability matching.
For example, you might prefer the lowest compatible price, audio support, seed
support, provider-specific controls, a specific provider, or a known latency
profile. The speed score is a slug-based heuristic, and the quality score uses
resolution support as a proxy. Pricing SKU units can differ by provider, so
treat the helper as a quick starting point and inspect the matching model’s `pricing_skus` before routing production traffic.
Step 4: Preview the generation request
Before submitting, have the implementation build the exact request body it will send. This makes capability mismatches visible before starting a paid job.
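A sketch of a request-body builder. The field names (`duration_seconds`, `resolution`, `aspect_ratio`, `image_url`) are placeholders to check against the Create video generation request API reference:

```typescript
// Hedged sketch: build and preview the request body without submitting.
// Field names here are assumptions; the authoritative schema is in the
// Create video generation request API reference.
interface VideoRequest {
  model: string;
  prompt: string;
  duration_seconds?: number;
  resolution?: string;
  aspect_ratio?: string;
  image_url?: string; // first frame for image-to-video
}

function buildRequest(modelId: string, imageUrl: string): VideoRequest {
  return {
    model: modelId,
    prompt: "A slow dolly shot through a neon-lit alley at night",
    duration_seconds: 5,
    resolution: "720p",
    aspect_ratio: "9:16",
    image_url: imageUrl,
  };
}

// Log the body for review instead of submitting it.
console.log(JSON.stringify(buildRequest("example/model", "https://example.com/frame.jpg"), null, 2));
```

Every value in the preview should line up with a capability you filtered for in Step 2; anything that does not is a mismatch you caught for free.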
Also check that your image URL returns a 200 response with an image content type before submitting.
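A small pre-flight check along those lines, assuming global `fetch` (Node.js 18+) and a host that answers `HEAD` requests:

```typescript
// Hedged sketch: verify the first-frame image URL is directly
// downloadable before attaching it to a paid request.
async function checkImageUrl(url: string): Promise<void> {
  const res = await fetch(url, { method: "HEAD" });
  const type = res.headers.get("content-type") ?? "";
  if (res.status !== 200 || !type.startsWith("image/")) {
    throw new Error(`Image URL check failed: status ${res.status}, content-type "${type}"`);
  }
}
```

Some hosts reject `HEAD`; falling back to a `GET` that you abort after reading the headers is a reasonable workaround.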
Step 5: Submit when ready
The submission response contains the job `id`, a `polling_url`, and an initial `status`.
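A submission sketch; the response fields (`id`, `polling_url`, `status`) follow the description above, but treat the exact request and response schemas as something to confirm in the Create video generation request API reference:

```typescript
// Hedged sketch: submit the previewed body to POST /api/v1/videos.
// This starts a paid job. The response field names are assumptions to
// verify against the API reference.
async function submitVideoJob(
  apiKey: string,
  body: Record<string, unknown>,
): Promise<{ id: string; polling_url: string; status: string }> {
  const res = await fetch("https://openrouter.ai/api/v1/videos", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Submission failed: ${res.status}`);
  return res.json();
}
```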
Check your work
Before submission, you should see a request body whose model supports every capability you filtered for. If you submit the request, you should see a response with a video job `id`, a `polling_url`, and an initial `status` such as `pending`. To wait for the playable MP4, use the polling and download helper from Generate and Download a Video from Text.