Text-to-Video API
Generate videos from a pure text prompt. This is Seedance 2.0's most common mode and is ideal for creative ideation, ad scripts, storyboards, short-form content, and any scenario where you don't have visual reference material to start from.
Endpoint
POST https://api.evolink.ai/v1/videos/generations
Model ID: seedance-2.0-text-to-video
For faster generation and lower cost, use seedance-2.0-fast-text-to-video instead — the parameter structure is identical.
Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | Yes | — | Must be seedance-2.0-text-to-video |
| prompt | string | Yes | — | Video content description. ≤ 500 Chinese characters or ≤ 1000 English words |
| duration | integer | No | 5 | Video duration in seconds, range 4–15. Billed per second |
| quality | string | No | 720p | Quality tier: 480p or 720p. 1080p is not supported |
| aspect_ratio | string | No | 16:9 | 16:9, 9:16, 1:1, 4:3, 3:4, 21:9, adaptive |
| generate_audio | boolean | No | true | Whether to generate synchronized audio (ambient sound, music, dialogue) |
| model_params.web_search | boolean | No | false | When enabled, the model autonomously decides whether to search the internet for fresh information. Billed only when a search is actually triggered |
| callback_url | string | No | — | HTTPS URL for the task-completion callback. Max 2048 characters; private IPs are prohibited |
Parameter Details
Writing a good prompt
- Describe the subject, action, camera language (pan/tilt/zoom/dolly), and lighting atmosphere
- Wrap dialogue in straight double quotes to trigger dedicated speech synthesis: She turned and said: "You're finally here."
- Don't request specific aspect-ratio values inside the prompt (e.g. "2.35:1") — use the aspect_ratio field instead
generate_audio: false produces a silent video — visual quality is unchanged, and bandwidth is slightly lower. Audio generation itself incurs no extra charge.
model_params.web_search: true is useful when:
- Your prompt references "latest", "today", "this week"-type temporal content
- You need brand ads that reference real events, people, or places
- The model will decide internally whether a search is warranted — if it's not needed, no search is performed and nothing extra is billed
Request Examples
cURL
curl -X POST https://api.evolink.ai/v1/videos/generations \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "seedance-2.0-text-to-video",
"prompt": "A macro lens focuses on a green glass frog on a leaf. The focus gradually shifts from its smooth skin to its completely transparent abdomen, where a bright red heart is beating powerfully and rhythmically.",
"duration": 8,
"quality": "720p",
"aspect_ratio": "16:9",
"generate_audio": true
}'
Python
import requests
response = requests.post(
"https://api.evolink.ai/v1/videos/generations",
headers={
"Authorization": "Bearer YOUR_API_KEY",
"Content-Type": "application/json"
},
json={
"model": "seedance-2.0-text-to-video",
"prompt": "A luxury watch rotating slowly on a marble surface, soft studio lighting, product showcase, cinematic",
"duration": 8,
"quality": "720p",
"aspect_ratio": "16:9",
"generate_audio": False
}
)
task = response.json()
print(f"Task ID: {task['id']}")
Node.js
const res = await fetch("https://api.evolink.ai/v1/videos/generations", {
method: "POST",
headers: {
"Authorization": "Bearer YOUR_API_KEY",
"Content-Type": "application/json"
},
body: JSON.stringify({
model: "seedance-2.0-text-to-video",
prompt: "A cinematic sunset over the ocean, wide angle",
duration: 5,
quality: "720p",
aspect_ratio: "16:9",
model_params: { web_search: false }
})
});
const task = await res.json();
console.log("Task ID:", task.id);
Response
A successful submission returns the task object immediately (HTTP 200). Generation has not yet started at this point:
{
"id": "task-unified-1774857405-abc123",
"object": "video.generation.task",
"created": 1774857405,
"model": "seedance-2.0-text-to-video",
"status": "pending",
"progress": 0,
"type": "video",
"task_info": {
"can_cancel": true,
"estimated_time": 165,
"video_duration": 8
},
"usage": {
"billing_rule": "per_second",
"credits_reserved": 50,
"user_group": "default"
}
}
Field Reference
| Field | Description |
|---|---|
| id | Task ID — use this for status polling or webhook matching |
| status | pending → processing → completed / failed |
| progress | 0–100 percent |
| task_info.estimated_time | Estimated seconds until completion |
| task_info.video_duration | Requested video duration in seconds |
| task_info.can_cancel | Whether the cancel endpoint can still be called |
| usage.billing_rule | Always per_second |
| usage.credits_reserved | Reserved credits — the actual charge settles when the task reaches completed |
Retrieving Results
After submission there are two ways to get the final video URL:
- Polling — GET /v1/tasks/{id} every 5 seconds. See Async Tasks.
- Webhook — pass callback_url in the request; the system POSTs the result when the task completes. See Webhooks.
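The polling path can be sketched in Python. This is an illustrative helper, not part of an official SDK: the GET /v1/tasks/{id} endpoint and the status values match this page, but the function names, the timeout default, and the fetch parameter (injectable so the loop can be exercised without network access) are assumptions.

```python
import time
import requests

API_KEY = "YOUR_API_KEY"
BASE = "https://api.evolink.ai/v1"

def fetch_task(task_id):
    """GET /v1/tasks/{id} and return the parsed task object."""
    r = requests.get(
        f"{BASE}/tasks/{task_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    r.raise_for_status()
    return r.json()

def poll_until_done(task_id, interval=5, timeout=600, fetch=fetch_task):
    """Poll every `interval` seconds until the task completes or fails."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        task = fetch(task_id)
        if task["status"] == "completed":
            return task
        if task["status"] == "failed":
            raise RuntimeError(f"Generation failed: {task}")
        time.sleep(interval)
    raise TimeoutError(f"Task {task_id} did not finish within {timeout}s")
```

A 5-second interval matches the recommendation above; back off to a longer interval for long-duration videos, using task_info.estimated_time from the submission response as a guide.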
Generated video URLs are valid for 24 hours — download them to your own storage promptly.
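If you use the webhook path instead, a minimal receiver can be built with Python's stdlib http.server. This is a sketch: it assumes the callback POSTs the task JSON shown earlier on this page — confirm the exact payload shape and any signature headers in the Webhooks reference before relying on it.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_callback(body: bytes) -> dict:
    """Parse a callback body; act on completed tasks."""
    task = json.loads(body)
    if task.get("status") == "completed":
        # e.g. enqueue a download of the generated video here,
        # before the 24-hour URL expiry
        pass
    return task

class CallbackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        handle_callback(self.rfile.read(length))
        # Respond 200 quickly so the platform does not retry
        self.send_response(200)
        self.end_headers()

# To run (must be reachable over HTTPS, e.g. behind a reverse proxy):
# HTTPServer(("", 8080), CallbackHandler).serve_forever()
```

Remember that callback_url must be HTTPS and must not resolve to a private IP, so a local server like this needs to sit behind a public TLS-terminating proxy or tunnel.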
FAQ
Why do I get an error when I pass image_urls to text-to-video?
Text-to-video doesn't accept any media inputs. If you have reference images, use image-to-video instead.
Can I specify an exact pixel resolution?
No. quality only exposes 480p and 720p tiers; the actual pixel dimensions depend on your aspect_ratio and quality combination.
How do I make dialogue sound natural?
Wrap the spoken line in straight double quotes. The model automatically detects this as dialogue and runs dedicated speech synthesis instead of treating it as ambient narration.
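As an illustration, a request body exercising the quoting rule might look like the following — the prompt text and duration are arbitrary examples:

```python
# The line inside straight double quotes is detected as dialogue and routed
# through dedicated speech synthesis; generate_audio must stay true for it
# to be audible.
payload = {
    "model": "seedance-2.0-text-to-video",
    "prompt": 'A woman opens the door, turns, and says: "You\'re finally here."',
    "duration": 6,
    "generate_audio": True,
}
```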
Related
- Models Overview — The full 6-model matrix
- Image-to-Video API — If you have reference images
- Reference-to-Video API — Multimodal composition
- Fast Models — seedance-2.0-fast-text-to-video
- Async Tasks / Webhooks