Text-to-Video API

Generate videos from a pure text prompt. This is Seedance 2.0's most common mode and is ideal for creative ideation, ad scripts, storyboards, short-form content, and any scenario where you don't have visual reference material to start from.

Endpoint

POST https://api.evolink.ai/v1/videos/generations

Model ID: seedance-2.0-text-to-video

For faster generation and lower cost, use seedance-2.0-fast-text-to-video instead — the parameter structure is identical.

Request Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | Yes | – | Must be seedance-2.0-text-to-video |
| prompt | string | Yes | – | Video content description. ≤ 500 Chinese characters or ≤ 1000 English words |
| duration | integer | No | 5 | Video duration in seconds, range 4–15. Billed per second |
| quality | string | No | 720p | Quality tier: 480p or 720p. 1080p is not supported |
| aspect_ratio | string | No | 16:9 | 16:9, 9:16, 1:1, 4:3, 3:4, 21:9, adaptive |
| generate_audio | boolean | No | true | Whether to generate synchronized audio (ambient sound, music, dialogue) |
| model_params.web_search | boolean | No | false | When enabled, the model autonomously decides whether to search the internet for fresh information. Billed only when a search is actually triggered |
| callback_url | string | No | – | HTTPS URL for the task-completion callback. Max 2048 characters; private IPs prohibited |

Parameter Details

Writing a good prompt

  • Describe the subject, action, camera language (pan/tilt/zoom/dolly), and lighting atmosphere
  • Wrap dialogue in straight double quotes to trigger dedicated speech synthesis: She turned and said: "You're finally here."
  • Don't request specific aspect-ratio values inside the prompt (e.g. "2.35:1") — use the aspect_ratio field instead
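The dialogue-quote rule above can be sketched in Python. Both the scene description and the spoken line here are illustrative, not fixed API values; the only requirement is that the spoken part sits in straight double quotes:

```python
# Assemble a prompt that embeds a spoken line in straight double quotes ("),
# which the model detects as dialogue for dedicated speech synthesis.
scene = "A woman stands at a rain-streaked window, slow dolly-in, warm lamp light."
line = 'She turns and says: "You\'re finally here."'
prompt = f"{scene} {line}"
```

Note that curly "smart" quotes pasted from a word processor will not trigger the dialogue path; use plain ASCII double quotes.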

generate_audio: false produces a silent video: visual quality is unchanged and the file is slightly smaller to transfer. Audio generation itself carries no extra charge.

model_params.web_search: true is useful when:

  • Your prompt references "latest", "today", "this week"-type temporal content
  • You need brand ads that reference real events, people, or places
  • The model will decide internally whether a search is warranted — if it's not needed, no search is performed and nothing extra is billed
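Enabling the flag is just an extra nested field in the request body. A minimal Python sketch of such a payload (the prompt text is illustrative; it would be sent to the same endpoint as the examples below):

```python
import json

# Request body with web search enabled; the model may consult the internet
# because the prompt references time-sensitive content ("this week's").
payload = {
    "model": "seedance-2.0-text-to-video",
    "prompt": "A 10-second recap of this week's biggest tech headline, news-broadcast style",
    "duration": 8,
    "model_params": {"web_search": True},
}

body = json.dumps(payload)  # pass as json=payload with requests, or data=body with raw HTTP
```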

Request Examples

cURL

curl -X POST https://api.evolink.ai/v1/videos/generations \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "seedance-2.0-text-to-video",
    "prompt": "A macro lens focuses on a green glass frog on a leaf. The focus gradually shifts from its smooth skin to its completely transparent abdomen, where a bright red heart is beating powerfully and rhythmically.",
    "duration": 8,
    "quality": "720p",
    "aspect_ratio": "16:9",
    "generate_audio": true
  }'

Python

import requests

response = requests.post(
    "https://api.evolink.ai/v1/videos/generations",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "seedance-2.0-text-to-video",
        "prompt": "A luxury watch rotating slowly on a marble surface, soft studio lighting, product showcase, cinematic",
        "duration": 8,
        "quality": "720p",
        "aspect_ratio": "16:9",
        "generate_audio": False
    }
)

task = response.json()
print(f"Task ID: {task['id']}")

Node.js

const res = await fetch("https://api.evolink.ai/v1/videos/generations", {
  method: "POST",
  headers: {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    model: "seedance-2.0-text-to-video",
    prompt: "A cinematic sunset over the ocean, wide angle",
    duration: 5,
    quality: "720p",
    aspect_ratio: "16:9",
    model_params: { web_search: false }
  })
});

const task = await res.json();
console.log("Task ID:", task.id);

Response

A successful submission returns the task object immediately (HTTP 200). Generation has not yet started at this point:

{
    "id": "task-unified-1774857405-abc123",
    "object": "video.generation.task",
    "created": 1774857405,
    "model": "seedance-2.0-text-to-video",
    "status": "pending",
    "progress": 0,
    "type": "video",
    "task_info": {
        "can_cancel": true,
        "estimated_time": 165,
        "video_duration": 8
    },
    "usage": {
        "billing_rule": "per_second",
        "credits_reserved": 50,
        "user_group": "default"
    }
}

Field Reference

| Field | Description |
|---|---|
| id | Task ID: use this for status polling or webhook matching |
| status | pending → processing → completed / failed |
| progress | 0–100 percent |
| task_info.estimated_time | Estimated seconds until completion |
| task_info.video_duration | Requested video duration |
| task_info.can_cancel | Whether the cancel endpoint can still be called |
| usage.billing_rule | Always per_second |
| usage.credits_reserved | Reserved credits; the actual charge settles when the task reaches completed |

Retrieving Results

After submission there are two ways to get the final video URL:

  1. Polling: GET /v1/tasks/{id} every 5 seconds. See Async Tasks.
  2. Webhook — pass callback_url in the request; the system POSTs the result when the task completes. See Webhooks.

Generated video URLs are valid for 24 hours — download them to your own storage promptly.
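The polling loop can be sketched as follows. `wait_for_video` and `fetch_status` are illustrative names, not part of the API; `fetch_status` is assumed to wrap GET /v1/tasks/{id} and return the parsed task object (only the status field used here is documented above):

```python
import time

def wait_for_video(task_id, fetch_status, interval=5, timeout=600):
    """Poll fetch_status(task_id) until the task reaches a terminal status.

    fetch_status should GET /v1/tasks/{task_id} with your Authorization
    header and return the parsed JSON task object.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = fetch_status(task_id)
        if task["status"] in ("completed", "failed"):
            return task
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```

With the default interval=5 this matches the recommended 5-second polling cadence; once status turns completed, download the video right away, since the URL expires after 24 hours.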

FAQ

Why do I get an error when I pass image_urls to text-to-video? Text-to-video doesn't accept any media inputs. If you have reference images, use image-to-video instead.

Can I specify an exact pixel resolution? No. quality only exposes 480p and 720p tiers; the actual pixel dimensions depend on your aspect_ratio and quality combination.

How do I make dialogue sound natural? Wrap the spoken line in straight double quotes. The model automatically detects this as dialogue and runs dedicated speech synthesis instead of treating it as ambient narration.