SDKs and code examples
The Seedance 2.0 API exposes a standard REST interface and can be called from any HTTP client; no SDK is required. This page provides copy-ready code for all three generation modes.
Base URL
https://api.evolink.ai
All examples assume:
export EVOLINK_API_KEY="your-api-key-here"
Text-to-Video
Python
import os
import time
import requests
API_KEY = os.environ["EVOLINK_API_KEY"]
BASE_URL = "https://api.evolink.ai"
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}
# 1. Create task
response = requests.post(
    f"{BASE_URL}/v1/videos/generations",
    headers=headers,
    json={
        "model": "seedance-2.0-text-to-video",
        "prompt": "A cinematic sunset over the ocean, wide shot",
        "duration": 5,
        "quality": "720p",
        "aspect_ratio": "16:9"
    }
)
task_id = response.json()["id"]
print(f"Task created: {task_id}")
# 2. Poll
while True:
    result = requests.get(f"{BASE_URL}/v1/tasks/{task_id}", headers=headers).json()
    if result["status"] == "completed":
        print(f"Video URL: {result['results'][0]}")
        break
    if result["status"] == "failed":
        print("Generation failed")
        break
    print(f"Progress: {result['progress']}%")
    time.sleep(5)
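In production, the open-ended loop above is better wrapped in a helper with a timeout so a stuck task cannot block forever. The sketch below is a client-side convenience, not part of the official API; it uses the same endpoint and response fields as the example above. Pass the `requests` module itself (or a `requests.Session`) as `session`.

```python
import time


def wait_for_task(session, base_url, task_id, headers, timeout=600, interval=5):
    """Poll /v1/tasks/{task_id} until completed, failed, or timed out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = session.get(f"{base_url}/v1/tasks/{task_id}", headers=headers).json()
        status = result.get("status")
        if status == "completed":
            return result
        if status == "failed":
            raise RuntimeError(f"Task {task_id} failed: {result.get('error')}")
        time.sleep(interval)
    raise TimeoutError(f"Task {task_id} did not finish within {timeout}s")
```

Usage: `result = wait_for_task(requests, BASE_URL, task_id, headers)`, then read `result["results"][0]`.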
Node.js
const API_KEY = process.env.EVOLINK_API_KEY;
const BASE_URL = "https://api.evolink.ai";
const headers = {
  "Authorization": `Bearer ${API_KEY}`,
  "Content-Type": "application/json"
};
// 1. Create task
const createRes = await fetch(`${BASE_URL}/v1/videos/generations`, {
  method: "POST",
  headers,
  body: JSON.stringify({
    model: "seedance-2.0-text-to-video",
    prompt: "A cinematic sunset over the ocean, wide shot",
    duration: 5,
    quality: "720p",
    aspect_ratio: "16:9"
  })
});
const { id: taskId } = await createRes.json();
console.log(`Task created: ${taskId}`);
// 2. Poll
while (true) {
  const res = await fetch(`${BASE_URL}/v1/tasks/${taskId}`, { headers });
  const result = await res.json();
  if (result.status === "completed") {
    console.log(`Video URL: ${result.results[0]}`);
    break;
  }
  if (result.status === "failed") {
    console.log("Generation failed");
    break;
  }
  console.log(`Progress: ${result.progress}%`);
  await new Promise(r => setTimeout(r, 5000));
}
cURL
# 1. Create task
curl -X POST https://api.evolink.ai/v1/videos/generations \
  -H "Authorization: Bearer $EVOLINK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "seedance-2.0-text-to-video",
    "prompt": "A cinematic sunset over the ocean, wide shot",
    "duration": 5,
    "quality": "720p"
  }'
# Response: {"id": "task-unified-...", "status": "pending", ...}
# 2. Query status
curl https://api.evolink.ai/v1/tasks/TASK_ID \
  -H "Authorization: Bearer $EVOLINK_API_KEY"
Image-to-Video
First-frame mode (1 image)
response = requests.post(
    f"{BASE_URL}/v1/videos/generations",
    headers=headers,
    json={
        "model": "seedance-2.0-image-to-video",
        "prompt": "The model slowly turns, hair flowing gently in the wind",
        "image_urls": ["https://example.com/portrait.jpg"],
        "duration": 5,
        "quality": "720p",
        "aspect_ratio": "adaptive"
    }
)
First-to-last frame transition (2 images)
response = requests.post(
    f"{BASE_URL}/v1/videos/generations",
    headers=headers,
    json={
        "model": "seedance-2.0-image-to-video",
        "prompt": "A smooth transition from sunrise to sunset over the same ocean",
        "image_urls": [
            "https://example.com/sunrise.jpg",
            "https://example.com/sunset.jpg"
        ],
        "duration": 6,
        "quality": "720p"
    }
)
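Once a task reports `completed`, `results[0]` is a downloadable URL. A minimal download helper, using only the standard library so it works without extra dependencies (the filename is an arbitrary choice, not an API requirement):

```python
import shutil
import urllib.request


def download_video(url, path):
    """Stream the generated video from `url` to `path` on disk."""
    with urllib.request.urlopen(url) as resp, open(path, "wb") as f:
        shutil.copyfileobj(resp, f)
    return path
```

Usage: `download_video(result["results"][0], "output.mp4")`.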
Reference-to-Video
Combine image, video, and audio reference assets in a single request:
response = requests.post(
    f"{BASE_URL}/v1/videos/generations",
    headers=headers,
    json={
        "model": "seedance-2.0-reference-to-video",
        "prompt": (
            "Replicate video 1 first-person perspective and pacing; "
            "use audio 1 as background music throughout. "
            "Scene: a young rider weaving through rain-soaked city streets "
            "at night, neon reflections on wet asphalt."
        ),
        "image_urls": ["https://example.com/rider-style.jpg"],
        "video_urls": ["https://example.com/pov-reference.mp4"],
        "audio_urls": ["https://example.com/synthwave-bgm.mp3"],
        "duration": 10,
        "quality": "720p",
        "aspect_ratio": "16:9"
    }
)
Note:
reference-to-video does not support tag syntax such as @Image1 or @Video1. Describe each asset's role in natural language.
How to use the Fast models
Change the model field from seedance-2.0-xxx to seedance-2.0-fast-xxx; all other parameters stay the same:
response = requests.post(
    f"{BASE_URL}/v1/videos/generations",
    headers=headers,
    json={
        "model": "seedance-2.0-fast-text-to-video",  # ← only this change
        "prompt": "A cinematic sunset over the ocean, wide shot",
        "duration": 5,
        "quality": "720p"
    }
)
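If your application toggles between standard and Fast variants, a tiny helper keeps the naming rule in one place. The seedance-2.0- prefix convention comes from the model names above; the function itself is just an illustrative sketch:

```python
def to_fast_model(model: str) -> str:
    """Map a seedance-2.0-* model id to its fast variant; leave fast ids unchanged."""
    prefix = "seedance-2.0-"
    if model.startswith(prefix) and not model.startswith(prefix + "fast-"):
        return prefix + "fast-" + model[len(prefix):]
    return model
```

For example, `to_fast_model("seedance-2.0-text-to-video")` yields `"seedance-2.0-fast-text-to-video"`.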
See Fast Models.
Webhooks instead of polling
response = requests.post(
    f"{BASE_URL}/v1/videos/generations",
    headers=headers,
    json={
        "model": "seedance-2.0-text-to-video",
        "prompt": "A cat playing piano",
        "duration": 5,
        "callback_url": "https://yourapp.com/api/video-callback"
    }
)
# Your webhook endpoint receives a POST with the same body shape as the task-query endpoint
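On the receiving side, the handler only needs to parse the JSON body and branch on status. A framework-agnostic sketch (the payload fields mirror the task-query response shown earlier; the returned action labels are placeholders for your own logic):

```python
import json


def handle_callback(body: bytes) -> str:
    """Parse a webhook POST body and return a short action label."""
    payload = json.loads(body)
    status = payload.get("status")
    if status == "completed":
        # e.g. enqueue a download of payload["results"][0]
        return "download"
    if status == "failed":
        # e.g. alert an operator or retry the generation
        return "retry"
    # pending/processing progress updates can be logged and ignored
    return "ignore"
```

Wire this into whatever web framework serves `callback_url`, and remember to answer with HTTP 200 so the API does not treat the delivery as failed.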
See Webhooks.
OpenAI-style conventions
The API follows OpenAI-style REST conventions (Bearer token, JSON body, unified response schema). Use any HTTP client library; no dedicated SDK is needed.