How to Replicate Camera Movements with Seedance 2.0 API
Learn to replicate Hitchcock zooms, one-take tracking shots & orbital cameras using Seedance 2.0 API. 3 complete Python examples with @Video tags.

Camera movement is what separates a flat, static video from something that feels cinematic. A dolly zoom creates tension. An orbital shot adds grandeur. A one-take tracking shot builds immersion. Traditionally, achieving these requires expensive equipment — gimbals, cranes, drones, Steadicams — plus an operator who knows how to use them.
Seedance 2.0 eliminates the hardware. Upload a reference video that contains the camera movement you want, tell the model what to do with it via @Video tags, and the API generates new content that replicates the exact camera language — the speed, the trajectory, the rhythm, the acceleration curves.

This tutorial walks you through three complete camera replication cases using the Seedance 2.0 API via EvoLink:
- One-take tracking shot — a continuous camera follow through multiple environments
- Hitchcock zoom (dolly zoom) — the classic vertigo effect
- Orbital camera — a 360° rotating shot around a subject
Each case includes a complete Python script you can copy, paste, and run.
Prerequisites: Python 3.8+, an EvoLink API key (free tier available), and a reference video for each camera movement type.
Why Camera Movement Replication Changes AI Video
Most AI video generators give you basic text-based camera control. You type "dolly in" or "pan left" and hope the model interprets it correctly. The results are inconsistent — sometimes you get a smooth push-in, sometimes a jerky pan, sometimes nothing changes at all.
Seedance 2.0 takes a fundamentally different approach: show, don't tell. Instead of describing camera movement in words, you upload a video that demonstrates the exact movement you want. The model analyzes the reference and reproduces:
- Camera trajectory — tracking paths, orbital arcs, crane movements
- Speed and acceleration — ease-in, ease-out, sudden stops, smooth glides
- Focal behavior — rack focus timing, depth-of-field shifts
- Compositional rhythm — how long each framing holds before the camera moves
This means you can take a camera movement from a Hollywood film, a drone shot from YouTube, or a gimbal clip you filmed yourself — and apply that exact movement to entirely new content.
No other AI video API offers this level of camera control. Sora 2 and Kling 3.0 rely on text prompts for camera direction. Veo 3.1 supports basic camera keywords. Only Seedance 2.0 accepts reference video input specifically for camera language extraction.
How Seedance 2.0 Reads Camera Language
The @Video tag is the mechanism. When you upload a reference video and tag it in your prompt, you specify what the model should extract from it. This is critical — a single reference video contains camera movement, subject motion, visual effects, lighting, and pacing. You need to tell the model which element to use.
The @Video Tag Syntax
```
@Video1 — reference camera movement and tracking trajectory
```
The prompt explicitly states what to reference. Compare these two approaches:
Vague (unreliable):

```
Use @Video1 as reference. Generate a city scene.
```

Specific (reliable):

```
Replicate @Video1's camera movement exactly — the tracking speed,
trajectory, and push-in timing. Apply this camera work to a new scene:
a samurai walking through a bamboo forest at dawn.
```
The second version tells the model: extract only the camera language from the reference. Generate new content (samurai, bamboo forest) but move the virtual camera exactly as the reference video's camera moves.
What You Can Extract
| Reference Element | Prompt Language | Example |
|---|---|---|
| Camera path/trajectory | "replicate camera movement" | Tracking, dolly, orbit, crane |
| Camera speed | "match camera pacing" | Slow creep, fast whip pan |
| Camera + subject motion | "replicate camera and choreography" | Dance + camera combo |
| Only subject motion | "replicate movement/action from @Video1" | Character walking pattern |
| Visual effects | "replicate transition effects" | Whip pan transitions, morphs |
Key rule: Be explicit about what you're referencing. If you want only the camera movement, say "camera movement." If you also want the action choreography, say both. Ambiguity leads to mixed results.
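To stay on the reliable side of this rule, it can help to assemble prompts programmatically so the extraction instruction always comes before the scene description. The helper below is a small sketch of that pattern; the function name and structure are our own, not part of the API:

```python
def build_camera_prompt(scene, aspects=("camera movement", "trajectory"),
                        tag="@Video1"):
    """Assemble an explicit reference prompt: first state WHAT to extract
    from the tagged reference, then describe the NEW content to generate."""
    extract = f"Replicate {tag}'s " + " and ".join(aspects) + " exactly."
    return f"{extract}\n\nApply this camera work to a new scene: {scene}"

prompt = build_camera_prompt(
    "a samurai walking through a bamboo forest at dawn",
    aspects=("camera movement", "tracking speed", "push-in timing"),
)
print(prompt)
```

Because the extraction clause is generated first, every prompt this produces is "specific" in the sense above: the model is never left guessing which element of the reference to use.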
For a complete guide to the @ tag reference system, see our Multimodal Reference: The Ultimate Guide to @Tags.
Setting Up Your Environment
Requirements
- Python 3.8+
- `requests` library
- EvoLink API key (sign up free)
- Reference video files (MP4, 2–15 seconds, under 50MB, 480p–720p)
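Before uploading, it's worth sanity-checking a reference file against these constraints. The sketch below checks only what the standard library can see (extension and file size); verifying duration (2–15 seconds) and resolution (480p–720p) would need an external probe tool such as ffprobe. The 50MB limit comes from the requirements above:

```python
from pathlib import Path

MAX_BYTES = 50 * 1024 * 1024  # 50MB upload limit from the requirements

def reference_problems(path):
    """Return a list of constraint violations for a reference video file.
    Checks extension and size only; duration and resolution need a
    probe tool like ffprobe."""
    p = Path(path)
    problems = []
    if p.suffix.lower() != ".mp4":
        problems.append(f"expected .mp4, got {p.suffix or 'no extension'}")
    if p.exists() and p.stat().st_size > MAX_BYTES:
        problems.append(f"file is {p.stat().st_size} bytes, limit is {MAX_BYTES}")
    return problems
```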
Install Dependencies
```
pip install requests
```
Base API Configuration
```python
import requests
import time

EVOLINK_API_KEY = "your-evolink-api-key"
BASE_URL = "https://api.evolink.ai/v1"

HEADERS = {
    "Authorization": f"Bearer {EVOLINK_API_KEY}",
    "Content-Type": "application/json"
}

def poll_task(task_id, interval=5, timeout=300):
    elapsed = 0
    while elapsed < timeout:
        resp = requests.get(
            f"{BASE_URL}/tasks/{task_id}",
            headers=HEADERS
        )
        result = resp.json()
        status = result.get("status")
        if status == "completed":
            print(f"Video ready: {result['results'][0]}")
            return result
        elif status == "failed":
            print(f"Generation failed: {result.get('error')}")
            return result
        print(f"Status: {status} ({elapsed}s elapsed)")
        time.sleep(interval)
        elapsed += interval
    print("Timeout reached")
    return None
```
This base code handles authentication and task polling. Every case below builds on it.
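Once `poll_task` reports completion, the URL in `results[0]` points at the finished video. A small download helper, sketched below assuming that result schema, saves it locally with a filename derived from the URL:

```python
import requests
from pathlib import Path
from urllib.parse import urlparse

def filename_from_url(url, default="output.mp4"):
    """Derive a local filename from the URL's path component."""
    name = Path(urlparse(url).path).name
    return name or default

def download_video(url, dest_dir="."):
    """Stream the finished video to disk and return the saved path."""
    dest = Path(dest_dir) / filename_from_url(url)
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 16):
                f.write(chunk)
    return dest
```

Streaming in chunks keeps memory flat even for longer, higher-quality outputs.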
Get your free EvoLink API key at evolink.ai to follow along with the examples below.
Case 1: One-Take Tracking Shot
The one-take tracking shot is one of the most impressive camera techniques in filmmaking. The camera follows a subject through multiple environments in a single continuous take — no cuts. Think of the famous Copacabana scene in Goodfellas or the hallway fight in Oldboy.
With Seedance 2.0, you replicate this by uploading a reference video that demonstrates a continuous tracking movement, then generating new content that follows the same camera path.
What You Need
Reference video: Any clip showing a continuous tracking camera movement (2–15 seconds). A gimbal walking shot, a drone following shot, or a Steadicam clip works well.
Prompt: Describes the new content to generate, while referencing the camera movement from your video.
The Prompt
```
Replicate @Video1's camera movement exactly — continuous one-take
tracking shot, maintaining the same speed, trajectory, and smooth
forward motion throughout.

Apply this camera work to a new scene: a parkour runner sprinting
through narrow city alleyways, leaping over obstacles, vaulting up
a staircase, and reaching a rooftop overlooking the city skyline.
Golden sunset lighting. Dynamic and energetic. No cuts.
```
Key elements of this prompt:
- Lines 1–3: Explicitly tell the model to extract camera movement from @Video1
- Lines 5–8: Describe entirely new content — the model generates this subject matter
- "No cuts": Reinforces the one-take requirement
Complete Python Code
```python
# Case 1: One-Take Tracking Shot
response = requests.post(
    f"{BASE_URL}/videos/generations",
    headers=HEADERS,
    json={
        "model": "seedance-2.0",
        "prompt": (
            "Replicate @Video1's camera movement exactly — continuous "
            "one-take tracking shot, maintaining the same speed, "
            "trajectory, and smooth forward motion throughout.\n\n"
            "Apply this camera work to a new scene: a parkour runner "
            "sprinting through narrow city alleyways, leaping over "
            "obstacles, vaulting up a staircase, and reaching a rooftop "
            "overlooking the city skyline. Golden sunset lighting. "
            "Dynamic and energetic. No cuts."
        ),
        "video_urls": ["https://your-cdn.com/tracking_reference.mp4"],
        "duration": 10,
        "quality": "720p"
    }
)

task_id = response.json()["id"]
print(f"Task created: {task_id}")
result = poll_task(task_id)
```
What to Expect
The generated video will show a parkour runner in a city environment — but the camera movement (tracking speed, forward momentum, smooth continuous motion) comes from your reference video. The model doesn't copy the subject or scenery from the reference. It copies how the camera moves.
Example output: One-take tracking shot following a parkour runner through urban environments. Camera maintains continuous forward motion with smooth gimbal-like stability.
Try it yourself: Swap in your own reference video — a drone following shot, a car dashcam clip, or a walking gimbal video — and change the prompt to match your desired scene. The camera movement transfers.
Case 2: Hitchcock Zoom (Dolly Zoom)
The dolly zoom — invented for Alfred Hitchcock's Vertigo (1958) — is one of cinema's most disorienting and powerful camera techniques. The camera physically moves toward (or away from) the subject while the lens zooms in the opposite direction. The subject stays the same size in frame, but the background warps dramatically. It creates a visceral sense of unease, realization, or emotional shift.
In real filmmaking, this requires a dolly track and precise zoom timing. With Seedance 2.0, all you need is a reference clip.
What You Need
Reference video: A clip demonstrating a dolly zoom effect. You can find examples on YouTube by searching "dolly zoom effect" or "vertigo effect tutorial." The clip should be 3–8 seconds showing the background compression/expansion while the subject stays stationary.
Prompt: New subject matter with explicit dolly zoom reference.
The Prompt
```
Replicate @Video1's camera technique exactly — the dolly zoom
(Hitchcock zoom) effect where the camera moves forward while
zooming out, keeping the subject the same size while the
background dramatically stretches.

Apply this effect to: a detective standing in a dim corridor.
As the dolly zoom activates, the corridor behind him stretches
impossibly long, creating a sense of dawning horror.
Dramatic side lighting with deep shadows. Film noir atmosphere.
```
Complete Python Code
```python
# Case 2: Hitchcock Zoom (Dolly Zoom)
response = requests.post(
    f"{BASE_URL}/videos/generations",
    headers=HEADERS,
    json={
        "model": "seedance-2.0",
        "prompt": (
            "Replicate @Video1's camera technique exactly — the dolly "
            "zoom (Hitchcock zoom) effect where the camera moves forward "
            "while zooming out, keeping the subject the same size while "
            "the background dramatically stretches.\n\n"
            "Apply this effect to: a detective standing in a dim "
            "corridor. As the dolly zoom activates, the corridor behind "
            "him stretches impossibly long, creating a sense of dawning "
            "horror. Dramatic side lighting with deep shadows. "
            "Film noir atmosphere."
        ),
        "video_urls": ["https://your-cdn.com/dolly_zoom_reference.mp4"],
        "duration": 8,
        "quality": "720p"
    }
)

task_id = response.json()["id"]
print(f"Task created: {task_id}")
result = poll_task(task_id)
```
Why This Works
The dolly zoom is notoriously hard to describe in text. Prompting "zoom in while moving backward" often produces confused results in other AI video tools. By providing a reference video that demonstrates the technique, Seedance 2.0 can analyze the spatial relationship changes — how the background compresses/expands relative to the foreground — and reproduce them precisely.
Tip: The cleaner and more isolated the dolly zoom in your reference video, the better. Avoid reference clips with lots of subject movement or scene changes — the model might confuse camera motion with subject motion.
Case 3: Orbital Camera (360° Rotation)
The orbital shot rotates the camera around a subject, creating a dramatic reveal or establishing a character's presence. It's a staple of music videos, hero introductions, and product showcases.
What You Need
Reference video: A clip showing a camera orbiting around a subject. A smooth 180° or 360° rotation works best. Turntable product shots or character reveal shots are ideal references.
Prompt: New subject + explicit orbital reference.
The Prompt
```
Replicate @Video1's orbital camera movement — the smooth 360°
rotation around the subject, maintaining consistent distance
and speed throughout the arc.

Apply this camera movement to: a lone astronaut standing on
the surface of Mars. Red desert landscape stretches to the
horizon. The orbital camera reveals the astronaut from all
angles as dust particles float in the thin atmosphere.
Epic cinematic scale. Golden hour Martian lighting.
```
Complete Python Code
```python
# Case 3: Orbital Camera (360 Rotation)
response = requests.post(
    f"{BASE_URL}/videos/generations",
    headers=HEADERS,
    json={
        "model": "seedance-2.0",
        "prompt": (
            "Replicate @Video1's orbital camera movement — the smooth "
            "360 degree rotation around the subject, maintaining "
            "consistent distance and speed throughout the arc.\n\n"
            "Apply this camera movement to: a lone astronaut standing "
            "on the surface of Mars. Red desert landscape stretches to "
            "the horizon. The orbital camera reveals the astronaut from "
            "all angles as dust particles float in the thin atmosphere. "
            "Epic cinematic scale. Golden hour Martian lighting."
        ),
        "video_urls": ["https://your-cdn.com/orbital_reference.mp4"],
        "duration": 10,
        "quality": "720p"
    }
)

task_id = response.json()["id"]
print(f"Task created: {task_id}")
result = poll_task(task_id)
```
Choosing the Right Orbital Reference
Not all orbital shots are equal. The reference video determines:
| Reference Quality | Result Quality |
|---|---|
| Smooth, steady rotation at constant speed | Clean, professional orbital |
| Handheld wobbly rotation | Organic, documentary-style orbit |
| Fast whip-around | Dynamic, high-energy reveal |
| Slow 90° partial orbit | Subtle, dramatic angle shift |
Pick a reference that matches the energy you want. A turntable product video gives you machine-smooth rotation. A handheld walk-around gives you organic movement.
Advanced: Combining Camera Movement with Other References
The real power of Seedance 2.0's reference system emerges when you combine camera movement with other input types. You're not limited to a single reference — you can use up to 3 video references and 9 image references (12 files total).
Camera + Character + Style
Here's a three-input combination:
- @Video1 — camera movement (orbital shot)
- @Image1 — character appearance (a specific character design)
- @Image2 — style reference (a particular art style or color palette)
```python
# Advanced: Camera + Character + Style combination
response = requests.post(
    f"{BASE_URL}/videos/generations",
    headers=HEADERS,
    json={
        "model": "seedance-2.0",
        "prompt": (
            "Replicate @Video1's orbital camera movement — smooth "
            "rotation around the subject.\n\n"
            "@Image1 is the character — maintain this character's "
            "appearance exactly.\n\n"
            "@Image2 is the visual style reference — match its color "
            "palette, lighting mood, and artistic treatment.\n\n"
            "Scene: The character from @Image1 stands in the center "
            "of a grand cathedral. The orbital camera from @Video1 "
            "slowly reveals the architecture. Visual style matches "
            "@Image2 throughout."
        ),
        "image_urls": [
            "https://your-cdn.com/character_design.png",
            "https://your-cdn.com/art_style_reference.jpg"
        ],
        "video_urls": [
            "https://your-cdn.com/orbital_reference.mp4"
        ],
        "duration": 10,
        "quality": "720p"
    }
)

task_id = response.json()["id"]
print(f"Task created: {task_id}")
result = poll_task(task_id)
```
Reference Allocation Strategy
When mixing multiple reference types, be strategic about your 12-file budget:
| Scenario | Video Refs | Image Refs | Audio Refs |
|---|---|---|---|
| Camera replication only | 1 (camera) | 0 | 0 |
| Camera + character | 1 (camera) | 1 (character) | 0 |
| Camera + character + style | 1 (camera) | 2 (character + style) | 0 |
| Camera + choreography + character | 2 (camera + dance) | 1 (character) | 0 |
| Full production | 1 (camera) | 3 (character + scene + style) | 1 (music) |
Rule of thumb: Start with 2–3 references. Adding more doesn't always improve results — it can introduce conflicting signals. Use the minimum number of references needed to communicate your intent.
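A quick guard can enforce these limits before a request goes out. The caps below (3 videos, 9 images, 12 files total) come from the reference-system description earlier in this article; this validator is our own sketch, not part of the API client:

```python
def check_reference_budget(video_urls=(), image_urls=(), audio_urls=()):
    """Raise ValueError if the reference set exceeds the documented
    Seedance 2.0 limits: up to 3 videos, 9 images, 12 files total."""
    v, i, a = len(video_urls), len(image_urls), len(audio_urls)
    if v > 3:
        raise ValueError(f"too many video references: {v} > 3")
    if i > 9:
        raise ValueError(f"too many image references: {i} > 9")
    if v + i + a > 12:
        raise ValueError(f"too many references in total: {v + i + a} > 12")
    return True
```

Failing fast locally is cheaper than waiting for an API-side rejection mid-pipeline.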
For more on multi-reference strategies, see our Multimodal Reference: The Ultimate Guide to @Tags.
Common Mistakes and How to Fix Them
Mistake 1: Not Specifying What to Reference
Bad:

```
Use @Video1. A knight rides a horse through a valley.
```

The model doesn't know if you want the camera movement, the subject motion, the visual style, or everything from @Video1.

Good:

```
Replicate @Video1's camera movement and tracking trajectory.
A knight rides a horse through a green valley at sunrise.
```
Mistake 2: Reference Video Too Long or Complex
Reference videos should be 2–15 seconds and show a clean, identifiable camera movement. A 15-second clip with three different camera techniques (pan, then zoom, then orbit) gives confusing signals.
Fix: Trim your reference to isolate the specific camera movement you want. Use the simplest, cleanest example of the technique.
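One way to do the trimming is with ffmpeg. The helper below builds an ffmpeg command that cuts a short segment without re-encoding; the filenames and timestamps are placeholders. Note that stream-copy cuts snap to keyframes, which is usually fine for reference clips:

```python
import subprocess

def trim_command(src, dst, start_s, duration_s):
    """Build an ffmpeg command that extracts a short segment.
    -ss before -i seeks quickly; -c copy avoids re-encoding
    (cuts land on the nearest keyframe)."""
    return [
        "ffmpeg", "-y",
        "-ss", str(start_s),
        "-i", src,
        "-t", str(duration_s),
        "-c", "copy",
        dst,
    ]

cmd = trim_command("full_clip.mp4", "dolly_zoom_only.mp4", 12, 5)
# subprocess.run(cmd, check=True)  # uncomment if ffmpeg is installed
```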
Mistake 3: Confusing Camera Movement with Subject Movement
A reference video of someone dancing contains two things: how the camera moves and how the subject moves. If you only want the camera work, say so explicitly:
```
Replicate ONLY @Video1's camera movement — the pan speed, tracking
trajectory, and framing rhythm. Ignore the subject's actions.
New subject: a robot assembling car parts on a factory floor.
```
Mistake 4: Conflicting Prompt and Reference
If your reference shows a slow, smooth dolly push-in but your prompt says "fast-paced action with rapid cuts," the model receives contradictory signals.
Fix: Align your text prompt with your reference video's energy. The prompt describes content; the reference demonstrates technique.
Mistake 5: Expecting Perfect First Results
Camera replication is sophisticated. Your first attempt may not perfectly match the reference. Iterate:
- Start with a simple prompt + clean reference
- Review the output — is the camera movement close?
- Adjust the prompt language to be more specific about what's off
- Try a different reference video if the technique isn't transferring
FAQ
Can I combine camera movement from one video with choreography from another?
Yes. Use two @Video references: @Video1 for camera movement and @Video2 for choreography/action. Specify in the prompt: "Replicate @Video1's camera movement and @Video2's dance choreography." Seedance 2.0 supports up to 3 video references simultaneously.
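As a sketch, the request body for that combination looks like the single-reference cases above, with two entries in video_urls and both tags addressed in the prompt. We assume here that @Video1 and @Video2 map to the order of video_urls; the URLs and scene text are placeholders:

```python
def two_reference_payload(camera_url, choreo_url, scene):
    """Build a request body that takes camera movement from @Video1
    and choreography from @Video2 (tags assumed to follow
    video_urls order)."""
    return {
        "model": "seedance-2.0",
        "prompt": (
            "Replicate @Video1's camera movement and @Video2's dance "
            f"choreography.\n\nScene: {scene}"
        ),
        "video_urls": [camera_url, choreo_url],
        "duration": 10,
        "quality": "720p",
    }

payload = two_reference_payload(
    "https://your-cdn.com/orbital_reference.mp4",
    "https://your-cdn.com/dance_reference.mp4",
    "a dancer on a neon-lit rooftop at night",
)
```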
What video format and length works best for camera references?
MP4 format, 480p–720p resolution, 2–15 seconds duration, under 50MB file size. For camera movement references, shorter is often better — a clean 3–5 second clip of a single camera technique transfers more reliably than a long clip with multiple techniques.
How is this different from Sora or Kling camera control?
Sora 2 and Kling 3.0 use text-based camera direction — you describe the movement in words ("dolly in," "pan left"). Results depend on how well the model interprets your text. Seedance 2.0 uses reference-based camera control — you show the model what you want via @Video tags. This produces more precise and consistent camera replication, especially for complex movements like Hitchcock zooms or one-take tracking shots that are difficult to describe in text.
Can I use a screen recording or phone video as a camera reference?
Yes. Any video that demonstrates the camera movement you want works as a reference. A phone video you shot while walking produces a handheld tracking shot. A screen recording of a film clip transfers that film's camera language. The model extracts the camera behavior regardless of production quality.
Does camera replication work with image-to-video generation?
Yes. You can combine @Video1 (camera movement reference) with @Image1 (first frame / character) to generate a video that starts from your image and moves the camera according to your video reference. This is powerful for product videos and character showcases.
Start Replicating Any Camera Movement
Camera movement is no longer limited by equipment or expertise. With a reference video and Seedance 2.0's @Video tag system, you can replicate any camera technique — from subtle rack-focus pulls to full Hitchcock zooms — and apply it to any content you can describe.
The three cases in this tutorial cover the most requested camera techniques:
- One-take tracking for immersive, continuous movement
- Dolly zoom for dramatic tension and psychological impact
- Orbital shot for character reveals and product showcases
Each works the same way: upload a reference, tag it, describe your scene, let the model handle the camera.
Ready to try it? Sign up for a free EvoLink API key and start generating cinematic AI videos with precise camera control.
Related reading:
- Seedance 2.0 Prompts: Complete Guide to Multimodal Video Generation — master the shot-script prompting format for complex scenes
- Seedance 2.0 Multimodal Reference: The Ultimate Guide to @Tags — deep dive into the complete @tag reference system
Last updated: February 20, 2026 | Written by J, Growth Lead at EvoLink