
The Rise of "Video Prompts": How to Master Sora and Runway Gen-3

Dhananjoy Ghosh 8 min read

We are witnessing a shift. For the past two years, you've been a photographer—describing static scenes to Midjourney. Now, with OpenAI's Sora and Runway Gen-3, you must become a film director.

The rules have changed. Static prompts ("a beautiful beach") result in boring, still videos. To create cinema, you need to describe time, motion, and camera work.


Chapter 1: The "Director's Vocabulary"

AI video models are trained on movie footage, so they speak the language of cinema. If you don't use these terms, you're leaving much of the model's power untapped.

  • Pan: Camera rotates horizontally (left/right) from a fixed point. Example: "Camera pans right to reveal the ocean."
  • Tilt: Camera rotates vertically (up/down) from a fixed point. Example: "Tilt up from the shoes to the face."
  • Dolly / Truck: The entire camera physically moves through space. Example: "Dolly in towards the door."
  • FPV (First Person View): A dynamic, fast-moving drone shot. Example: "FPV drone shot flying through a narrow canyon."
My Take: Beginners confuse "Zoom" and "Dolly." A zoom changes the lens's focal length (flattening the background), while a dolly physically moves the camera (preserving depth and parallax). A lot of AI models respond better to "Dolly" for 3D realism.
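If you build prompts programmatically, the vocabulary above can live in a small lookup table. This is just an organizational sketch using the example phrases quoted in this chapter; neither Sora nor Runway requires any particular structure.

```python
# Camera-move glossary from Chapter 1, keyed by term. The example phrases
# are the ones quoted above; the dict itself is an illustrative convention,
# not part of any model's API.
CAMERA_MOVES = {
    "pan": "Camera pans right to reveal the ocean.",
    "tilt": "Tilt up from the shoes to the face.",
    "dolly": "Dolly in towards the door.",
    "fpv": "FPV drone shot flying through a narrow canyon.",
}

def camera_phrase(move):
    """Look up an example phrase for a camera move (case-insensitive)."""
    return CAMERA_MOVES[move.lower()]

print(camera_phrase("Dolly"))  # prints: Dolly in towards the door.
```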

Chapter 2: The Video Prompt Formula

Forget the Midjourney formula. Video needs a new structure that prioritizes action and flow.

[Subject] + [Action/Motion] + [Environment] + [Camera Move] + [Style]

Example in Action:

Weak Prompt: "A tiger running."

Director's Prompt: "A cybernetic tiger [Subject] sprinting aggressively [Action] through a neon-lit Tokyo alleyway [Environment]. Low angle tracking shot, camera dollying alongside the tiger [Camera], cinematic 4k, motion blur [Style]."
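To keep prompts consistent, you can template the formula in a few lines of Python. This is a sketch: the function name, slot order, and default style are my own conventions, not part of Sora's or Runway's API.

```python
# Minimal prompt builder following the [Subject] + [Action] + [Environment]
# + [Camera Move] + [Style] formula from Chapter 2. Names and defaults are
# illustrative assumptions, not an official interface.
def build_video_prompt(subject, action, environment, camera, style="cinematic 4k"):
    """Join the five formula slots into a single director-style prompt."""
    return f"{subject} {action} {environment}. {camera}, {style}."

prompt = build_video_prompt(
    subject="A cybernetic tiger",
    action="sprinting aggressively",
    environment="through a neon-lit Tokyo alleyway",
    camera="Low angle tracking shot, camera dollying alongside the tiger",
    style="cinematic 4k, motion blur",
)
print(prompt)
```

Filling the slots explicitly makes it hard to forget the camera move, which is the part most "weak" prompts leave out.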

Chapter 3: Mastering Physics & Transitions

One of the coolest things AI video can do is "morphing"—changing one object into another seamlessly.

The Morph Prompt

"A close up of a blooming rose that seamlessly transforms into a burning galaxy, smooth transition, match cut."

Simulating Physics

To get realistic water, cloth, or hair, you must describe the *force* acting on it.

  • Instead of "a woman standing," try "a woman standing in a gale force wind, hair whipping violently across her face."
  • Instead of "a river," try "a turbulent river crashing against rocks with white foam spray."
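The substitution pattern above can be mechanized: pair each element with a force phrase before prompting. A toy sketch; the force table is my own assumption, not a feature of either model.

```python
# Attach an explicit physical force to a subject so the model animates it.
# The FORCES mapping is an illustrative assumption, not model documentation.
FORCES = {
    "hair": "whipping violently in a gale force wind",
    "water": "crashing against rocks with white foam spray",
    "cloth": "billowing and snapping in strong gusts",
}

def add_force(subject, element):
    """Return the subject plus a force description for the given element."""
    force = FORCES.get(element)
    return f"{subject}, {element} {force}" if force else subject

print(add_force("a woman standing on a cliff", "hair"))
# prints: a woman standing on a cliff, hair whipping violently in a gale force wind
```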

Chapter 4: Sora vs. Runway Gen-3

Which tool should you use? It depends on your goal.

OpenAI Sora

  • Best For: Long coherence (up to 60s), complex physical interactions, and high fidelity.
  • Weakness: Limited control (currently), no "motion brush" equivalent yet publicly available.
  • Verdict: The "Blockbuster" choice.

Runway Gen-3

  • Best For: Control. The "Motion Brush" lets you paint exactly where you want movement. Excellent style presets.
  • Weakness: Generally shorter clips (5-10s) before losing coherence.
  • Verdict: The "Editor's" choice.

Chapter 5: Copy-Paste Video Prompts

Try these prompts to see what the models are capable of.

  • 1. The "Bullet Time" Shot:
    "Time freezes as a coffee cup falls off a table, liquid suspended in mid-air, camera orbits 360 degrees around the spill, extreme detail."
  • 2. The Drone Landscape:
    "Cinematic FPV drone shot flying fast just above a river in a dense pine forest, mist rising from the water, morning sunlight breaking through trees."
  • 3. The Macro Nature Doc:
"Extreme macro close-up of an ant carrying a leaf, camera tracking alongside the ant, shallow depth of field, 100mm macro lens, National Geographic style."
  • 4. The Cyberpunk City:
    "Futuristic city with flying cars, camera tilts up from street level to the top of a skyscraper, neon lights reflecting in rain puddles, Blade Runner aesthetic."
  • 5. Historical Re-enactment:
    "1920s New York City street scene, black and white film grain, old cars driving by, camera static tripod shot, vintage film look."

Frequently Asked Questions

Are these tools free?

Runway offers a limited free trial/credits, then paid tiers. Sora's public pricing has not been finalized but is expected to be a premium feature for ChatGPT Plus or Enterprise users.

Can I edit the video after generation?

Not traditionally. You usually have to re-generate or use "inpainting" features if the model supports it. It's best to get the prompt right first.

Lights, Camera, Action

Video is the next frontier of AI. By mastering the language of cinema—pans, tilts, and dolly shots—you put yourself ahead of 99% of prompters who are still writing "cool robot video."

Grab a camera (or a keyboard) and start directing.