Runway is one of the true pioneers of AI video. They shipped Gen-1 in early 2023, one of the first practical AI video models people actually used: it restyled existing footage from a text or image prompt. Gen-2 added text-to-video and went viral on social media later that year. Gen-3 Alpha raised the fidelity bar and added image-prompted generation. And now Gen-4 brings the feature filmmakers have been asking for since day one: reliable character consistency across multiple shots.
Character consistency is THE breakthrough.
This is what separates Gen-4 from every other model. Upload a reference image of a person, and Gen-4 maintains their exact appearance — face, clothing, body proportions — across completely different shots and scenes. No other model does this as reliably. For brand content, short films, music videos, and any project where the same character appears in multiple clips, this is game-changing. You're no longer limited to single isolated shots — you can build actual narratives.
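In practice, building a multi-shot narrative means reusing the same reference image in every generation request and varying only the per-shot prompt. A minimal sketch of that workflow (the field names, `"gen4"` model string, and `build_shot_requests` helper below are illustrative placeholders, not Runway's documented API schema):

```python
def build_shot_requests(reference_image_url: str, prompts: list[str], duration: int = 5) -> list[dict]:
    """Build one request payload per shot, reusing the same reference image
    so the character stays consistent across clips.

    NOTE: field names and the "gen4" model string are hypothetical
    placeholders for illustration, not Runway's actual API schema.
    """
    return [
        {
            "model": "gen4",
            "reference_image": reference_image_url,  # same image for every shot
            "prompt": prompt,                        # only this varies per shot
            "duration": duration,                    # seconds; Gen-4 caps clips at 10 s
        }
        for prompt in prompts
    ]

shots = [
    "medium shot, character walking through a rainy street at night",
    "close-up, character smiling in warm cafe light",
    "wide shot, character on a rooftop at dawn",
]
payloads = build_shot_requests("https://example.com/hero.png", shots)
print(len(payloads))  # one payload per shot
```

The design point is simply that the reference image is pinned once and threaded through every request, so each clip renders the same person even though the scenes differ.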
Gen-4 Turbo for speed.
The Turbo variant cuts generation time significantly while maintaining visual quality. For iterative workflows where you need to test multiple angles or compositions, Turbo lets you move fast without switching to a lower-quality model.
Generative Visual Effects (GVFX) — a new production paradigm.
Traditional VFX requires extensive modeling, rendering, and post-production. Gen-4 introduced GVFX: users provide a visual reference or text description — a character's action, a scene's atmosphere, a specific effect — and Gen-4 generates high-quality visual effects in minutes instead of weeks. To produce their latest demo reel, a Runway team member generated hundreds of individual clips in a few hours, then edited them into a coherent sequence. CEO Cristóbal Valenzuela Barrera told Bloomberg the entire process took days, not months.
Hollywood is already on board.
Runway signed a landmark deal with Lionsgate (the studio behind *The Hunger Games*) — the first major film studio to directly partner with an AI video model provider. Runway is using Lionsgate's 20,000+ title library to build a custom AI production model for storyboarding, background generation, and VFX. They've already created scenes for the TV series *House of David* and produced ads for Puma. As Valenzuela puts it: "AI is infrastructure like electricity. Every company will use AI. We're not an AI company — we're a media and entertainment company."
Cinematic camera controls that actually work.
Describe a dolly-in, a slow pan, a crane shot, or a tracking shot — and Gen-4 delivers. Runway was founded by artists and filmmakers, and that DNA shows in how naturally the model understands camera language.
The trade-offs are real.
Gen-4 outputs at 720p — not 1080p. For social media, 720p is fine. For large screens or broadcast, it's a constraint. Duration caps at 10 seconds per clip. And there's no audio generation — you'll add sound in post.
The best value on the platform.
At 2 credits per second ($0.10 per 5-second clip), Gen-4 is the cheapest video model we offer. $10 buys 100 five-second videos. Compare that to Runway's own subscriptions ($12–76/month with limited generations), and pay-per-video wins decisively.
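The arithmetic behind those numbers, as a quick sketch. The 2-credits-per-second rate comes from the pricing above; the $0.01-per-credit figure is derived from it ($0.10 for a 5-second clip at 2 credits/second):

```python
# Pricing stated above: 2 credits per second, $0.10 per 5-second clip.
CREDITS_PER_SECOND = 2
USD_PER_CREDIT = 0.01  # derived: $0.10 / (5 s * 2 credits/s)

def clip_cost_usd(duration_seconds: float) -> float:
    """Cost of a single Gen-4 clip at the stated per-second rate."""
    return duration_seconds * CREDITS_PER_SECOND * USD_PER_CREDIT

print(clip_cost_usd(5))       # 5-second clip: $0.10
print(clip_cost_usd(10))      # 10-second clip (the max duration): $0.20
print(10 / clip_cost_usd(5))  # five-second clips per $10: 100
```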