Descript's AI Dubbing Exports Jump 15% in 30 Days

March 9, 2026

Published: March 9, 2026 at 12:30 AM

Updated: March 9, 2026 at 12:30 AM

What happened

Descript used OpenAI's GPT-5 models to automatically translate and dub video libraries, and exports of dubbed videos jumped 15% in the first month. The trick? Breaking transcripts into chunks, counting syllables, then prompting the model to hit exact durations while preserving meaning. Natural pacing improved dramatically: 73-83% of dubbed segments now fall within the acceptable rhythm window, up from 40-60% before. Duration adherence climbed by as much as 43 percentage points for some languages. The system maintained 85.5% semantic accuracy, meaning most translations still sound like what the speaker actually said.

Why it matters

Studios sitting on thousands of untranslated videos can now dub them without an army of human retimers adjusting every line to fit the original pacing.

Sources