Krux

OpenAI Lets You Deepfake Yourself With Permission Controls
Published: March 25, 2026 at 12:35 AM
Updated: March 25, 2026 at 12:35 AM
100-word summary
OpenAI just published its safety playbook for Sora, and the most interesting part isn't what it blocks. It's how it handles consent. Users can upload photos of real people to generate videos, but only after attesting they have permission. The company then tracks those likenesses as "Characters" that only the original user can grant or revoke access to. Every video gets watermarked with visible and invisible provenance signals, plus C2PA metadata. OpenAI keeps internal reverse-search tools to trace any Sora output back to its origin. Videos of kids get extra scrutiny and mandatory watermarks. The system scans prompts, video frames, and audio transcripts to catch policy violations before they go live.
What happened
OpenAI published its safety playbook for Sora, its video-generation model. Users can upload photos of real people to generate videos, but only after attesting they have the subject's permission. The company then tracks those likenesses as "Characters," and only the original uploader can grant or revoke other users' access to them. Every video is watermarked with visible and invisible provenance signals, plus C2PA metadata, and OpenAI maintains internal reverse-search tools to trace any Sora output back to its origin. Videos depicting kids get extra scrutiny and mandatory watermarks. The system scans prompts, video frames, and audio transcripts to catch policy violations before videos go live.
Why it matters
The most interesting part of the playbook isn't what it blocks; it's how it handles consent. By treating a person's likeness as something the original uploader can grant and revoke access to, rather than a one-time upload, the system keeps control of a likeness with the person who registered it.
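The access model the article describes — a likeness registered as a "Character" whose access only the original uploader can grant or revoke — can be sketched as a small permission object. This is a hypothetical illustration of the concept, not OpenAI's actual implementation; all names (`Character`, `grant`, `revoke`, `may_generate`) are invented for this sketch.

```python
from dataclasses import dataclass, field


@dataclass
class Character:
    """A tracked real-person likeness, owned by the user who uploaded it."""
    character_id: str
    owner_id: str                      # the original uploader
    attested_consent: bool             # uploader attested they have permission
    authorized_users: set[str] = field(default_factory=set)

    def grant(self, acting_user: str, grantee: str) -> None:
        # Per the playbook's description, only the original uploader
        # can extend access to the likeness.
        if acting_user != self.owner_id:
            raise PermissionError("only the original uploader can grant access")
        self.authorized_users.add(grantee)

    def revoke(self, acting_user: str, grantee: str) -> None:
        # Revocation is likewise restricted to the owner.
        if acting_user != self.owner_id:
            raise PermissionError("only the original uploader can revoke access")
        self.authorized_users.discard(grantee)

    def may_generate(self, user_id: str) -> bool:
        """A generation request passes only with attested consent and access."""
        return self.attested_consent and (
            user_id == self.owner_id or user_id in self.authorized_users
        )


# Illustrative usage: access is revocable at any time by the owner.
char = Character("char-1", owner_id="alice", attested_consent=True)
char.grant("alice", "bob")
print(char.may_generate("bob"))    # access granted
char.revoke("alice", "bob")
print(char.may_generate("bob"))    # access revoked
```

The key design point mirrored here is that revocation is a first-class operation: consent is not a one-time checkbox but an ongoing, owner-controlled state that every generation request is checked against.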