Krux
April 25, 2026
GPT-5-Codex Runs Autonomously for 7+ Hours on Complex Tasks
Published: April 25, 2026 at 12:28 AM
100-word summary
OpenAI upgraded Codex with GPT-5, letting it work independently for over seven hours on large coding tasks. The standout efficiency gain: container caching cut median completion time by 90%. For simple requests, GPT-5-Codex uses 94% fewer tokens than base GPT-5; for complex ones, it spends roughly twice as long reasoning and iterating. The system now works across the terminal, IDE, web, GitHub, and ChatGPT, handling code reviews, documentation, and dashboards. OpenAI says early users see 30-50% faster iteration cycles. The catch: it's an assistant, not autopilot. The default mode runs in a sandbox with network access disabled.
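The container-caching gain reads as a cold-start elimination: only the first task in a session pays the container boot cost, and every later task reuses the warm container. Here is a toy sketch of that effect; all names and numbers are invented for illustration, not OpenAI's implementation.

```python
# Hypothetical illustration of container caching: a warm container is
# reused across tasks, so only the first task pays the cold-start cost.
# COLD_START and TASK_WORK are made-up figures, not OpenAI's.
COLD_START = 90  # seconds to build and boot a fresh container
TASK_WORK = 10   # seconds of actual task execution

class ContainerPool:
    def __init__(self):
        self._warm = {}  # session id -> warm container marker

    def run(self, session_id: str) -> int:
        """Return simulated seconds to complete one task."""
        if session_id in self._warm:
            return TASK_WORK               # cache hit: skip the cold start
        self._warm[session_id] = object()  # cache the container for reuse
        return COLD_START + TASK_WORK      # cache miss: pay full boot cost

pool = ContainerPool()
times = [pool.run("sess-1") for _ in range(10)]
# Only the first call pays the 100s cold start; the other nine hit the
# warm container at 10s each, so the median falls from 100s to 10s.
print(sorted(times)[len(times) // 2])  # prints 10
```

With these invented numbers the median drops 90%, matching the scale of the reported gain: caching shifts the expensive setup out of the typical request entirely rather than merely speeding it up.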
Why it matters
Seven-plus hours of unattended work shifts Codex from line-by-line assistance toward delegation: developers can hand off whole tasks like code reviews, documentation, and dashboards and check back later, and OpenAI's reported 30-50% faster iteration cycles suggest real workflow gains. But the company frames the system as an assistant, not autopilot, and the default mode runs in a sandbox with network access disabled, pairing the new autonomy with containment.