OpenAI Now Calls Law Enforcement on ChatGPT Users



Published: May 1, 2026 at 12:15 AM



What happened

OpenAI disclosed that it escalates ChatGPT conversations to law enforcement when its systems detect imminent risk. The company pairs automated flags with human reviewers, who assess context before taking action; responses range from surfacing crisis hotlines to revoking access entirely. Under its zero-tolerance policy on violence, some users lose their accounts permanently.

Why it matters

Parents can now receive alerts on three channels (email, SMS, and push notification) when a teen's conversation triggers a safety concern. But OpenAI has published no data on how often it acts, how many conversations get flagged, or whether the system catches real threats more often than it misfires on creative writing. A transparency report remains hypothetical.
