AWS AI Starter Kit Gives Hackers "God Mode" Access

Published: April 10, 2026 at 12:42 AM

What happened

Palo Alto Networks found that Amazon's Bedrock AgentCore starter toolkit creates default permissions letting one compromised AI agent read every other agent's memory across your entire AWS account. The templated setup also grants wildcard access to code interpreters and container images, turning a single breach into full-fleet exposure. AWS now warns the defaults are for testing only, not production. Yet there's no public count of how many teams unknowingly shipped these permissions live.
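To make the problem concrete, a permissions policy of the shape described would look something like the sketch below. This is illustrative only: the action names and structure are assumptions for the example, not the toolkit's actual generated output.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "IllustrativeOverlyBroadDefaults",
      "Effect": "Allow",
      "Action": [
        "bedrock-agentcore:*",
        "ecr:*"
      ],
      "Resource": "*"
    }
  ]
}
```

The wildcards are the issue: `"Resource": "*"` means the role isn't scoped to one agent's memory or one container repository, so any agent holding it can reach every other agent's resources in the account. A production role would instead list only the specific actions the agent needs and pin `Resource` to the individual agent, memory, and image ARNs.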

Why it matters

The toolkit shipped with a how-to that made insecure-by-default the path of least resistance, and it took outside researchers flagging the risk before the docs caught up.
