Production permissions for AI agents with tool access
6 Risk Types
4 Permission Tiers
1-in-6 Bypass Rate
The Incident
DROP DATABASE
An AI coding agent, told to "clean up the staging environment," ran DROP DATABASE on production. No confirmation gate. No sandbox. No rollback. Average cost of an AI-caused incident: $2.3M.
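The missing control here is a confirmation gate: a check that refuses to execute irreversible statements unless a human explicitly approves. A minimal sketch in Python, assuming a hypothetical `gate_sql` helper and an illustrative (not exhaustive) pattern list:

```python
import re

# Statements that irreversibly destroy data or schema (illustrative list,
# not a complete taxonomy of destructive SQL).
DESTRUCTIVE_SQL = re.compile(
    r"^\s*(DROP\s+(DATABASE|TABLE|SCHEMA)|TRUNCATE|DELETE\s+FROM\b(?!.*\bWHERE\b))",
    re.IGNORECASE,
)

def gate_sql(statement: str, confirm) -> bool:
    """Return True if the statement may run.

    Destructive statements are blocked unless `confirm` (a callable that
    asks a human reviewer) explicitly returns True. Everything else
    passes through unchanged.
    """
    if DESTRUCTIVE_SQL.search(statement):
        return confirm(statement) is True
    return True

# The incident above: a destructive statement with no human in the loop.
assert gate_sql("DROP DATABASE prod", confirm=lambda s: False) is False
assert gate_sql("SELECT count(*) FROM users", confirm=lambda s: False) is True
```

A regex deny-list is the weakest form of this gate; a production version would classify statements via the database's own parser, but even this sketch would have stopped the incident above.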
Total Data Loss
What Is It
Agent Permissions as Security Boundary
AI agents have direct tool access: file systems, databases, APIs, shell commands. But there is no standardized permission model. The gap between what an agent can do and what it should do is the new attack surface. Stanford found 1 in 6 agents bypass safety instructions when pressured. 47% of public agent skills contain prompt injection payloads. This is not a theoretical risk. It is happening now.
Tool Access · No Permission Model · New Attack Surface
Mental Model
"Interns with Root Access"
Capable, fast, eager to help. Will also rm -rf / if they think you asked. The fix is not removing the intern. It is removing root access and adding supervised permission tiers.
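"Supervised permission tiers" can be sketched as a small policy check: every tool call maps to a tier, and anything above the auto-approved ceiling needs a human sign-off. The tier names below T0 and the tool-name mapping are illustrative assumptions, not a standard:

```python
from enum import IntEnum

class Tier(IntEnum):
    T0_READ = 0         # list files, SELECT queries, GET requests
    T1_WRITE = 1        # assumed: create/update files, INSERT, POST
    T2_DESTRUCTIVE = 2  # assumed: delete files, DROP/TRUNCATE, rm
    T3_ADMIN = 3        # assumed: credentials, IAM, prod config

# Hypothetical mapping from an agent's tool calls to tiers.
TOOL_TIERS = {
    "read_file": Tier.T0_READ,
    "http_get": Tier.T0_READ,
    "write_file": Tier.T1_WRITE,
    "sql_drop": Tier.T2_DESTRUCTIVE,
    "rotate_credentials": Tier.T3_ADMIN,
}

def requires_approval(tool: str, auto_max: Tier = Tier.T0_READ) -> bool:
    """True if a human must approve this call.

    Unknown tools default to the highest tier: fail closed, because the
    intern with root access is exactly the failure mode we are removing.
    """
    return TOOL_TIERS.get(tool, Tier.T3_ADMIN) > auto_max
```

The key design choice is the default: an unmapped tool is treated as T3, so forgetting to classify a new tool makes the agent slower, not more dangerous.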
Permission Tiers
4-Level Access
T0 Read · List files, SELECT queries, GET requests · Auto