UpGuard, a leader in cybersecurity and risk management, has released new research exposing significant security vulnerabilities in developer workflows involving AI code agents. By analyzing more than 18,000 publicly available AI agent configuration files from GitHub repositories, the study found that one in five developers grants these agents unrestricted, high-risk permissions without human oversight or approval mechanisms.
The Hidden Dangers of Over-Permissive AI Agents
Developers increasingly rely on AI tools to accelerate coding tasks, but many configurations grant broad permissions—including web downloads, file read/write/delete operations, and arbitrary code execution—without requiring confirmation or limiting scope. This "vibe coding" approach prioritizes speed over security, turning helpful assistants into potential persistent threats when compromised through prompt injection or malicious instructions.
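As a purely illustrative sketch (not a file from the study's dataset, and with every key name invented for this example), an over-permissive agent configuration of the kind described above might look like:

```json
{
  "agent": "example-coding-agent",
  "permissions": {
    "web_fetch": "allow",
    "file_read": "allow",
    "file_write": "allow",
    "file_delete": "allow",
    "shell_exec": "allow"
  },
  "require_confirmation": false,
  "allowed_paths": ["/"]
}
```

Each grant here is scoped to the entire filesystem and requires no human confirmation, which is exactly the combination of broad access and missing oversight the research flags.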
Critical Risks Exposed
Unrestricted file deletion permissions pose immediate destructive potential. Automated commits to main branches eliminate review gates, allowing attackers to inject harmful code directly into critical repositories. High-risk execution permissions in popular runtimes like Python and Node.js provide a direct path to full environment takeover. Meanwhile, typosquatting in MCP (Model Context Protocol) registries creates fertile ground for impersonation attacks, where developers unknowingly install malicious tools mimicking trusted vendors.
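Risk patterns like these can be checked mechanically. The sketch below is a minimal, illustrative audit of a single configuration file, not UpGuard's methodology; the schema and permission names are invented for this example.

```python
import json

# Permission keys treated as high risk in this hypothetical schema:
# unrestricted deletion, arbitrary code execution, and unreviewed
# commits to the main branch.
HIGH_RISK = {"file_delete", "shell_exec", "auto_commit_main"}

def audit_agent_config(raw: str) -> list[str]:
    """Return a list of findings for one agent config (hypothetical schema)."""
    config = json.loads(raw)
    findings = []
    for name, setting in config.get("permissions", {}).items():
        if name in HIGH_RISK and setting == "allow":
            findings.append(f"high-risk permission granted: {name}")
    # A missing human-in-the-loop gate multiplies the impact of any grant above.
    if not config.get("require_confirmation", True):
        findings.append("agent actions do not require human confirmation")
    return findings

example = '''{
  "permissions": {"file_read": "allow", "file_delete": "allow", "shell_exec": "allow"},
  "require_confirmation": false
}'''
print(audit_agent_config(example))
```

Run against the sample config, the audit reports both high-risk grants and the disabled confirmation gate; a real scanner would also need to handle the many schema variations across agent tools.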
"Security teams lack visibility into what AI agents are touching, exposing, or leaking when developers grant vibe coding tools broad access without oversight," said Greg Pollock, director of Research and Insights at UpGuard. "Despite the best intentions, developers are increasing the potential for security vulnerabilities and exploitation. This is how small workflow shortcuts can escalate into major supply chain and credential exposure problems."
Addressing the Governance Gap
UpGuard's Breach Risk solution helps organizations detect these hidden risks by turning misconfigurations, overly broad permissions, and early threat signals, including dark web mentions, into actionable intelligence. By providing deep visibility into AI-generated changes, access patterns, and data flows, security teams can enforce stricter governance and reduce exposure in AI-assisted development environments.
The full research reports are available from UpGuard.
This analysis highlights a pressing need for better governance of AI-augmented developer workflows in the cybersecurity and DevSecOps space, where rapid adoption of agentic tools must be matched by proportional security controls to prevent emerging supply chain and insider risks.
About UpGuard's Breach Risk
UpGuard's Breach Risk solution is designed to turn hidden shortcuts, such as misconfigurations or overly broad permissions, and early threat signals, like dark web chatter, into clear, actionable visibility. By providing deep insight into AI-generated changes, access patterns, and data flows, UpGuard's Breach Risk solution helps security teams enforce a strict governance framework.
About UpGuard
Founded in 2012, UpGuard is a leader in cybersecurity and risk management. The company's AI-powered platform for cyber risk posture management (CRPM) provides a centralized, actionable view of cyber risk across an organization's vendors, attack surface, and workforce. Trusted by thousands of companies, UpGuard's platform is designed to help security teams manage cyber risk with confidence and efficiency.