
UpGuard: 20% of Devs Give AI Agents Unrestricted File Access

February 5, 2026

UpGuard, a leader in cybersecurity and risk management, has released new research exposing significant security vulnerabilities in developer workflows involving AI code agents. By analyzing more than 18,000 publicly available AI agent configuration files from GitHub repositories, the study revealed that one in five developers are granting these agents unrestricted, high-risk permissions without human oversight or approval mechanisms.

Quick Intel

  • 20% of analyzed configurations allow AI agents unrestricted file deletion capabilities, enabling a single prompt injection or error to potentially wipe entire projects or systems.
  • Nearly 20% of developers permit AI agents to automatically save and commit changes directly to the main repository branch, bypassing human code review and opening pathways for malicious code insertion into production or open-source projects.
  • 14.5% of configs grant arbitrary Python code execution permissions, and 14.4% allow the same for Node.js, effectively handing attackers full control over the developer's environment upon successful exploitation.
  • Extensive typosquatting in the Model Context Protocol (MCP) ecosystem was identified, with up to 15 untrusted lookalike servers for every legitimate vendor-provided one in public registries.
  • These overly permissive setups create governance blind spots, slowing incident response and increasing risks of credential theft, data leakage, and supply chain compromise.
  • UpGuard emphasizes that while AI coding assistants improve efficiency, the lack of visibility into what agents access, modify, or expose represents a growing insider-threat vector.
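The risky settings flagged above typically live in plain agent configuration files, which makes them straightforward to audit automatically. Below is a minimal, hypothetical sketch of such an audit; the configuration schema (keys like `allow_file_delete`) is invented for illustration, since real agent config formats vary by tool:

```python
import json

# Hypothetical mapping of config flags to the high-risk permissions
# described in UpGuard's findings. These key names are illustrative,
# not taken from any specific agent framework.
HIGH_RISK_FLAGS = {
    "allow_file_delete": "unrestricted file deletion",
    "auto_commit_to_main": "automatic commits to the main branch",
    "allow_python_exec": "arbitrary Python execution",
    "allow_node_exec": "arbitrary Node.js execution",
}

def audit_agent_config(raw_json: str) -> list[str]:
    """Return human-readable findings for risky settings in a config."""
    config = json.loads(raw_json)
    permissions = config.get("permissions", {})
    return [
        description
        for flag, description in HIGH_RISK_FLAGS.items()
        if permissions.get(flag) is True
    ]

example = '{"permissions": {"allow_file_delete": true, "auto_commit_to_main": false}}'
print(audit_agent_config(example))  # ['unrestricted file deletion']
```

Running a check like this in CI against every agent config in a repository is one low-cost way to surface the "one in five" class of misconfiguration before it reaches production.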

The Hidden Dangers of Over-Permissive AI Agents

Developers increasingly rely on AI tools to accelerate coding tasks, but many configurations grant broad permissions—including web downloads, file read/write/delete operations, and arbitrary code execution—without requiring confirmation or limiting scope. This "vibe coding" approach prioritizes speed over security, turning helpful assistants into potential persistent threats when compromised through prompt injection or malicious instructions.
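The mitigation the research points toward is a human-in-the-loop gate: destructive actions pause for explicit approval instead of executing automatically. A minimal sketch, with invented action names not tied to any specific agent framework:

```python
# Actions treated as destructive and therefore requiring confirmation.
# This set is an illustrative assumption, not a standard taxonomy.
DESTRUCTIVE_ACTIONS = {"delete_file", "git_push_main", "exec_code"}

def run_action(action: str, approve) -> str:
    """Execute an agent action, requiring approval for destructive ones.

    `approve` is a callable (e.g. a CLI prompt or review UI hook) that
    returns True only when a human has explicitly confirmed the action.
    """
    if action in DESTRUCTIVE_ACTIONS and not approve(action):
        return f"blocked: {action} requires human approval"
    return f"executed: {action}"

# Deny-by-default reviewer: nothing destructive runs unattended.
print(run_action("read_file", lambda a: False))    # executed: read_file
print(run_action("delete_file", lambda a: False))  # blocked: delete_file requires human approval
```

The design choice matters: the gate defaults to denial, so a compromised or confused agent degrades to an inconvenience rather than a wiped repository.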

Critical Risks Exposed

Unrestricted file deletion permissions pose immediate destructive potential. Automated commits to main branches eliminate review gates, allowing attackers to inject harmful code directly into critical repositories. High-risk execution permissions in popular runtimes like Python and Node.js provide a direct path to full environment takeover. Meanwhile, typosquatting in MCP registries creates fertile ground for impersonation attacks, where developers unknowingly install malicious tools mimicking trusted vendors.
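The typosquatting finding also lends itself to automated screening: before installing an MCP server, compare its name against a trusted-vendor list and flag near misses. A sketch using the Python standard library's `difflib`; the trusted-server names and similarity threshold here are assumptions for illustration:

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of legitimate vendor-provided MCP servers.
TRUSTED_SERVERS = {"github-mcp", "slack-mcp", "postgres-mcp"}

def flag_lookalikes(candidate: str, threshold: float = 0.8) -> list[str]:
    """Return trusted names the candidate resembles without matching exactly."""
    if candidate in TRUSTED_SERVERS:
        return []  # exact match to a trusted server: not a lookalike
    return [
        trusted
        for trusted in sorted(TRUSTED_SERVERS)
        if SequenceMatcher(None, candidate, trusted).ratio() >= threshold
    ]

print(flag_lookalikes("github-mpc"))  # ['github-mcp']
```

A transposed-letter name like `github-mpc` scores above the threshold against `github-mcp` and gets flagged, which is exactly the impersonation pattern the research describes in public registries.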

"Security teams lack visibility into what AI agents are touching, exposing, or leaking when developers grant vibe coding tools broad access without oversight," said Greg Pollock, director of Research and Insights at UpGuard. "Despite the best intentions, developers are increasing the potential for security vulnerabilities and exploitation. This is how small workflow shortcuts can escalate into major supply chain and credential exposure problems."

Addressing the Governance Gap

UpGuard's Breach Risk solution helps organizations detect these hidden risks by turning misconfigurations, overly broad permissions, and early threat signals, including dark web mentions, into actionable intelligence. By providing deep visibility into AI-generated changes, access patterns, and data flows, security teams can enforce stricter governance and reduce exposure in AI-assisted development environments.

The full research reports are available from UpGuard.

This analysis highlights a pressing need for better governance in AI-augmented developer workflows within the cybersecurity and DevSecOps space, where rapid adoption of agentic tools must be matched by proportional security controls to prevent emerging supply chain and insider risks.


About UpGuard's Breach Risk

UpGuard's Breach Risk solution is designed to turn hidden shortcuts, such as misconfigurations and overly broad permissions, along with early threat signals like dark web chatter, into clear, actionable visibility. By providing deep insight into AI-generated changes, access patterns, and data flows, it helps security teams enforce a strict governance framework.


About UpGuard

Founded in 2012, UpGuard is a leader in cybersecurity and risk management. The company's AI-powered platform for cyber risk posture management (CRPM) provides a centralized, actionable view of cyber risk across an organization's vendors, attack surface, and workforce. Trusted by thousands of companies, UpGuard's platform is designed to help security teams manage cyber risk with confidence and efficiency.
