Terra Security, a pioneer in agentic Continuous Threat Exposure Management (CTEM), today disclosed findings from months of real-world adversarial testing across AI-powered applications, copilots, chat interfaces, and AI-generated code workflows. The research identified recurring exploitable vulnerability patterns unique to AI systems—differing from traditional software flaws—including prompt injection attacks, system prompt leakage, cross-tenant data exposure, privilege escalation via tool execution chains, reverse shell execution, broken authorization in AI-generated processes, and cross-site scripting with authentication bypass. In 100% of tested applications embedding AI chats or copilots, AI-related security vulnerabilities were discovered.
Testing across applications built with AI coding tools like Claude Code, rapid AI app-generation platforms, and enterprise software with embedded AI copilots revealed patterns distinct from conventional software security flaws. Issues such as CVE-2026-25724 (discovered in Anthropic’s Claude Code) highlight code-level risks, but Terra’s research shows that exploitability often stems from complex interactions in deployed applications—where AI agents interact with permissions, pipelines, and business logic in unintended ways.
The new module enables ongoing, agentic simulation of attacks tailored to AI systems. It provides real-time visibility into vulnerabilities introduced during rapid AI development cycles, helping security teams shift from reactive remediation to proactive exposure management. This aligns with the accelerating pace of AI adoption, where small validation gaps can quickly scale across environments.
“Traditional scanners look for known patterns,” said Gal Malachi, CTO and Co-Founder of Terra Security. “What we’re seeing with AI-powered systems is contextual vulnerabilities in cases where the model behaves as designed, but the surrounding application or permission model allows unintended outcomes. A prompt injection may not resemble a conventional code flaw, yet it can still expose sensitive data or trigger unsafe actions if safeguards are incomplete.”
In response, Terra Security has released a new continuous AI penetration testing module within its platform. This capability enables security researchers to simulate attacks on AI systems at scale, matching the velocity of AI development and use cases while providing ongoing visibility into real-world exploitability in production environments.
“Some of these issues did not stem from malicious intent or overt misconfiguration, but from complex interactions between AI agents, application logic, and operational tooling,” said Shahar Peled, CEO and Co-founder of Terra Security. “With AI systems committing code with vulnerabilities, modifying configurations, and interacting with pipelines, organizations need visibility into real-world exploitability in production environments, not just theoretical risk. We are proud to be able to provide the means for pentesters to monitor these actions continuously using the Terra platform.”
About Terra Security
Terra Security provides Agentic AI-Powered continuous penetration testing aligned to code changes and evolving attack surfaces, combining a swarm of trained AI Agents with human supervision for safety and control. The company works with Fortune 500 organizations to ensure every attack surface is covered across the web, AI, internal apps, APIs, mobile, networks, and the cloud. Winner of the 2025 CrowdStrike/AWS/NVIDIA Cybersecurity Accelerator, and backed by SYN Ventures, Felicis, Lama Partners, SVCI, Underscore VC, Dell Technologies Capital, and Capital One Ventures. The company is based in the U.S. and Tel Aviv.