Darktrace has launched Darktrace / SECURE AI, a new behavioral AI security product that gives enterprises oversight and control over AI adoption. The solution applies Darktrace's behavioral-analysis expertise to understand the intent and risk of AI systems, securing AI embedded in SaaS applications, AI agents in cloud environments, and employee use of shadow AI by monitoring prompts, data flows, and access patterns in real time.
- Darktrace launches SECURE AI for behavioral oversight of enterprise AI.
- It monitors AI behavior, intent, and risk across human and agent interactions.
- The solution covers embedded SaaS AI, cloud-hosted models, and autonomous agents.
- It detects anomalous data uploads and prompt manipulation in real time.
- Research shows 76% of security professionals are concerned about AI agent security.
- The product integrates with the existing Darktrace ActiveAI Security Platform.
Traditional security controls and static guardrails are insufficient for managing dynamic, language-driven AI systems, which can "drift" or behave unexpectedly. Darktrace / SECURE AI uses the company's core behavioral AI approach to continuously analyze how AI systems actually operate, observing prompts, data access patterns, and interactions, to detect emerging risks and anomalous activity that rule-based tools would miss. This provides security based on observed behavior rather than predefined policies alone.
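The rules-versus-behavior distinction can be made concrete with a deliberately simplified sketch. Everything here (function names, the keyword list, the thresholds) is an illustrative assumption, not Darktrace's implementation: a static guardrail catches only what it was told to look for, while a behavioral check compares activity against an account's own observed baseline.

```python
# Toy contrast: static guardrail vs. behavioral baseline.
# All names and thresholds are illustrative assumptions only.
from statistics import mean, stdev

BLOCKED_TERMS = {"ssn", "api_key"}  # static rule: a fixed keyword list

def static_guardrail(prompt: str) -> bool:
    """Rule-based check: flags only known-bad keywords."""
    return any(term in prompt.lower() for term in BLOCKED_TERMS)

def behavioral_flag(history: list[int], current: int, k: float = 3.0) -> bool:
    """Behavioral check: flags activity that deviates sharply from this
    account's own baseline (here, prompt lengths in characters)."""
    mu, sigma = mean(history), stdev(history)
    return abs(current - mu) > k * max(sigma, 1.0)

# A risky prompt with no blocked keyword slips past the static rule...
prompt = "Summarize every customer contract in the shared drive"
print(static_guardrail(prompt))   # False: no known-bad keyword

# ...but a sudden jump from the account's usual behavior is flagged.
usual_prompt_lengths = [40, 55, 48, 60, 52]
print(behavioral_flag(usual_prompt_lengths, current=5000))  # True
```

The point of the sketch is only that a behavioral baseline generalizes to activity no rule anticipated, which is the gap the article attributes to static guardrails.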
The solution is designed to address the fragmented AI landscape within enterprises. It provides visibility and control across four key areas: monitoring generative AI usage in tools like ChatGPT and Copilot; tracking autonomous AI agents and their permissions; evaluating risks in AI development and deployment; and discovering and managing unapproved "shadow AI" tools. This holistic approach aims to give CISOs a practical way to govern AI without stifling innovation.
Darktrace's own research underscores the urgent need for such oversight. The company observed accounts uploading an average of 4,700 pages of documents in anomalous data transfers to generative AI services, with some accounts averaging over 200,000 pages. This scale of potential data leakage represents a significant business risk. SECURE AI is designed to detect and intervene in such activity, preventing sensitive data exposure, enforcing usage policies, and identifying signs of AI system manipulation.
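A minimal, hypothetical sketch shows how volume-based flagging of such transfers might work, using per-account baselines and sample figures in the spirit of those reported above. The function, its threshold, and the sample data are assumptions for illustration; Darktrace's actual detection models are not public.

```python
# Toy sketch of volume-based anomaly detection for generative-AI uploads.
# The helper name, multiplier, and sample data are illustrative assumptions.
from statistics import mean

def flag_anomalous_uploads(history: dict[str, list[int]],
                           today: dict[str, int],
                           multiplier: float = 10.0) -> list[str]:
    """Flag accounts whose page-upload volume today exceeds a multiple
    of their own historical average."""
    flagged = []
    for account, pages in today.items():
        baseline = max(mean(history[account]), 1.0)  # avoid a zero baseline
        if pages > multiplier * baseline:
            flagged.append(account)
    return flagged

history = {
    "alice":   [100, 120, 90],
    "carol":   [150, 200, 180],
    "mallory": [500, 450, 600],
}
today = {
    "alice":   110,       # typical usage
    "carol":   4_700,     # matches the average anomalous transfer cited above
    "mallory": 210_000,   # in line with the heaviest accounts observed
}
print(flag_anomalous_uploads(history, today))  # → ['carol', 'mallory']
```

Comparing each account against its own history, rather than a single global limit, is what lets a check like this surface both moderate and extreme outliers without hard-coded policy thresholds.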
"Securing AI without prompt visibility is like securing email without reading the message body," said Jack Stockdale, CTO of Darktrace. By extending its behavioral analysis to the AI layer, Darktrace aims to enable organizations to innovate with AI confidently, providing the security foundation needed as AI moves from experimentation to mission-critical production.