CalypsoAI’s Insider AI Threat Report, based on a survey of over 1,000 U.S. office workers, uncovers alarming trends in AI misuse within workplaces. The report highlights how employees, driven by efficiency, are bypassing company policies, posing significant security risks across industries.
52% of U.S. employees are willing to use AI in violation of company policies.
45% trust AI more than coworkers; 38% prefer AI managers.
34% would quit if employers banned AI usage.
28% have accessed sensitive data using AI tools.
Finance (60%) and security (42%) industries show high violation rates.
C-suite and entry-level workers often lack AI agent knowledge.
The report reveals that 87% of U.S. workers operate under an AI policy, reflecting the growing adoption of enterprise AI. However, 52% are willing to violate these policies to streamline tasks, and 25% have used AI without checking whether it was permitted. This disregard for the rules underscores a critical need for stronger AI governance. "These numbers should be a wake-up call," said Donnchadh Casey, CEO of CalypsoAI. "We're seeing executives racing to implement AI without fully understanding the risks, frontline employees using it unsupervised, and even trusted security professionals breaking their own rules. We know inappropriate use of AI can be catastrophic for enterprises, and this isn't a future threat – it's already happening inside organizations today."
A striking 45% of employees trust AI more than their coworkers, while 38% would prefer an AI manager over a human. This shift in trust extends to the C-suite, where 50% favor AI managers, though 34% struggle to distinguish AI agents from human employees. Additionally, 38% of executives admit they don’t understand what an AI agent is, highlighting a knowledge gap at leadership levels.
The misuse of AI extends to sensitive data, with 28% of workers admitting to using AI to access restricted information and another 28% submitting proprietary company data to AI systems for task completion. At the C-suite level, 35% have engaged in similar practices, amplifying risks to organizational security and intellectual property.
Highly regulated sectors face particular challenges. In finance, 60% of workers admit to violating AI policies, and a third have accessed restricted data via AI. In the security industry, 42% knowingly bypass policies, and 58% trust AI more than colleagues. Healthcare fares little better, with only 55% of workers complying with AI policies and 27% preferring AI supervisors. These trends signal a pressing need for robust AI security measures.
Entry-level employees are particularly vulnerable: 37% express no guilt over violating AI policies, and 21% cite unclear rules as a reason for non-compliance. Additionally, 33% do not know what an AI agent is, pointing to a need for better education and clearer policies to mitigate risks.
CalypsoAI’s report emphasizes the urgent need for organizations to redefine AI security, focusing not only on technology but also on employee behavior and trust. By addressing these gaps, companies can safeguard against the growing risks of AI misuse in the workplace.
CalypsoAI provides the only full-lifecycle platform to secure AI models and applications at the inference layer, deploying Agentic Warfare™ to protect organizations from evolving adversaries. Trusted by global enterprises including Palantir and SGK, CalypsoAI's industry-leading team of experts is doing the hard miles to ensure security keeps pace with AI innovation.