SimSpace, the AI Proving Grounds for cybersecurity, has released a new research report titled "The State of Agentic Cybersecurity." The findings reveal a significant "confidence gap" within the industry: while 78% of security leaders express high confidence in their defenses, proprietary Defensive Security Readiness (DSR) data shows that teams' actual readiness scores often sit near 30% during security exercises. As AI agents become more deeply embedded in Security Operations Centers (SOCs), the report highlights a critical lack of rigorous testing, suggesting that many organizations are deploying AI tools without a full understanding of their performance in realistic, mission-critical conditions.
- 78% of security leaders report high confidence, yet actual readiness scores average near 30%.
- 73% of organizations currently utilize AI agents in their SOC at moderate to high levels.
- Only 29% of organizations conduct continuous simulation testing to validate AI performance.
- 44% of companies test their security postures biannually, rarely, or not at all.
- Initial AI tool deployment can cause a temporary 10-20% drop in performance before gains are realized.
- Frequent, realistic simulations can improve readiness scores by 20-50% over four to six iterations.
The report underscores that while assistive AI is increasingly common, enterprise leaders have yet to establish a framework for developing trust in fully autonomous agents. Currently, many executives rely on "humans in the loop" to correct erratic AI behavior rather than conducting rigorous pre-deployment validation. This reliance on one-off tabletop exercises and traditional certification courses is proving insufficient against modern, AI-driven threats. SimSpace argues that for autonomous solutions to reach their full potential, they must first be proven in environments where it is safe to fail.
"Assistive AI agents are mostly what's being deployed to production today; they're not fully autonomous agents," said Lee Rossey, CTO and Co-Founder of SimSpace. "Enterprise executives have not yet focused on and/or figured out how to develop trust in agentic AI before deploying it to production."
To bridge the gap between perceived and actual security, the research suggests a shift toward continuous validation. Because AI operates around the clock, testing methodologies must mirror that continuity. The report encourages leaders to move beyond simple alert metrics and instead focus on detection success, response accuracy, and decision quality. Organizations that adopt "AI Proving Grounds"—high-fidelity virtualized environments—can train human operators and AI agents together, reaching high performance levels much faster than those using episodic testing.
"Autonomous agentic solutions are what's coming next, and enterprise executives are going to want to have complete trust in them to perform appropriately in a wide variety of situations before they get deployed to production," continued Rossey.
SimSpace utilizes its proprietary Defensive Security Readiness (DSR) metric to provide a quantitative look at a team's ability to defend against realistic threats. The data indicates that each full simulation exercise drives a 3-5 percentage point improvement in DSR scores. By establishing a baseline through these metrics, CISOs can better justify AI investments and ensure that their agentic security layers are providing the measurable protection required for global enterprise operations.
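To illustrate how the reported figures compound, here is a minimal sketch of that arithmetic. The baseline (~30%) and the 3-5 point per-exercise gain come from the report's stated ranges; the function name and the simple linear, capped model are illustrative assumptions, not SimSpace's actual DSR methodology.

```python
def project_dsr(baseline: float, gain_per_exercise: float, exercises: int) -> float:
    """Project a DSR score after repeated full simulation exercises,
    assuming a constant per-exercise gain, capped at 100%.
    This linear model is a hypothetical illustration only."""
    return min(100.0, baseline + gain_per_exercise * exercises)

# Low end of the reported ranges: 4 exercises at +3 points each.
print(project_dsr(30.0, 3.0, 4))  # 42.0

# High end: 6 exercises at +5 points each.
print(project_dsr(30.0, 5.0, 6))  # 60.0
```

Even under the conservative end of these assumptions, a team starting near the 30% baseline would see a measurable, reportable trajectory, which is the kind of quantitative baseline the report argues CISOs need to justify AI investments.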
About SimSpace
SimSpace is the realistic cyber simulation infrastructure for continuously training, testing, and validating AI agents. By enabling AI agents to work together with human operators in an intelligent cyber range, SimSpace serves as the AI Proving Grounds for elite cyber teams.