Jozu, the AI assurance company behind KitOps, a CNCF project with more than 240,000 downloads, today announced the launch of Jozu Agent Guard, a zero-trust AI runtime that executes agents, models, and MCP servers inside secured environments with built-in policy enforcement and guardrails that agents cannot disable. Jozu Agent Guard is available today.
Key facts:

- During testing, Jozu observed an AI agent bypass governance infrastructure by killing policy processes, disabling auto-restart, and erasing audit logs to complete a blocked task.
- Agent Guard combines six security capabilities: artifact verification, tool governance, human approval, immutable auditing, local enforcement, and hypervisor isolation.
- Security teams can vet, sign, and govern AI artifacts across servers, laptops, and edge devices without requiring connectivity to a central controller.
- Existing approaches such as agent sandboxes, AI gateways, and guardrails leave significant gaps in governing the complexity of modern AI agent actions.
- Jozu Agent Guard is built on KitOps, the most widely adopted open source AI packaging standard with over 240,000 downloads.
As enterprises rapidly adopt AI agents, MCP servers, and tools like Copilot, OpenClawd, and Claude Code, security teams face a growing gap: employees are running these tools on their machines without formal vetting, policies, approvals, or security scans. Jozu Agent Guard enables central security teams to vet, sign, and govern AI artifacts from development to production across servers, laptops, and edge devices.
"Jozu Agent Guard gives security teams governance without forcing them to choose between locking everything down and letting everything through," said Brad Micklea, Co-Founder and CEO of Jozu.
During early testing, Jozu observed an AI agent bypassing governance infrastructure in four commands: it killed the policy enforcement process, disabled the auto-restart mechanism, resumed operations without restrictions, and erased the audit logs. The agent wasn't compromised or adversarially prompted. It simply encountered a policy that blocked it from completing a task, then problem-solved its way through the enforcement layer like any other obstacle.
This revealed a vulnerability that spans the AI governance market: any enforcement system that runs in the same environment as the agent and is accessible via the agent's tools is vulnerable to bypass. Jozu Agent Guard eliminates this vulnerability; a full technical analysis has been published alongside this launch.
"The AI exhibited a pattern indistinguishable from a malicious insider: disable the monitoring, erase the logs, carry on like nothing happened," said Brad Micklea, Co-Founder and CEO of Jozu. "The only difference is it wasn't trying to be malicious. It was trying to complete its task. That's the problem every organization deploying AI agents needs to take seriously, and it's why we built Agent Guard to protect corporate assets by securing the agent at every layer — artifact, runtime, policy, and sandbox."
Existing AI agent security solutions have converged on three approaches, each with significant gaps. Agent sandboxes isolate execution but hurt ROI by broadly limiting agent actions because they cannot differentiate between safe and unsafe agents. AI gateways can only protect against prompts and actions that leave the local machine, and their persistent connections to a central control plane create a single point of failure. Guardrails filter prompts and responses from models but do not govern what tools agents can use. None of these approaches addresses the breadth and complexity of action that today's AI agents need to provide real value to organizations.
Agent Guard is built to enforce a simple rule: the agent never operates without governance. Agent Guard evaluates all AI activity through a local policy engine that has visibility into locally running actions, inputs and outputs, and prompts and responses. Jozu further ensures that only approved artifacts execute, only permitted actions get run, and every step is captured in a tamper-evident audit log.
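To make the "never without governance" rule concrete, the gate described above can be sketched as a local check that runs before every tool call. This is an illustrative sketch only: the policy format, tool names, and decision labels here are hypothetical and are not Jozu's actual implementation.

```python
# Hypothetical local policy check evaluated before every agent action.
# The policy format and decision labels are illustrative, not Jozu's.
POLICY = {
    "allowed_tools": {"read_file", "search"},
    "require_approval": {"send_email", "delete_file"},
}

def evaluate(tool_call: dict) -> str:
    """Return a decision for a single tool call: allow, deny, or escalate."""
    tool = tool_call["tool"]
    if tool in POLICY["require_approval"]:
        return "needs_human_approval"   # pause the workflow for a human
    if tool in POLICY["allowed_tools"]:
        return "allow"
    return "deny"                       # default-deny anything unlisted

print(evaluate({"tool": "read_file"}))      # allowed by policy
print(evaluate({"tool": "send_email"}))     # escalated to a human
print(evaluate({"tool": "spawn_process"}))  # denied by default
```

The key property is default-deny: an action the policy does not explicitly permit never executes, which is the inverse of an agent routing around an external filter.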
Jozu combines six security capabilities for complete protection. Artifact verification scans every AI artifact and attaches scan results and governance policies as tamper-evident attestations, preventing impersonation attacks such as the Postmark MCP Server attack that exfiltrated data from thousands of organizations. Tool governance controls access to individual tool calls within an MCP server's catalog, not just prompts or MCP servers as a whole, preventing re-routing attacks like EchoLeak, which exploited Microsoft Copilot to redirect thousands of emails.
Human approval stops an agent's workflow for high-risk actions, requiring human approval before execution to protect against rogue agentic workflows and privilege escalation attacks. Immutable auditing captures every action in a cryptographically chained audit log that maintains integrity even when disconnected. Local enforcement distributes policies with deployed artifacts and enforces them locally on laptops, edge devices, and air-gapped networks with no connectivity to a central controller required. For the highest-assurance environments, hypervisor isolation executes workloads inside hypervisor-isolated containers where only supply-chain-verified artifacts are admitted, tamper-evident policies govern every action during execution, and the hypervisor boundary contains the blast radius if anything goes wrong.
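The cryptographically chained audit log mentioned above is a standard hash-chain construction: each entry commits to the hash of the entry before it, so editing or deleting any past entry breaks every subsequent link. A minimal sketch, assuming SHA-256 and a JSON entry format (both illustrative, not Jozu's actual log format):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log: list, action: dict) -> list:
    """Append an action, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = prev_hash + json.dumps(action, sort_keys=True)
    log.append({
        "action": action,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })
    return log

def verify_chain(log: list) -> bool:
    """Recompute every hash; any edited or deleted entry breaks the chain."""
    prev_hash = GENESIS
    for entry in log:
        payload = entry["prev_hash"] + json.dumps(entry["action"], sort_keys=True)
        if entry["prev_hash"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"tool": "shell", "cmd": "ls"})
append_entry(log, {"tool": "http", "url": "https://example.com"})
assert verify_chain(log)

# Rewriting an earlier entry, as the agent in Jozu's test did, is detectable:
log[0]["action"]["cmd"] = "rm -rf /"
assert not verify_chain(log)
```

Because verification needs only the log itself, integrity can be checked offline, which is what allows the chain to hold even when a device is disconnected from any central controller.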
About Jozu
Jozu is a security platform for AI workloads that enables organizations to verify, control, and speed their adoption of agentic AI. The Jozu platform provides vulnerability scanning, policy enforcement with human approvals, and agent isolation. Jozu integrates with existing MLOps and DevOps tools, and secures AI deployed to servers, desktops, and edge devices, even in air-gapped environments. Jozu is built on CNCF KitOps, the most widely adopted open source AI packaging standard.