The rapid adoption of large language models (LLMs) has created a significant data privacy gap for enterprises. To address this challenge, Confident Security has launched OpenPCC, an open-source standard designed to protect sensitive information during AI interactions. Built by a team of engineers with backgrounds at Databricks and Apple, the new protocol ensures that confidential data remains secure whether companies use cloud-based or on-premises AI models.
Confident Security released OpenPCC, an open-source standard for securing data in AI model interactions.
It prevents the leakage of prompts, outputs, and logs, protecting PII, PHI, and PCI data.
The framework acts as a security layer between enterprise systems and AI models with minimal code changes.
Key components include SDKs, a compliant inference server, and core privacy libraries for encrypted communication.
The standard is released under open-source licenses to ensure community-driven, neutral governance.
This addresses the critical risk of employees pasting internal data into AI tools, a common security vulnerability.
OpenPCC directly tackles the growing enterprise risk where internal data is pasted into AI tools. Statistics reveal that 78% of employees have engaged in this behavior, with one in five cases involving sensitive personal or regulated data. The standard solves this by operating as a protective layer that keeps all user information fully encrypted and inaccessible to unauthorized parties throughout the AI process. This ensures that confidential data is never exposed, whether companies are using public cloud AI services or their own private deployments.
The release includes a comprehensive suite of tools to establish a new benchmark for AI privacy. The OpenPCC specification and SDKs provide a standardized protocol under the Apache 2.0 license. A compliant inference server demonstrates how to deploy and verify private AI interactions in production. Core privacy libraries, such as 'Two-Way' for encrypted streaming and implementations of Binary HTTP and Oblivious HTTP, form the technical backbone for fully private communication between users and AI systems. By open-sourcing the framework and planning an independent foundation for its stewardship, Confident Security aims to create a universally trusted standard that prevents future restrictive license changes.
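To make the encrypted-transport idea concrete, here is a minimal Go sketch of the pattern that OHTTP-style protocols rely on: the client seals its prompt to the inference server's public key, so relays, proxies, and logs in between only ever handle ciphertext. This is not the OpenPCC SDK or its wire format; the function names, the HKDF info string, and the simplified X25519-plus-ChaCha20-Poly1305 construction are illustrative stand-ins for the HPKE-based encapsulation that Oblivious HTTP actually specifies.

```go
// Minimal sketch (not the OpenPCC SDK): a client seals a prompt to the
// inference server's public key so that everything between them only
// ever sees ciphertext. Real OHTTP uses HPKE (RFC 9180) with a defined
// header layout; this toy version uses an ephemeral X25519 exchange,
// HKDF, and ChaCha20-Poly1305 to show the same shape.
package main

import (
	"crypto/ecdh"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"io"

	"golang.org/x/crypto/chacha20poly1305"
	"golang.org/x/crypto/hkdf"
)

// sealPrompt encrypts a prompt so only the holder of serverPub's private key can read it.
func sealPrompt(serverPub *ecdh.PublicKey, prompt []byte) (ephPub, nonce, ct []byte, err error) {
	// Fresh ephemeral key per request, so ciphertexts are unlinkable to a long-lived client key.
	eph, err := ecdh.X25519().GenerateKey(rand.Reader)
	if err != nil {
		return nil, nil, nil, err
	}
	shared, err := eph.ECDH(serverPub)
	if err != nil {
		return nil, nil, nil, err
	}
	// Derive a one-time AEAD key from the shared secret.
	key := make([]byte, chacha20poly1305.KeySize)
	if _, err := io.ReadFull(hkdf.New(sha256.New, shared, nil, []byte("toy-openpcc-demo")), key); err != nil {
		return nil, nil, nil, err
	}
	aead, err := chacha20poly1305.New(key)
	if err != nil {
		return nil, nil, nil, err
	}
	nonce = make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, nil, nil, err
	}
	ct = aead.Seal(nil, nonce, prompt, nil)
	return eph.PublicKey().Bytes(), nonce, ct, nil
}

func main() {
	// Stand-in for the inference server's keypair; in a real deployment the
	// public key would be fetched and verified out of band, not generated here.
	serverKey, _ := ecdh.X25519().GenerateKey(rand.Reader)

	ephPub, nonce, ct, err := sealPrompt(serverKey.PublicKey(), []byte("summarize Q3 revenue by region"))
	if err != nil {
		panic(err)
	}
	// Only ciphertext plus the ephemeral public key travel over the wire.
	fmt.Printf("ephemeral pub: %x\nnonce: %x\nciphertext: %x\n", ephPub, nonce, ct)
}
```

In a real deployment the server's public key is obtained and verified before any prompt is sealed, rather than generated in the same process as in this demo, and the response path is protected in the same end-to-end fashion.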
The launch of OpenPCC represents a pivotal step towards reconciling the demands of rapid AI innovation with the non-negotiable requirement for data security. By providing a provable and open standard for privacy, it empowers enterprises to adopt AI with confidence, ensuring that sensitive data remains protected throughout the entire lifecycle of an AI interaction.
Confident Security builds provably private infrastructure for AI. The company is the creator of CONFSEC, an enterprise-grade privacy platform, and OpenPCC, an open-source standard based on Apple’s Private Cloud Compute (PCC). Both are thoroughly tested, externally audited, secure, production-ready, and deployable on any cloud or on bare metal. Using a combination of OHTTP, blind signatures, remote attestation, TEEs, TPMs, transparency logs, and more, Confident Security provably guarantees that nobody can see a user’s prompt.
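The "nobody can see the prompt" guarantee hinges on the client refusing to talk to a server it cannot verify. The toy Go sketch below shows only that gating step, with stand-ins for the real evidence: in practice the measurement comes from a TEE attestation report and the allowlist from a signed, publicly auditable transparency log, neither of which is modeled here.

```go
// Minimal sketch (not CONFSEC/OpenPCC code): before releasing a sealed
// prompt, the client checks that the server's reported code measurement
// appears in a list of published, auditable measurements. In the real
// protocol this evidence comes from a TEE attestation report and a signed
// transparency log; here both are stand-ins to show only the gating step.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// measurement hashes a (hypothetical) server release artifact.
func measurement(artifact []byte) string {
	sum := sha256.Sum256(artifact)
	return hex.EncodeToString(sum[:])
}

func main() {
	// Stand-in for entries published to a transparency log.
	trustedReleases := map[string]bool{
		measurement([]byte("inference-server v1.4.2")): true,
	}

	// Stand-in for the measurement carried in an attestation report.
	reported := measurement([]byte("inference-server v1.4.2"))

	if !trustedReleases[reported] {
		fmt.Println("refusing to send prompt: unrecognized server build")
		return
	}
	fmt.Println("server build recognized; prompt may be sealed and sent")
}
```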
The company is led by Jonathan Mortensen, a two-time founder who previously sold companies to BlueVoyant and Databricks. Its products are built by a team with deep expertise in secure systems, AI, infrastructure, and trusted computing, drawn from Google, Apple, Databricks, Red Hat, and HashiCorp.