Pillar Security, a leading AI security firm, has uncovered a critical supply chain vulnerability in the GGUF model format, termed "Poisoned GGUF Templates." Announced on July 9, 2025, in Tel Aviv, Israel, this novel attack vector targets the AI inference pipeline, enabling attackers to embed malicious instructions that compromise AI outputs while evading existing security measures.
Pillar Security identifies "Poisoned GGUF Templates" attack on Hugging Face.
Malicious instructions in GGUF files bypass AI security controls.
Over 1.5 million GGUF models on public platforms are at risk.
Attack exploits trust in community-sourced AI models, creating backdoors.
Responsible disclosure to Hugging Face and LM Studio in June 2025.
Mitigation requires auditing GGUF files and enhancing template security.
The "Poisoned GGUF Templates" attack manipulates the GPT-Generated Unified Format (GGUF), a widely used standard for AI deployment with over 1.5 million files on platforms like Hugging Face. Attackers embed malicious instructions within chat templates, which define the conversational structure of large language models (LLMs). “What makes this attack so effective is the disconnect between what's shown in the repository interface and what's actually running on users’ machines,” said Ariel Fogel, who led the research at Pillar Security. This allows a persistent compromise that remains invisible during casual testing and evades most security tools.
The attack exploits trust in community-sourced AI models by embedding conditional malicious instructions in GGUF file chat templates. Uploaded to public repositories, these poisoned models display clean templates online while containing malicious versions in downloaded files. “We’re still in the early days of understanding the full range of AI supply chain security considerations,” said Ziv Karliner, CTO and Co-founder of Pillar Security. “Our research shows how the trust that powers platforms and open-source communities—while essential to AI progress—can also open the door to deeply embedded threats.” A single compromised model can impact thousands of downstream applications, redefining the AI supply chain as a critical attack surface.
This vulnerability targets the AI inference pipeline, a blind spot in current security architectures. Most defenses focus on validating inputs or filtering outputs, but the attack operates within the trusted inference environment, bypassing system prompts and runtime monitoring. By embedding backdoors directly into model files, attackers eliminate the need for clever prompt engineering, making the compromise stealthy and persistent across user interactions.
Pillar Security responsibly disclosed the findings to Hugging Face and LM Studio in June 2025. The platforms indicated that they do not classify this as a direct vulnerability, placing the onus on users to vet models. This response underscores a significant accountability gap in the AI ecosystem, highlighting the need for improved security practices in model distribution and deployment.
To counter this threat, Pillar Security recommends immediate action. Organizations should audit GGUF files for suspicious template patterns, such as unexpected conditional logic or hidden instructions. Moving beyond prompt-based controls, comprehensive template and pipeline security is essential. Long-term strategies include implementing model provenance and cryptographic signing to ensure only verified templates are used. The Pillar platform actively discovers and flags malicious GGUF files, enhancing protection against such risks.
Pillar Security’s discovery highlights the evolving challenges in AI supply chain security. By addressing the "Poisoned GGUF Templates" vulnerability, organizations can strengthen their defenses, ensuring safer adoption of AI technologies while navigating the complexities of open-source model ecosystems.
Pillar Security is a leading AI-security platform, providing companies full visibility and control to build and run secure AI systems. Founded by experts in offensive and defensive cybersecurity, Pillar secures the entire AI lifecycle - from development to deployment - through AI Discovery, AI Security Posture Management (AI-SPM), AI Red Teaming, and Adaptive Runtime Guardrails. Pillar empowers organizations to prevent data leakage, neutralize AI-specific threats, and comply with evolving regulations.