
Open GenAI Models Achieve Enterprise Security with Guardrails

by: Business Wire | September 23, 2025

A groundbreaking evaluation by LatticeFlow AI, in partnership with SambaNova, demonstrates that leading open-source generative AI models can achieve enterprise-grade security levels comparable to or exceeding closed models when fortified with targeted guardrails. This quantifiable evidence addresses key barriers to adoption, enabling secure deployment across regulated sectors like financial services.

Quick Intel

  • Security scores of open GenAI models rose from as low as 1.8% to as high as 99.6% with guardrails.
  • Evaluation tested five leading open models: Qwen3-32B, DeepSeek V3, DeepSeek R1, and two Llama variants.
  • All models maintained over 98% quality of service post-guardrails.
  • Focuses on cybersecurity risks like prompt injection and manipulation.
  • Supports flexibility, cost savings, and innovation without vendor lock-in.
  • Developed with EU AI Act framework COMPL-AI for governance.

Revolutionizing Open-Source GenAI Security

Organizations increasingly turn to open-source generative AI for its flexibility and reduced vendor dependency, yet security concerns have hindered widespread adoption. This evaluation provides the first empirical data showing that base models, vulnerable to adversarial inputs, can be transformed into robust solutions. By applying an input filtering layer to block malicious prompts, the models exhibited dramatic security improvements while preserving usability, offering a clear path for enterprises to innovate confidently.
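The input filtering layer described above can be pictured as a check that runs before any prompt reaches the model. The sketch below is purely illustrative, assuming a simple pattern-based blocklist; the function names and patterns are hypothetical, and LatticeFlow AI's actual guardrail is not public and would rely on far more sophisticated detection than a static regex list.

```python
import re

# Illustrative patterns for common prompt-injection phrasings.
# A production guardrail would use trained classifiers, not a fixed list.
BLOCKLIST_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"reveal (your|the) system prompt",
    r"disregard .{0,40}(rules|guidelines|policy)",
]

def input_guardrail(prompt: str) -> bool:
    """Return True if the prompt may pass to the model, False if blocked."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in BLOCKLIST_PATTERNS)

def guarded_generate(prompt: str, model_call) -> str:
    # Filter the input before it ever reaches the model.
    if not input_guardrail(prompt):
        return "Request blocked by input guardrail."
    return model_call(prompt)
```

Because the filter sits in front of the model, benign prompts flow through unchanged, which is consistent with the evaluation's finding that quality of service stayed above 98% after guardrails were applied.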

Evaluation Methodology and Results

LatticeFlow AI assessed five prominent open foundation models in base and guardrailed configurations, simulating real-world attack scenarios relevant to enterprises. The tests measured resilience against prompt injection and manipulation, ensuring minimal impact on service quality.

Key results include:

  • DeepSeek R1: Security score from 1.8% to 98.6%
  • LLaMA-4 Maverick: From 33.5% to 99.4%
  • LLaMA-3.3 70B Instruct: From 51.8% to 99.4%
  • Qwen3-32B: From 56.3% to 99.6%
  • DeepSeek V3: From 61.3% to 99.4%

These outcomes confirm that guardrails enable open models to outperform many closed alternatives in secure, scalable deployments.

Implications for Regulated Industries

Financial institutions and government agencies require auditable, controllable AI systems amid regulatory pressures. This study delivers transparent metrics proving open-source models' viability with proper mitigations, facilitating compliance and risk management. “Our customers — from leading financial institutions to government agencies — are rapidly embracing open-source models and accelerated inference to power their next generation of agentic applications,” said Harry Ault, Chief Revenue Officer at SambaNova. “LatticeFlow AI’s evaluation confirms that with the right safeguards, open-source models are enterprise-ready for regulated industries, providing transformative advantages in cost efficiency, customization, and responsible AI governance.”

“At LatticeFlow AI, we provide the deepest technical controls to evaluate GenAI security and performance,” said Dr. Petar Tsankov, CEO and Co-Founder of LatticeFlow AI. “These insights give AI, risk, and compliance leaders the clarity they’ve been missing, empowering them to move forward with open-source GenAI safely and confidently.”

As AI transitions from pilots to production, these findings empower leaders to balance innovation with security, fostering adoption in high-stakes environments. LatticeFlow AI's COMPL-AI framework, developed with ETH Zurich and INSAIT, further supports EU AI Act compliance, setting a benchmark for evidence-based governance.

About LatticeFlow AI

LatticeFlow AI sets a new standard in AI governance with deep technical assessments that enable evidence-based decisions and empower enterprises to accelerate their AI advantage. As the creator of COMPL-AI, the world’s first EU AI Act framework for Generative AI developed with ETH Zurich and INSAIT, the company combines Swiss precision with scientific rigor to scale AI governance built on evidence and trust.

  • Gen AI Security • Open Source AI • LatticeFlow AI • AI Guardrails • Enterprise AI