
Open GenAI Models Achieve Enterprise Security with Guardrails


September 23, 2025

A groundbreaking evaluation by LatticeFlow AI, in partnership with SambaNova, demonstrates that leading open-source generative AI models can achieve enterprise-grade security levels comparable to or exceeding those of closed models when fortified with targeted guardrails. This quantifiable evidence addresses key barriers to adoption, enabling secure deployment across regulated sectors such as financial services.

Quick Intel

  • Open GenAI models' security scores rose from as low as 1.8% to as high as 99.6% with guardrails.
  • Evaluation covered five leading open models: Qwen3-32B, DeepSeek V3, DeepSeek R1, and two Llama variants.
  • All models maintained over 98% quality of service post-guardrails.
  • Focuses on cybersecurity risks like prompt injection and manipulation.
  • Supports flexibility, cost savings, and innovation without vendor lock-in.
  • Developed with EU AI Act framework COMPL-AI for governance.

Revolutionizing Open-Source GenAI Security

Organizations increasingly turn to open-source generative AI for its flexibility and reduced vendor dependency, yet security concerns have hindered widespread adoption. This evaluation provides the first empirical data showing that base models, vulnerable to adversarial inputs, can be transformed into robust solutions. By applying an input filtering layer to block malicious prompts, the models exhibited dramatic security improvements while preserving usability, offering a clear path for enterprises to innovate confidently.
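The input-filtering approach described above can be illustrated with a minimal sketch. This is a hypothetical example of a pattern-based prompt screen, not the guardrail LatticeFlow AI actually deployed; the pattern list and function names are assumptions for illustration only.

```python
import re

# Hypothetical injection patterns an input guardrail might screen for.
# Real guardrails use far more sophisticated classifiers and policies.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now (in )?developer mode",
    r"disregard (your|the) system prompt",
    r"reveal (your|the) (system prompt|hidden instructions)",
]

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt matches a known injection pattern."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def guarded_generate(prompt: str, model_fn) -> str:
    """Forward the prompt to the model only if the filter clears it."""
    if is_blocked(prompt):
        return "Request blocked by input guardrail."
    return model_fn(prompt)
```

The key design point is that the filter sits in front of the base model, so malicious prompts are rejected before inference while benign traffic passes through unchanged, which is how the evaluation's models preserved quality of service.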

Evaluation Methodology and Results

LatticeFlow AI assessed five prominent open foundation models in base and guardrailed configurations, simulating real-world attack scenarios relevant to enterprises. The tests measured resilience against prompt injection and manipulation, ensuring minimal impact on service quality.

Key results include:

  • DeepSeek R1: Security score from 1.8% to 98.6%
  • LLaMA-4 Maverick: From 33.5% to 99.4%
  • LLaMA-3.3 70B Instruct: From 51.8% to 99.4%
  • Qwen3-32B: From 56.3% to 99.6%
  • DeepSeek V3: From 61.3% to 99.4%

These outcomes confirm that guardrails enable open models to outperform many closed alternatives in secure, scalable deployments.
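A security score of this kind can be read as the share of simulated attacks a system resists, with quality of service as the share of benign prompts still served correctly. The sketch below shows that reading; the function names and the pass/fail representation are assumptions, not LatticeFlow AI's published methodology.

```python
# Illustrative scoring sketch: each entry records whether the system
# handled one test prompt correctly (blocked an attack, or answered a
# benign prompt well). Scores are simple percentages over the test set.

def security_score(attack_results: list[bool]) -> float:
    """attack_results[i] is True if adversarial prompt i was resisted."""
    return 100.0 * sum(attack_results) / len(attack_results)

def quality_of_service(benign_results: list[bool]) -> float:
    """benign_results[i] is True if benign prompt i was answered correctly."""
    return 100.0 * sum(benign_results) / len(benign_results)

# Example: 993 of 1000 simulated attacks resisted -> 99.3% security score
results = [True] * 993 + [False] * 7
print(f"{security_score(results):.1f}%")  # -> 99.3%
```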

Implications for Regulated Industries

Financial institutions and government agencies require auditable, controllable AI systems amid regulatory pressures. This study delivers transparent metrics proving open-source models' viability with proper mitigations, facilitating compliance and risk management. “Our customers — from leading financial institutions to government agencies — are rapidly embracing open-source models and accelerated inference to power their next generation of agentic applications,” said Harry Ault, Chief Revenue Officer at SambaNova. “LatticeFlow AI’s evaluation confirms that with the right safeguards, open-source models are enterprise-ready for regulated industries, providing transformative advantages in cost efficiency, customization, and responsible AI governance.”

“At LatticeFlow AI, we provide the deepest technical controls to evaluate GenAI security and performance,” said Dr. Petar Tsankov, CEO and Co-Founder of LatticeFlow AI. “These insights give AI, risk, and compliance leaders the clarity they’ve been missing, empowering them to move forward with open-source GenAI safely and confidently.”

As AI transitions from pilots to production, these findings empower leaders to balance innovation with security, fostering adoption in high-stakes environments. LatticeFlow AI's COMPL-AI framework, developed with ETH Zurich and INSAIT, further supports EU AI Act compliance, setting a benchmark for evidence-based governance.

About LatticeFlow AI

LatticeFlow AI sets a new standard in AI governance with deep technical assessments that enable evidence-based decisions and empower enterprises to accelerate their AI advantage. As the creator of COMPL-AI, the world’s first EU AI Act framework for Generative AI developed with ETH Zurich and INSAIT, the company combines Swiss precision with scientific rigor to scale AI governance built on evidence and trust.

Tags: Gen AI Security, Open Source AI, LatticeFlow AI, AI Guardrails, Enterprise AI