
A groundbreaking evaluation by LatticeFlow AI, in partnership with SambaNova, demonstrates that leading open-source generative AI models, when fortified with targeted guardrails, can achieve enterprise-grade security comparable to, or exceeding, that of closed models. This quantifiable evidence addresses key barriers to adoption, enabling secure deployment across regulated sectors such as financial services.
Organizations increasingly turn to open-source generative AI for its flexibility and reduced vendor dependency, yet security concerns have hindered widespread adoption. This evaluation provides the first empirical data showing that base models, vulnerable to adversarial inputs, can be transformed into robust solutions. By applying an input filtering layer to block malicious prompts, the models exhibited dramatic security improvements while preserving usability, offering a clear path for enterprises to innovate confidently.
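The input filtering layer described above can be illustrated with a minimal sketch. This is an assumption-laden example, not LatticeFlow AI's actual guardrail (which is not described in detail here): it uses a simple deny-list of common prompt-injection phrasings, whereas a production guardrail would typically rely on a trained classifier.

```python
import re

# Hypothetical deny-list of common prompt-injection phrasings.
# A production guardrail would use a trained classifier, not regexes.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous |prior )?(instructions|prompts)", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
    re.compile(r"reveal (your|the) system prompt", re.I),
]

def is_malicious(prompt: str) -> bool:
    """Return True if the prompt matches a known injection pattern."""
    return any(p.search(prompt) for p in INJECTION_PATTERNS)

def guarded_generate(prompt: str, model_call) -> str:
    """Input-filtering layer: block flagged prompts before they reach the model."""
    if is_malicious(prompt):
        return "Request blocked by input guardrail."
    return model_call(prompt)
```

Because the filter runs before the model is invoked, benign prompts pass through unchanged, which is how such a layer can harden a base model while preserving usability.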
LatticeFlow AI assessed five prominent open foundation models in base and guardrailed configurations, simulating real-world attack scenarios relevant to enterprises. The tests measured resilience against prompt injection and manipulation, ensuring minimal impact on service quality.
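An evaluation of this kind balances two metrics: resilience (the share of adversarial prompts that fail to compromise the model) and usability (the share of benign prompts that are still served helpfully). The sketch below shows one plausible way to score both; the function and judge callbacks are illustrative assumptions, not LatticeFlow AI's published methodology.

```python
def evaluate(model_call, attack_prompts, benign_prompts, is_compromised, is_helpful):
    """Score a model configuration on security and service quality.

    attack_prompts  -- adversarial inputs (e.g. prompt-injection attempts)
    benign_prompts  -- ordinary user requests
    is_compromised  -- judge: did this response leak or comply with the attack?
    is_helpful      -- judge: is this response a useful answer?
    """
    deflected = sum(1 for p in attack_prompts if not is_compromised(model_call(p)))
    served = sum(1 for p in benign_prompts if is_helpful(model_call(p)))
    return {
        "attack_resilience": deflected / len(attack_prompts),
        "benign_pass_rate": served / len(benign_prompts),
    }
```

Running the same harness on a base model and its guardrailed configuration makes the security delta, and any usability cost, directly comparable.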
Key results include:
These outcomes confirm that guardrails enable open models to outperform many closed alternatives in secure, scalable deployments.
Financial institutions and government agencies require auditable, controllable AI systems amid regulatory pressures. This study delivers transparent metrics proving open-source models' viability with proper mitigations, facilitating compliance and risk management. “Our customers — from leading financial institutions to government agencies — are rapidly embracing open-source models and accelerated inference to power their next generation of agentic applications,” said Harry Ault, Chief Revenue Officer at SambaNova. “LatticeFlow AI’s evaluation confirms that with the right safeguards, open-source models are enterprise-ready for regulated industries, providing transformative advantages in cost efficiency, customization, and responsible AI governance.”
“At LatticeFlow AI, we provide the deepest technical controls to evaluate GenAI security and performance,” said Dr. Petar Tsankov, CEO and Co-Founder of LatticeFlow AI. “These insights give AI, risk, and compliance leaders the clarity they’ve been missing, empowering them to move forward with open-source GenAI safely and confidently.”
As AI transitions from pilots to production, these findings empower leaders to balance innovation with security, fostering adoption in high-stakes environments. LatticeFlow AI's COMPL-AI framework, developed with ETH Zurich and INSAIT, further supports EU AI Act compliance, setting a benchmark for evidence-based governance.
LatticeFlow AI sets a new standard in AI governance with deep technical assessments that enable evidence-based decisions and empower enterprises to accelerate their AI advantage. As the creator of COMPL-AI, the world’s first EU AI Act framework for Generative AI developed with ETH Zurich and INSAIT, the company combines Swiss precision with scientific rigor to scale AI governance built on evidence and trust.