Liquid AI’s LFM2 Sets New Standard for Edge AI Performance

By Business Wire | July 11, 2025

Liquid AI has launched its next-generation Liquid Foundation Models (LFM2), redefining performance standards for small foundation models in edge AI applications. Open-sourced on Hugging Face, LFM2 delivers unmatched speed, energy efficiency, and quality, surpassing global competitors like Alibaba’s Qwen3 and Google’s Gemma. Designed for on-device deployment, these models unlock new possibilities for real-time AI across industries.

Quick Intel

  • LFM2 achieves 200% higher throughput and lower latency than Qwen3 and Gemma on CPU.

  • Outperforms competitors in instruction-following and function calling.

  • 300% improved training efficiency over previous LFM versions.

  • Open-sourced on Hugging Face, with weights available for testing.

  • Optimized for edge devices like phones, laptops, and robots.

  • Supports secure, private AI with millisecond latency for real-time use.

Unprecedented Speed and Efficiency

LFM2’s hybrid architecture, built with structured, adaptive operators, delivers 200% higher throughput and lower latency compared to Qwen3, Gemma 3n Matformer, and other autoregressive models on CPU. “At Liquid, we build best-in-class foundation models with quality, latency, and memory efficiency in mind,” said Ramin Hasani, co-founder and CEO of Liquid AI. This efficiency makes LFM2 ideal for resource-constrained environments like edge devices, enabling millisecond latency and offline resilience.

Superior Performance in AI Agent Capabilities

LFM2 excels in instruction-following and function calling, critical for building reliable AI agents. Evaluations show LFM2-1.2B performs competitively with Qwen3-1.7B, despite being 47% smaller, while LFM2-700M outperforms Gemma 3 1B IT. These capabilities position LFM2 as a top choice for local and edge use cases, from consumer electronics to robotics.
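Function calling of the kind evaluated here generally works by having the model emit a structured tool call (for example, JSON) that the host application parses and executes locally. The sketch below illustrates that host-side dispatch loop in a model-agnostic way; the tool name, JSON shape, and stubbed weather function are illustrative assumptions, not LFM2's actual tool-call schema.

```python
import json

# Hypothetical tool the host application exposes -- stubbed for illustration.
def get_weather(city: str) -> str:
    return f"22°C and clear in {city}"

# Registry mapping tool names (as the model would emit them) to callables.
TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Parse a JSON tool call emitted by the model and run the matching tool."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# A model strong at function calling emits valid, parseable calls like this:
result = dispatch('{"name": "get_weather", "arguments": {"city": "Boston"}}')
print(result)
```

Reliable instruction-following matters here precisely because a malformed or hallucinated tool call breaks this parse-and-execute loop, which is why the benchmark results above are relevant for agent builders.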

Cost-Effective and Scalable AI

With a 300% improvement in training efficiency over prior LFMs, LFM2 offers a cost-effective solution for developing general-purpose AI systems. Its design supports flexible deployment on CPUs, GPUs, and NPUs, making it accessible for devices like smartphones, laptops, and wearables. Liquid AI’s integration of LFM2 into its Edge AI platform and an upcoming iOS-native app further enhances its applicability.

Privacy and Market Potential

By shifting AI from cloud to on-device, LFM2 ensures data-sovereign privacy, a critical feature for industries like finance, e-commerce, and cybersecurity. Liquid AI projects the market for compact foundation models to reach $1 trillion by 2035, driven by demand in high-growth sectors. The company is already engaging with Fortune 500 clients to deploy these ultra-efficient models.

Liquid AI’s LFM2 marks a significant advancement in edge AI, offering unmatched performance and efficiency. By open-sourcing these models, Liquid AI empowers developers and enterprises to build fast, private, and cost-effective AI solutions for the future.


About Liquid AI

Liquid AI is at the forefront of artificial intelligence innovation, developing foundation models that set new standards for performance and efficiency. With the mission to build efficient, general-purpose AI systems at every scale, Liquid AI continues to push the boundaries of how much intelligence can be packed into phones, laptops, cars, satellites, and other devices.
