
Lambda Expands NVIDIA Collaboration with New AI Infrastructure


March 17, 2026

Lambda, the Superintelligence Cloud, today announced at NVIDIA GTC 2026 a series of significant expansions to its collaboration with NVIDIA, including its role as a launch partner for NVIDIA's Vera CPU platform and NVIDIA STX. The company also revealed a large-scale deployment of NVIDIA Quantum-X800 InfiniBand Photonics Co-Packaged Optics (CPO) networking in an AI factory with 10,000+ NVIDIA Blackwell Ultra GPUs, and the launch of Lambda Bare Metal Instances as a core cloud offering for frontier AI workloads.

Quick Intel

  • Lambda announced it is a launch partner for NVIDIA's Vera CPU platform and NVIDIA STX rack-scale architecture.

  • The company is leading a large-scale deployment of NVIDIA Quantum-X800 InfiniBand Photonics Co-Packaged Optics (CPO) networking in an AI factory with 10,000+ NVIDIA Blackwell Ultra GPUs.

  • Lambda has launched Bare Metal Instances as a core cloud offering, providing direct hardware access for distributed training workloads.

  • The new NVIDIA Vera platform, with custom Olympus CPU cores, enables thousands of sandboxed AI environments to run in parallel for reinforcement learning and agentic workloads.

  • NVIDIA STX brings a modular reference architecture for rack-scale AI storage platforms, accelerating inference and training through optimized KV-cache management.

  • The announcements reflect Lambda's decade-long collaboration with NVIDIA and its commitment to advancing the Superintelligence Cloud platform.

Lambda Expands Collaboration with NVIDIA

"Lambda's mission is to expand humanity's energy and computational capacity," said Stephen Balaban, Co-founder and CEO of Lambda. "Today's announcements advance that mission, enabling the world's top AI teams with the infrastructure they need to do their best work."

As a launch partner for NVIDIA's Vera CPU platform, Lambda is bringing frontier-scale infrastructure to its AI cloud platform. With custom Olympus CPU cores and up to 1.2 TB/s of memory bandwidth, Vera keeps thousands of sandboxed AI environments running in parallel, ensuring NVIDIA GPUs stay fully utilized across reinforcement learning (RL) and agentic workloads. It gives frontier labs access to a compute architecture designed for the demands of next-generation agentic AI systems.
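The pattern described above, many CPU-hosted sandboxed environments running in parallel so that GPU-side learners are never starved, can be illustrated with a minimal sketch. This is not Lambda's or NVIDIA's implementation; the environment transition function is a toy stand-in, and the concurrency primitive is ordinary Python:

```python
from concurrent.futures import ThreadPoolExecutor

def rollout(env_id, steps=100):
    """Stand-in for one sandboxed agent environment: each call simulates
    an isolated episode and returns its trajectory of observations."""
    state = env_id
    trajectory = []
    for _ in range(steps):
        state = (state * 1103515245 + 12345) % 2**31  # toy transition function
        trajectory.append(state % 100)
    return env_id, trajectory

# Many CPU-side sandboxes run concurrently; their trajectories are
# collected into a batch so a GPU-side learner always has experience
# to train on.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(rollout, range(16)))
batch = [trajectory for _, trajectory in results]
```

In a real RL system the rollouts would be isolated processes or containers rather than threads; the point is that CPU throughput and memory bandwidth on the host determine how many such sandboxes can feed the accelerators.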

As the industry shifts toward agentic AI, long-term memory and the processing of massive context windows have become critical inference bottlenecks. Powered by NVIDIA Vera Rubin, BlueField-4, Spectrum-X Networking, and NVIDIA AI software, NVIDIA STX brings a modular reference architecture for rack-scale AI storage platforms, accelerating inference, analytics, and training through next-generation hardware integration and optimized KV-cache management.
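Why KV-cache management matters for long contexts can be seen in a minimal sketch of cached attention decoding. This is an illustrative toy, not the STX design: the cache stores one key/value row per generated token, so memory grows linearly with context length, which is exactly the pressure that tiered storage and cache offload aim to relieve:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class KVCache:
    """Toy KV-cache: each decode step appends one key/value row,
    so memory scales as O(context_length * d_model)."""
    def __init__(self, d_model):
        self.keys = np.empty((0, d_model))
        self.values = np.empty((0, d_model))

    def append(self, k, v):
        # Cached rows are never recomputed on later steps.
        self.keys = np.vstack([self.keys, k])
        self.values = np.vstack([self.values, v])

    def attend(self, q):
        # Attention over the full cached context for one new query token.
        scores = q @ self.keys.T / np.sqrt(self.keys.shape[1])
        return softmax(scores) @ self.values

d = 8
cache = KVCache(d)
rng = np.random.default_rng(0)
for step in range(4):  # four decode steps
    cache.append(rng.standard_normal((1, d)), rng.standard_normal((1, d)))
    out = cache.attend(rng.standard_normal((1, d)))
```

With million-token contexts, this per-token state quickly exceeds GPU memory, which is why moving and managing KV-cache across a storage hierarchy becomes a first-class infrastructure problem.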

Large-Scale Deployments with NVIDIA Quantum-X800 CPO Networking

As AI factories scale into the tens of thousands of AI accelerators, network architecture becomes as important as the accelerators themselves. Co-Packaged Optics (CPO) networking delivers higher efficiency, longer sustained application runtime, and greater resiliency than traditional pluggable transceivers.

Lambda is leading one of the largest deployments of NVIDIA Quantum-X800 InfiniBand Photonics Co-Packaged Optics switches to date, in an AI factory with 10,000+ NVIDIA Blackwell Ultra GPUs. The deployment builds on Lambda's November announcement of early CPO adoption.

"The race to build AI factories isn't won on GPU counts alone. Network architecture is what determines whether those systems can perform at scale," said Dave Salvator, Director of Accelerated Computing at NVIDIA. "Getting this right is what allows AI infrastructure to power services used by hundreds of millions of people around the world."

New Lambda Bare Metal Instances for Frontier AI Workloads

Lambda's new Bare Metal Instances, paired with custom networking and system-level optimizations, provide infrastructure and research teams with direct hardware access and eliminate virtualization overhead for distributed training workloads.

The new offering strengthens Lambda's full-stack AI infrastructure platform, expanding the tools available to frontier labs, enterprises, and hyperscalers. Infrastructure teams gain complete control over the hardware stack while benefiting from Lambda's reliability, uptime, and operational expertise.

Today's announcements reflect Lambda's decade-long collaboration with NVIDIA, as well as the company's commitment to continuously develop its Superintelligence Cloud: a platform engineered for fast deployment, density, and cooling to meet modern AI demands and maximize the intelligence produced per watt.

About Lambda

Lambda, The Superintelligence Cloud, is a leader in AI cloud infrastructure serving tens of thousands of customers.

Founded in 2012 by machine learning engineers published at NeurIPS and ICCV, Lambda builds supercomputers for AI training and inference.

  • AI Infrastructure
  • Superintelligence Cloud
  • AI Factory
  • Cloud Computing