
Lambda Adopts NVIDIA Co-Packaged Optics for AI Factories


November 21, 2025

The relentless scaling of AI models is pushing data center infrastructure to its limits, with networking emerging as a critical bottleneck. To address this, Lambda has announced it is among the first AI infrastructure providers to integrate NVIDIA's groundbreaking silicon photonics-based co-packaged optics networking. This technology is poised to redefine the performance and efficiency of large-scale GPU clusters, known as AI factories.

Quick Intel

  • Lambda is an early adopter of NVIDIA's co-packaged optics (CPO) for its AI infrastructure solutions.

  • NVIDIA's Quantum-X Photonics technology delivers 3.5x better power efficiency and 10x greater resiliency.

  • Co-packaged optics integrate optical components directly onto network switches, overcoming a major AI scaling bottleneck.

  • This results in increased compute per watt, enhanced reliability, and faster AI model training and inference.

  • The simplified design reduces operational costs and accelerates the deployment of massive GPU clusters.

  • This technology is foundational for building the next generation of scalable AI factories.

Solving the AI Networking Bottleneck

As AI training runs expand to encompass hundreds of thousands of GPUs, the network fabric connecting them has become as critical as the computational power of the GPUs themselves. Traditional networking approaches built on pluggable optical transceivers are struggling to keep pace with the demands of frontier AI workloads. NVIDIA's co-packaged optics technology addresses this directly by integrating optical components next to the network switch silicon, which shortens electrical signal paths, improves signal integrity, and reduces power consumption.

The Tangible Benefits of Co-Packaged Optics

The shift to this new architecture delivers measurable gains for large-scale AI deployments. NVIDIA reports that its Quantum-X Photonics technology provides 3.5x better power efficiency, 5x longer sustained application runtime, and 10x greater resiliency than traditional pluggable-transceiver networking. Ken Patchett, VP of Data Center Infrastructure at Lambda, emphasized the operational impact, stating, “By integrating optical components directly next to the network switches, we believe our customers can deploy AI infrastructure faster while significantly reducing operational costs – essential as we continue to scale to support frontier AI workloads.”

Building the Foundation for AI Factories

This innovation is central to the concept of the AI factory—a purpose-built infrastructure designed for generating intelligence at a massive scale. Gilad Shainer, senior vice president of networking at NVIDIA, explained, “By integrating silicon photonics directly into switches, NVIDIA Quantum-X silicon photonics networking switches enable the kind of scalable network fabric that makes massive-GPU AI factories possible.” The simplified design of CPO also means fewer components to install and maintain, streamlining operations for providers like Lambda.

The early adoption of NVIDIA's co-packaged optics solidifies Lambda's position at the forefront of AI infrastructure. This move, building on its recent achievement of NVIDIA Exemplar Cloud status, provides the critical networking foundation required for enterprises and research labs to efficiently build and scale the next generation of AI.

About Lambda

Lambda, The Superintelligence Cloud, is a leader in AI cloud infrastructure serving tens of thousands of customers.

Founded in 2012 by machine learning engineers who published at NeurIPS and ICCV, Lambda builds supercomputers for AI training and inference.

Our customers range from AI researchers to enterprises and hyperscalers.

Lambda’s mission is to make compute as ubiquitous as electricity and give everyone the power of superintelligence. One person, one GPU.

  • AI Infrastructure · Data Center · NVIDIA · Silicon Photonics · GPU