
NVIDIA Enters Production With Dynamo Inference OS for AI Factories

March 17, 2026

NVIDIA today announced NVIDIA Dynamo 1.0, open source software for generative and agentic inference at scale, with widespread global adoption. Together with the NVIDIA Blackwell platform, Dynamo 1.0 enables cloud providers, AI innovators and global enterprises to deliver high-performance AI inference with unmatched scale, efficiency and speed.

Quick Intel

  • NVIDIA announced Dynamo 1.0, an open source inference operating system for AI factories, now in production with widespread global adoption.

  • Dynamo boosts the inference performance of NVIDIA Blackwell GPUs by up to 7x, lowering token costs and increasing revenue opportunity for millions of GPUs.

  • The software functions as a distributed operating system for AI factories, orchestrating GPU and memory resources across clusters for complex AI workloads.

  • Dynamo and TensorRT-LLM optimizations are integrated into open source frameworks including LangChain, vLLM, SGLang, and LMCache.

  • The NVIDIA inference platform is supported by major cloud providers AWS, Azure, Google Cloud, and OCI, along with NVIDIA Cloud Partners and AI-native companies.

  • Adopters include CoreWeave, Nebius, Together AI, Cursor, Perplexity, PayPal, Pinterest, ByteDance, Meituan, and AstraZeneca.

NVIDIA Dynamo 1.0: The Operating System for AI Factories

As agentic AI systems move into production across industries, scaling inference within a data center has become a complex resource-orchestration challenge: requests of varying sizes, modalities and performance objectives arrive in unpredictable bursts.

Just as a computer's operating system coordinates hardware and applications, Dynamo 1.0 functions as the distributed "operating system" of AI factories, seamlessly orchestrating GPU and memory resources across the cluster to power complex AI workloads. In recent industry benchmarks, Dynamo boosted the inference performance of NVIDIA Blackwell GPUs by up to 7x, lowering token cost and increasing revenue opportunity for millions of GPUs with free, open source software.

"Inference is the engine of intelligence, powering every query, every agent and every application," said Jensen Huang, founder and CEO of NVIDIA. "With NVIDIA Dynamo, we've created the first-ever 'operating system' for AI factories. The rapid adoption across our ecosystem shows this next wave of agentic AI is here, and NVIDIA is powering it at global scale."

Dynamo 1.0 splits inference work across GPUs by adding smarter "traffic control" and the ability to move data between GPUs and lower-cost storage, reducing wasted work and easing memory limits. For agentic AI and long prompts, it can route requests to GPUs that already have the most relevant "short-term memory" from earlier steps, then offload that memory when it is not needed.
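The KV-cache-aware routing idea described above can be illustrated with a toy scheduler: send each request to the worker that already holds the longest matching prompt prefix in its "short-term memory" (KV cache), falling back to the least-loaded worker when nothing matches. This is a minimal sketch of the concept only; all names and structures here are illustrative assumptions, not the Dynamo API.

```python
# Toy sketch of KV-cache-aware request routing (illustrative, not Dynamo's API).
from dataclasses import dataclass, field

@dataclass
class Worker:
    name: str
    load: int = 0                                  # outstanding requests
    cached_prefixes: set = field(default_factory=set)

def longest_cached_prefix(worker, tokens):
    """Length of the longest prompt prefix this worker has in its KV cache."""
    for n in range(len(tokens), 0, -1):
        if tuple(tokens[:n]) in worker.cached_prefixes:
            return n
    return 0

def route(workers, tokens):
    """Pick the worker with the most reusable cache; tie-break on lowest load."""
    best = max(workers, key=lambda w: (longest_cached_prefix(w, tokens), -w.load))
    best.load += 1
    # Record every prefix of this prompt so later requests can hit the cache.
    for n in range(1, len(tokens) + 1):
        best.cached_prefixes.add(tuple(tokens[:n]))
    return best

# Usage: the second request shares a prefix with the first, so it is routed
# to the same worker; an unrelated request goes to the least-loaded one.
workers = [Worker("gpu0"), Worker("gpu1")]
w1 = route(workers, [1, 2, 3, 4])
w2 = route(workers, [1, 2, 5])   # shares prefix [1, 2] -> reuses w1's cache
w3 = route(workers, [9, 9])      # no overlap -> least-loaded worker
```

A production router would also offload cold cache entries to lower-cost storage rather than keeping every prefix resident, which is the memory-offload behavior the paragraph above describes.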

NVIDIA Inference Platform Gains Momentum

NVIDIA is accelerating the open source ecosystem by integrating Dynamo and NVIDIA TensorRT-LLM library optimizations into popular frameworks from providers such as LangChain, llm-d, LMCache, SGLang, vLLM and more. Core Dynamo building blocks like KVBM for smarter memory management, NVIDIA NIXL for fast GPU-to-GPU data movement and NVIDIA Grove for simplified scaling are also available as standalone modules. NVIDIA also contributes TensorRT-LLM CUDA kernels to the FlashInfer project so they can be natively integrated into open source frameworks.

The NVIDIA inference platform is supported across the AI ecosystem, including cloud service providers Amazon Web Services (AWS), Microsoft Azure, Google Cloud, and OCI; NVIDIA Cloud Partners Alibaba Cloud, CoreWeave, Crusoe, DigitalOcean, Gcore, GMI Cloud, Lightning AI, Nebius, Nscale, Together AI, and Vultr; AI-native companies Cursor, Hebbia, and Perplexity; inference endpoint providers Baseten, Deep Infra, and Fireworks; and global enterprises AstraZeneca, BlackRock, ByteDance, Coupang, Instacart, Meituan, PayPal, Pinterest, Shopee, and SoftBank Corp.

"As AI moves from experimental pilots to continuous, large-scale production, the underlying infrastructure must be as dynamic as the models it supports," said Chen Goldberg, executive vice president of product and engineering at CoreWeave. "Supporting NVIDIA Dynamo allows us to offer a more seamless, resilient environment for deploying complex AI agents. This foundation provides the durability and high-performance orchestration required to move the industry's most ambitious agentic workloads into global production."

"Delivering reliable AI inference at scale isn't just about powerful GPUs, it's about the software that turns that performance into real customer outcomes," said Danila Shtan, chief technology officer of Nebius. "We value how NVIDIA's software stack, from Dynamo to TensorRT-LLM, brings deep optimization, predictable performance and faster time to deployment, helping us offer customers a simpler, higher-performance path to production AI."

"Delivering an intuitive, multimodal AI experience to hundreds of millions of users requires real-time intelligence at global scale," said Matt Madrigal, chief technology officer of Pinterest. "As a significant adopter of open source, we're committed to building scalable AI technologies. With NVIDIA Dynamo optimizing our deployment, we're expanding the seamless and personalized experiences we deliver, powered by high-performance AI infrastructure."

"AI natives require inference that can reliably and efficiently scale with their application," said Vipul Ved Prakash, cofounder and CEO of Together AI. "NVIDIA Dynamo 1.0, combined with cutting-edge inference research from Together AI, helps us deliver a high-performance stack to offer accelerated, cost-effective inference for large-scale production workloads."

About NVIDIA

NVIDIA is the world leader in AI and accelerated computing.
