
Mirantis & Netris Unify Kubernetes and Networking for AI Clouds


March 12, 2026

Mirantis and Netris have announced a strategic integration that combines Mirantis Kubernetes orchestration with Netris network automation, enabling operators to deliver repeatable, multi-tenant AI clouds with hardware-enforced tenant isolation. The solution automates full-stack cluster provisioning, including data center networking across NVIDIA Spectrum-X Ethernet, Quantum-X InfiniBand, NVLink fabrics, and BlueField DPUs. By eliminating manual bottlenecks, it shortens the path from bare metal to revenue from months to days for neoclouds, telecom operators, and enterprise AI factories.

Quick Intel

  • Mirantis k0rdent AI orchestrates Kubernetes cluster lifecycle while Netris automates network provisioning, abstraction, and multi-tenancy at the hardware layer.
  • Integration delivers automated, push-button deployment of GPU clusters with predictable AI performance, hardware-enforced isolation, and higher tenant density.
  • Supports NVIDIA BlueField DPUs for tenant networking, offloading CPU cores and improving GPU utilization and operating efficiency.
  • Enables east-west traffic automation (InfiniBand, RoCE) and north-south ingress/egress for stable, scalable AI workloads.
  • Hardware-level isolation, fault tolerance, and data safety make the platform well suited to regulated, sovereign, and high-security environments.
  • Operators gain greater ROI through faster rollout, maximized resource utilization, reduced manual effort, and production-grade multi-tenancy.

Unified Orchestration and Networking for AI Clouds

The integration addresses two major operational hurdles in building AI infrastructure: standardized Kubernetes cluster delivery and fragmented, manual network configuration. By making networking a native part of cluster provisioning, Mirantis and Netris eliminate post-deployment bolt-ons and manual processes that delay scale.

Mirantis provides composable Kubernetes-native infrastructure optimized for AI workloads, while Netris abstracts and automates the entire data center fabric—Ethernet, InfiniBand, NVLink, and DPU-based networking—delivering consistent, hardware-enforced multi-tenancy and isolation.

“In deploying infrastructure for AI, the complexity of the networking is one of the primary challenges,” said Shaun O’Meara, chief technology officer, Mirantis. “Being able to integrate Netris as a building block to manage the network stack enables dynamic network orchestration supporting full-stack multi-tenancy. This approach, combined with k0rdent AI, ensures that the GPU cloud experience is seamlessly integrated.”

“Every AI cloud operator hits the same ceiling – a network that is manually provisioned, fragmented, and doesn’t keep pace with compute,” said Alex Saroyan, CEO and co-founder, Netris. “Netris eliminates that bottleneck by abstracting and automating Ethernet, InfiniBand, NVLink, and BlueField DPU fabrics. Working with Mirantis, that capability is now built into every Kubernetes cluster. Operators get the full stack without the manual work that has historically blocked scale.”

Key Capabilities Delivered

The combined solution enables:

  • Fully automated provisioning of Kubernetes clusters with integrated networking across NVIDIA fabrics.
  • Hardware-enforced tenant isolation at switch and DPU levels for improved security, density, and efficiency.
  • Dynamic resource reallocation to maximize GPU utilization across tenants.
  • Predictable performance for AI workloads with automated east-west and north-south traffic handling.
  • Reduced operational overhead and faster time-to-revenue for neoclouds, telecoms, and enterprises.
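To make the "fully automated provisioning" idea concrete, the sketch below models declaring compute and network as a single tenant spec that is provisioned in one step. All names, fields, and the `provision` function are hypothetical illustrations of the concept; they do not reflect the actual Mirantis k0rdent AI or Netris APIs.

```python
from dataclasses import dataclass

# Hypothetical sketch: field names are illustrative, not real k0rdent/Netris APIs.
# The point is that network fabric, isolation, and DPU offload are part of the
# cluster declaration, not a post-deployment bolt-on.

@dataclass
class NetworkSpec:
    fabric: str                    # e.g. "spectrum-x-ethernet", "quantum-x-infiniband"
    isolation: str = "hardware"    # tenant isolation enforced at switch/DPU level
    dpu_offload: bool = True       # offload tenant networking to BlueField-class DPUs

@dataclass
class ClusterSpec:
    tenant: str
    gpu_nodes: int
    network: NetworkSpec

def provision(spec: ClusterSpec) -> dict:
    """Hypothetical one-shot provisioning: renders a full-stack plan
    covering both the Kubernetes cluster and its network fabric."""
    return {
        "tenant": spec.tenant,
        "kubernetes": {"gpu_nodes": spec.gpu_nodes},
        "network": {
            "fabric": spec.network.fabric,
            "isolation": spec.network.isolation,
            "dpu_offload": spec.network.dpu_offload,
        },
    }

plan = provision(ClusterSpec(
    tenant="tenant-a",
    gpu_nodes=8,
    network=NetworkSpec(fabric="spectrum-x-ethernet"),
))
print(plan)
```

In this model, tenant isolation and DPU offload default to on, mirroring the announcement's emphasis that hardware-enforced multi-tenancy is the baseline rather than an optional add-on.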

This integration reflects Mirantis’ composable approach, allowing operators to select validated networking technologies while ensuring seamless, production-grade AI infrastructure deployment.

For more information or to request a demo, visit the Mirantis-Netris integration page.

About Mirantis

Mirantis delivers the fastest path to profitable, scalable GPU cloud infrastructure for neoclouds and enterprise AI factories, with full-stack AI infrastructure technology that removes complexity and streamlines operations across the AI lifecycle, from Metal-to-Model. Through k0rdent AI and strategic partnerships with NVIDIA, Mirantis enables organizations to transform GPU cloud economics with production-grade multi-tenancy, intelligent workload orchestration, and automated operations that maximize utilization and profitability. With more than 20 years delivering mission-critical open source cloud technologies, Mirantis provides the end-to-end automation, enterprise security and governance, and deep expertise in Kubernetes and GPU orchestration that organizations need to reduce time to market and efficiently scale cloud native, virtualized, and GPU-powered applications across any environment – on-premises, public cloud, hybrid, or edge.

About Netris

Netris is the leading provider of network automation and multi-tenancy for AI infrastructure. The Netris NAAM (Network Automation, Abstraction, and Multi-Tenancy) platform is the most widely deployed of its kind, trusted by high-growth neoclouds, sovereign AI cloud providers, AI factories, and leading AI platform providers. Netris provides native integrations across the complete AI infrastructure networking stack: Ethernet, InfiniBand, DPUs, and virtual and edge networking. Netris enables operators to get a GPU cloud business operational in weeks instead of years, provision tenants immediately with hard network isolation configured automatically, maximize GPU utilization by dynamically reallocating capacity across tenants, ensure network stability, and future-proof AI infrastructure.

  • Tags: AI Infrastructure, Network Automation, Multi-Tenancy