Mirantis and Netris have announced a strategic integration that combines Mirantis Kubernetes orchestration with Netris network automation, enabling operators to deliver repeatable, multi-tenant AI clouds with hardware-enforced tenant isolation. The solution automates full-stack cluster provisioning, including data center networking across NVIDIA Spectrum-X Ethernet, Quantum-X InfiniBand, NVLink fabrics, and BlueField DPUs. By eliminating manual bottlenecks, it shortens the path from bare metal to revenue from months to days for neoclouds, telecom operators, and enterprise AI factories.
The integration addresses two major operational hurdles in building AI infrastructure: standardized Kubernetes cluster delivery and fragmented, manual network configuration. By making networking a native part of cluster provisioning, Mirantis and Netris eliminate post-deployment bolt-ons and manual processes that delay scale.
Mirantis provides composable Kubernetes-native infrastructure optimized for AI workloads, while Netris abstracts and automates the entire data center fabric—Ethernet, InfiniBand, NVLink, and DPU-based networking—delivering consistent, hardware-enforced multi-tenancy and isolation.
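To make the idea of networking as a first-class part of cluster provisioning concrete, here is a minimal, purely illustrative Python sketch. All names (`FabricSpec`, `ClusterSpec`, `provision`) are hypothetical and do not represent the actual Mirantis or Netris APIs; the point is only that the fabric and its tenant-isolation guarantees are declared in the same spec as the cluster, rather than bolted on after deployment.

```python
from dataclasses import dataclass, field

# Hypothetical types for illustration only; not the Mirantis/Netris API.
@dataclass
class FabricSpec:
    kind: str                    # e.g. "spectrum-x", "quantum-x", "nvlink"
    isolated: bool = True        # hardware-enforced tenant isolation

@dataclass
class ClusterSpec:
    name: str
    tenant: str
    gpu_nodes: int
    fabrics: list = field(default_factory=list)

def provision(spec: ClusterSpec) -> dict:
    """Simulate single-step provisioning: the network fabric is part of
    the cluster definition, not a post-deployment add-on."""
    if not spec.fabrics:
        raise ValueError("cluster spec must declare its network fabrics")
    return {
        "cluster": spec.name,
        "tenant": spec.tenant,
        "fabrics": [f.kind for f in spec.fabrics],
        "isolated": all(f.isolated for f in spec.fabrics),
    }

result = provision(ClusterSpec(
    name="ai-cloud-01",
    tenant="acme",
    gpu_nodes=8,
    fabrics=[FabricSpec("spectrum-x"), FabricSpec("nvlink")],
))
print(result)
```

Because the fabric declaration travels with the cluster spec, a request that omits networking fails validation up front, mirroring the press release's claim that the integration eliminates fragmented post-deployment configuration.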
“In deploying infrastructure for AI, the complexity of the networking is one of the primary challenges,” said Shaun O’Meara, chief technology officer, Mirantis. “Being able to integrate Netris as a building block to manage the network stack enables dynamic network orchestration supporting full-stack multi-tenancy. This approach, combined with k0rdent AI, ensures that the GPU cloud experience is seamlessly integrated.”
“Every AI cloud operator hits the same ceiling – a network that is manually provisioned, fragmented, and doesn’t keep pace with compute,” said Alex Saroyan, CEO and co-founder, Netris. “Netris eliminates that bottleneck by abstracting and automating Ethernet, InfiniBand, NVLink, and BlueField DPU fabrics. Working with Mirantis, that capability is now built into every Kubernetes cluster. Operators get the full stack without the manual work that has historically blocked scale.”
The combined solution enables:
This integration reflects Mirantis’ composable approach, allowing operators to select validated networking technologies while ensuring seamless, production-grade AI infrastructure deployment.
For more information or to request a demo, visit the Mirantis-Netris integration page.
About Mirantis
Mirantis delivers the fastest path to profitable, scalable GPU cloud infrastructure for neoclouds and enterprise AI factories, with full-stack AI infrastructure technology that removes complexity and streamlines operations across the AI lifecycle, from Metal-to-Model. Through k0rdent AI and strategic partnerships with NVIDIA, Mirantis enables organizations to transform GPU cloud economics with production-grade multi-tenancy, intelligent workload orchestration, and automated operations that maximize utilization and profitability. With more than 20 years delivering mission-critical open source cloud technologies, Mirantis provides the end-to-end automation, enterprise security and governance, and deep expertise in Kubernetes and GPU orchestration that organizations need to reduce time to market and efficiently scale cloud native, virtualized, and GPU-powered applications across any environment – on-premises, public cloud, hybrid, or edge.
About Netris
Netris is the leading provider of network automation and multi-tenancy for AI infrastructure. The Netris NAAM (Network Automation, Abstraction, and Multi-Tenancy) platform is the most widely deployed of its kind — trusted by high-growth neoclouds, sovereign AI cloud providers, AI factories, and leading AI platform providers. Netris provides native integrations across the complete AI infrastructure networking stack — Ethernet, InfiniBand, DPUs, and virtual and edge networking. Netris enables operators to get a GPU cloud business operational in weeks instead of years, provision tenants immediately with hard network isolation configured automatically, maximize GPU utilization by dynamically reallocating capacity across tenants, ensure network stability, and future-proof AI infrastructure.