Mirantis, a provider of Kubernetes-native infrastructure for AI, has announced the validation of Supermicro’s modular server architecture with Mirantis k0rdent. This combination automates operations for sovereign AI and hybrid GPU cloud environments, delivering a verified Kubernetes-native stack for large-scale GPU-accelerated infrastructure.
The validated stack gives AI data center builders and neocloud providers a fully automated, Kubernetes-native platform tailored for AI workloads, enabling them to deploy and manage GPU-accelerated infrastructure efficiently at enterprise scale.
As AI adoption accelerates, infrastructure teams require full-stack solutions that streamline GPU deployment, optimize resource utilization, and enforce strict security and compliance standards. Mirantis k0rdent AI meets these demands through automated provisioning, comprehensive lifecycle management, and orchestration across the entire AI stack—from bare metal hardware to AI model deployment.
“AI infrastructure is becoming too complex to manage manually,” said Kevin Kamel, vice president of AI products, Mirantis. “With k0rdent, organizations can move from assembling hardware components to composing fully automated AI platforms, dramatically reducing deployment friction while maintaining control, performance, and compliance.”
The integration automatically discovers and provisions Supermicro nodes within Kubernetes environments, removing manual steps and accelerating time to production. By leveraging Supermicro hardware with GPU operator-driven acceleration and Mirantis k0rdent automation, enterprises can unify legacy workloads and next-generation AI applications on a single bare-metal foundation.
This approach supports sovereign AI initiatives by providing greater control over data and infrastructure, alongside hybrid cloud flexibility for organizations balancing on-premises and public cloud resources.
“As project architect for the Texas Tech University System, ThisWay Global partnered with Supermicro to deploy a transformative next-generation compute cluster powered by NVIDIA,” said Angela Hood, CEO and founder, ThisWay Global, a pioneering AI software innovator with R&D roots from the University of Cambridge ideaSpace. “This is not just infrastructure, it is the foundation for sovereign-scale AI, accelerated research, and enterprise innovation.”
“To fully harness its computational intensity and maximize ROI, Mirantis’ k0rdent and managed services, alongside Amalgamy.ai’s intelligent orchestration technologies, form the technology stack,” she continued. “Together, we are ensuring that the most modern computational cluster in Texas is also the most powerful, purpose-built deployment to drive breakthrough discovery, scalable AI, and the future of high-performance computing.”
To learn more about deploying k0rdent AI on Supermicro GPU servers, read the blog post, “Supercharging Supermicro Bare Metal with Mirantis k0rdent AI,” and the accompanying solution brief.
About Mirantis
Mirantis delivers the fastest path to profitable, scalable GPU cloud infrastructure for neoclouds and enterprise AI factories, with full-stack AI infrastructure technology that removes complexity and streamlines operations across the AI lifecycle, from Metal-to-Model. Through k0rdent AI and strategic partnerships with NVIDIA, Mirantis enables organizations to transform GPU cloud economics with production-grade multi-tenancy, intelligent workload orchestration, and automated operations that maximize utilization and profitability. With more than 20 years’ experience delivering mission-critical open source cloud solutions, Mirantis provides the end-to-end automation, enterprise security and governance, and deep expertise in Kubernetes and GPU orchestration that organizations need to reduce time to market and efficiently scale cloud native, virtualized, and GPU-powered applications across any environment: on-premises, public cloud, hybrid, or edge.