
In an era where AI is transforming enterprise operations, Mirantis has unveiled MOSK 25.2, the latest release of Mirantis OpenStack for Kubernetes, designed to streamline cloud management and strengthen support for GPU-intensive AI workloads alongside conventional business applications. The release comes at a critical time, as organizations grapple with the need for scalable, secure infrastructure that preserves data sovereignty and operational efficiency.
MOSK 25.2 addresses the escalating demands of AI adoption by enabling organizations to scale infrastructure for high-throughput training while ensuring data control and efficient orchestration. As highlighted by Deloitte's insights on infrastructure evolution, enterprises are prioritizing solutions that support data locality and performance in hybrid, sovereign, and private clouds. This version of MOSK introduces capabilities tailored for compute, networking, and storage management, allowing seamless handling of both AI-driven and traditional applications.
A standout advancement is support for disconnected operations, allowing entire OpenStack clouds to run without internet access. This is particularly valuable in industries such as government and defense, where security protocols mandate that every artifact entering the datacenter be scanned and approved. By letting operators track upstream innovation while retaining full control of their data, MOSK 25.2 is well suited to AI model training in sensitive environments.
Networking receives significant upgrades in MOSK 25.2, promoting smarter and more resilient infrastructure. The inclusion of Open Virtual Network (OVN) 24.03 brings performance improvements and the latest security updates, along with a validated migration path from the legacy Open vSwitch (OVS) backend to a more contemporary one suited to large-scale OpenStack deployments. For those seeking alternatives, OpenSDN 24.1 provides a refreshed codebase with broader IPv6 support, improving compatibility with modern networks.
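For readers unfamiliar with what an ML2/OVN backend looks like at the Neutron level, here is a minimal configuration sketch using standard upstream option names. The file path and all values are illustrative assumptions, not MOSK 25.2 defaults; in MOSK, such settings are typically managed through the platform's lifecycle tooling rather than edited by hand:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative path and values)
[ml2]
mechanism_drivers = ovn
type_drivers = geneve,flat
tenant_network_types = geneve

[ml2_type_geneve]
# Geneve replaces VLAN/VXLAN tunnels in the OVN backend
vni_ranges = 1:65536
max_header_size = 38

[ovn]
# Northbound/southbound OVSDB endpoints (placeholder addresses)
ovn_nb_connection = tcp:192.0.2.10:6641
ovn_sb_connection = tcp:192.0.2.10:6642
enable_distributed_floating_ip = true
```

The option names shown are standard upstream Neutron ML2/OVN settings; only their values and the deployment mechanics differ between distributions.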
Scale-out networking features full Layer 3 capabilities on bare metal, enabling expansion across racks without relying on VLAN stretching. Coupled with proactive health monitoring—including connectivity checks and early alerts for switch or routing anomalies—this ensures uninterrupted operations in AI-ready private clouds. These enhancements reduce downtime and optimize resource allocation for Kubernetes-orchestrated environments.
For hybrid deployments combining virtual machines and bare metal, MOSK 25.2 simplifies AI cloud management. It includes recovery mechanisms for bare-metal GPU servers even during network disruptions, ensuring continuous availability for intensive training tasks. Additionally, these servers can be effortlessly connected to project-specific networks alongside VMs, facilitating high-performance workflows without compromising security.
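To make the hybrid VM-plus-bare-metal workflow concrete, the sketch below shows the generic upstream OpenStack CLI flow for placing a bare-metal GPU server and a VM on the same project network. All resource names, the flavor, and the image are placeholders, and MOSK may wrap these steps in its own tooling, so treat this as an illustration of the concept rather than a MOSK-specific procedure:

```shell
# Create a project network and subnet shared by VMs and bare-metal nodes
# (names and CIDR are placeholders).
openstack network create ai-train-net
openstack subnet create ai-train-subnet \
  --network ai-train-net --subnet-range 10.20.0.0/24

# Boot a bare-metal GPU server on that network via an Ironic-backed
# flavor (flavor and image names are illustrative).
openstack server create gpu-node-01 \
  --flavor baremetal-gpu \
  --image ubuntu-22.04 \
  --network ai-train-net

# Boot a companion VM on the same network for orchestration tasks.
openstack server create trainer-vm-01 \
  --flavor m1.large \
  --image ubuntu-22.04 \
  --network ai-train-net
```

Because both instances share `ai-train-net`, training traffic between the VM and the GPU server stays on the project network without any extra bridging.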
The platform maintains its strength in on-premises private clouds, supporting both cloud-native and legacy workloads through automated lifecycle management—from bare-metal provisioning to software configuration. Centralized tools for logging, monitoring, and alerting further empower enterprises to maintain reliability and sovereignty over application data in any deployment scenario.
As organizations navigate the complexities of AI infrastructure, MOSK 25.2 positions Mirantis as a leader in delivering Kubernetes-native solutions that balance innovation with control. This release not only future-proofs private and sovereign clouds but also empowers businesses to deploy scalable, secure environments that drive AI initiatives forward without operational hurdles.
Mirantis delivers the fastest path to enterprise AI at scale, with full-stack AI infrastructure technology that removes GPU infrastructure complexity and streamlines operations across the AI lifecycle, from Metal-to-Model. Today, all infrastructure is AI infrastructure, and Mirantis provides the end-to-end automation, enterprise security and governance, and deep expertise in Kubernetes orchestration that organizations need to reduce time to market and efficiently scale cloud native, virtualized, and GPU-powered applications across any environment – on-premises, public cloud, hybrid, or edge.