Mirantis OpenStack for Kubernetes 26.1 Adds AI Assistant
March 27, 2026

Mirantis has released Mirantis OpenStack for Kubernetes (MOSK) 26.1, a major update that enhances efficiency and performance for cloud providers, neoclouds, and enterprises running OpenStack clouds. The release introduces an AI assistant for documentation and operational guidance while adding improvements in networking, security, compliance, and reliability.

Quick Intel

  • MOSK 26.1 introduces an AI assistant that provides accurate answers from technical documentation and connected knowledge sources for faster troubleshooting and operations.
  • Operators can now track energy consumption and correlate workloads with power costs, especially useful for GPU clusters and AI factories.
  • Networking enhancements include expanded OVN support with VPNaaS, QoS for north-south traffic, and SR-IOV for latency-sensitive workloads.
  • Granular control over Instance HA (Masakari) allows prioritization of critical workloads during failover.
  • Cryptographically signed SBOMs in CycloneDX format improve supply chain security and compliance for regulated industries.
  • Management cluster resilience is strengthened with external backup storage, encrypted backups, and automated backup workflows.

The new AI assistant in MOSK 26.1 allows cloud operators to ask task-oriented questions and receive targeted guidance instead of manually searching documentation. This reduces time spent on procedures and speeds up troubleshooting for high-performance workloads.

Energy monitoring capabilities now enable tracking of consumption and correlation with power costs, delivering valuable insights for energy-intensive environments such as GPU clusters and AI factories.
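The article does not describe MOSK's metrics interface, but the underlying idea of correlating per-workload energy readings with power cost can be sketched in a few lines. The function and data below are purely illustrative assumptions, not Mirantis APIs:

```python
# Hypothetical sketch: mapping per-workload energy consumption (kWh) to
# power cost. Workload names and readings are made up for illustration;
# MOSK's actual energy-monitoring interface is not documented here.

def workload_power_cost(energy_kwh: dict[str, float],
                        price_per_kwh: float) -> dict[str, float]:
    """Return each workload's energy cost, rounded to cents."""
    return {name: round(kwh * price_per_kwh, 2)
            for name, kwh in energy_kwh.items()}

# Example: a GPU training job vs. a small control-plane VM at $0.12/kWh.
readings = {"gpu-training-job": 840.0, "control-plane-vm": 12.5}
costs = workload_power_cost(readings, price_per_kwh=0.12)
print(costs)  # {'gpu-training-job': 100.8, 'control-plane-vm': 1.5}
```

This kind of correlation is what makes the feature most relevant to GPU clusters, where a single training job can dominate a rack's power draw.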

Enhanced Networking Capabilities

MOSK 26.1 expands Open Virtual Network (OVN) support with several key features:

  • VPNaaS for secure connectivity between OpenStack clouds, on-premises environments, or different MOSK clusters.
  • Quality of Service (QoS) for north-south traffic to enforce predictable network behavior for external-facing workloads.
  • SR-IOV support to improve performance for latency-sensitive applications by providing virtual machines with direct hardware access.

Additional Operational Improvements

  • Granular restriction of workloads’ access to Instance HA (OpenStack Masakari) lets operators align failover policies with business priorities and infrastructure capacity, so critical workloads are recovered first.
  • Cryptographically signed Software Bills of Materials (SBOMs) in CycloneDX format provide transparency into software components, supporting vulnerability management, license compliance, and supply chain assurance — especially important for public sector, healthcare, financial services, and telecommunications.
  • Improved resilience for MOSK management clusters includes external backup storage support, encrypted backups, manual and scheduled workflows, and automatic backups before cluster updates.

These enhancements help cloud operators run more reliable, secure, and efficient OpenStack environments while supporting demanding AI and high-performance computing workloads.

About Mirantis

Mirantis delivers the fastest path to profitable, scalable GPU cloud infrastructure for neoclouds and enterprise AI factories, with full-stack AI infrastructure technology that removes complexity and streamlines operations across the AI lifecycle, from Metal-to-Model. Through k0rdent AI and strategic partnerships, Mirantis enables organizations to transform GPU cloud economics with production-grade multi-tenancy, intelligent workload orchestration, and automated operations that maximize utilization and profitability. With more than 20 years delivering mission-critical open source cloud solutions, Mirantis provides the end-to-end automation, enterprise security and governance, and deep expertise in Kubernetes and GPU orchestration that organizations need to reduce time to market and efficiently scale cloud native, virtualized, and GPU-powered applications across any environment – on-premises, public cloud, hybrid, or edge.
