
Exostellar, a leader in self-managed AI infrastructure orchestration, has announced support for AMD Instinct GPUs, integrating its GPU-agnostic platform with AMD’s high-performance accelerators to improve the efficiency of enterprise AI infrastructure. The collaboration addresses growing demand for transparent, flexible, and cost-effective compute ecosystems.
The partnership pairs Exostellar’s GPU-agnostic orchestration platform with AMD Instinct GPUs, addressing enterprise needs for transparency and performance. The platform decouples applications from the underlying hardware, enabling flexible scheduling across heterogeneous environments. Anush Elangovan, Vice President of AI Software at AMD, stated, “Open ecosystems are key to building next-generation AI infrastructure. Together with Exostellar, we’re enabling advanced capabilities like topology-aware scheduling and resource bin-packing on AMD Instinct GPUs.”
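For readers unfamiliar with the terms in Elangovan’s quote, the sketch below shows what topology-aware bin-packing can look like in principle: GPU requests are placed on the node that leaves the least slack, preferring nodes in the same topology domain. It is a minimal, hypothetical illustration; the GpuNode class, the place_job helper, and the scoring rule are assumptions made for this example, not Exostellar’s implementation or API.

```python
# Minimal sketch of topology-aware bin-packing over a toy cluster model.
# Names (GpuNode, place_job) and the scoring rule are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class GpuNode:
    name: str
    free_gpus: int                # unallocated GPUs on this node
    topology_domain: str          # e.g. nodes sharing a switch / fabric domain
    jobs: list = field(default_factory=list)

def place_job(nodes, job_name, gpus_needed, preferred_domain=None):
    """Greedy best-fit: choose the feasible node that leaves the least slack,
    preferring nodes in the requested topology domain."""
    candidates = [n for n in nodes if n.free_gpus >= gpus_needed]
    if not candidates:
        return None  # no capacity: the job stays queued
    candidates.sort(key=lambda n: (
        0 if n.topology_domain == preferred_domain else 1,   # topology first
        n.free_gpus - gpus_needed,                            # then tightest fit
    ))
    chosen = candidates[0]
    chosen.free_gpus -= gpus_needed
    chosen.jobs.append((job_name, gpus_needed))
    return chosen.name

cluster = [GpuNode("instinct-a", 8, "pod-1"),
           GpuNode("instinct-b", 4, "pod-1"),
           GpuNode("instinct-c", 8, "pod-2")]
# Places the job on instinct-b: tightest fit inside the preferred domain.
print(place_job(cluster, "train-llm", 4, preferred_domain="pod-1"))
```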
Exostellar’s platform gives infrastructure teams centralized visibility, dynamic GPU sizing, and higher compute utilization. Its fine-grained GPU slicing, paired with the high-bandwidth memory architecture of AMD Instinct GPUs, allocates resources more efficiently, shortening queue times and speeding up experimentation cycles for AI developers, which enhances productivity.
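To make fine-grained slicing concrete, here is a minimal sketch that carves one GPU’s memory into per-workload slices so several smaller jobs can share a single accelerator. The SlicedGpu class, the memory-only slice semantics, and the job sizes are illustrative assumptions, not Exostellar’s implementation.

```python
# Toy model of fine-grained GPU slicing: carve one physical GPU's memory into
# right-sized slices per workload. Purely illustrative; real slicing also
# involves compute partitioning and isolation.
class SlicedGpu:
    def __init__(self, name, total_mem_gb):
        self.name = name
        self.free_mem_gb = total_mem_gb
        self.slices = {}                      # workload -> allocated GB

    def allocate(self, workload, mem_gb):
        """Grant a memory-bounded slice if capacity remains, else leave queued."""
        if mem_gb > self.free_mem_gb:
            return False
        self.free_mem_gb -= mem_gb
        self.slices[workload] = mem_gb
        return True

gpu = SlicedGpu("mi355x-0", total_mem_gb=288)
for job, need in [("finetune-a", 96), ("notebook-b", 24), ("serve-c", 48)]:
    print(job, "placed" if gpu.allocate(job, need) else "queued")
print("remaining:", gpu.free_mem_gb, "GB")   # 288 - 168 = 120 GB still schedulable
```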
The collaboration leverages AMD Instinct GPUs’ advanced memory capabilities, such as the MI355X’s 288 GB HBM3e and 8 TB/s bandwidth, to support larger model deployments with fewer nodes. This reduces infrastructure costs and accelerates time-to-value. Tony Shakib, Chairman and CEO of Exostellar, noted, “Our goal has always been to help customers get the most out of their AMD investments. With this collaboration, Exostellar extends that mission—because it’s not just about raw compute, but about next-level orchestration, utilization, and ROI.”
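A back-of-the-envelope calculation shows why that memory capacity translates into fewer accelerators per deployment. The model size, bytes-per-parameter, overhead factor, and the 80 GB comparison point below are generic assumptions for illustration, not vendor benchmarks or Exostellar sizing guidance.

```python
import math

def min_gpus(params_billion, bytes_per_param=2, overhead=1.2, gpu_mem_gb=288):
    """Rough minimum GPU count to hold model weights plus a flat overhead factor
    for activations/KV cache. Illustrative only; real sharding adds constraints."""
    weight_gb = params_billion * bytes_per_param          # e.g. BF16/FP16 weights
    return math.ceil(weight_gb * overhead / gpu_mem_gb)

# A hypothetical 405B-parameter model in BF16:
print(min_gpus(405, gpu_mem_gb=288))   # ~4 GPUs on 288 GB parts
print(min_gpus(405, gpu_mem_gb=80))    # ~13 GPUs on a hypothetical 80 GB part
```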
Exostellar’s platform stands out with its superior UI/UX, workload-aware GPU slicing, and dynamic scheduling tailored for AMD Instinct GPUs. Unlike other Kubernetes-based solutions, it offers precise resource right-sizing and vendor-agnostic orchestration, capabilities not found in open-source alternatives. This positions Exostellar as a next-generation orchestrator, aligned with AMD’s vision for open, efficient AI infrastructure.
Exostellar’s integration with AMD Instinct GPUs marks a significant step toward flexible, high-performance AI infrastructure. By combining advanced orchestration with cutting-edge GPU technology, Exostellar empowers enterprises to achieve greater efficiency, lower costs, and faster AI deployment, driving innovation in the compute ecosystem.
Exostellar is a leading innovator in autonomous compute orchestration and cloud optimization, headquartered in Santa Clara, California. The company’s heterogeneous xPU orchestration platform is designed to be fully GPU-agnostic, intelligently decoupling applications from underlying hardware to enable flexible scheduling across mixed infrastructure. Exostellar serves enterprises seeking transparent and efficient compute ecosystems, delivering centralized visibility, dynamic resource sizing, and optimized utilization to reduce costs and accelerate AI workloads.