
Supermicro has launched its H14 generation AI solutions, integrating AMD Instinct MI350 Series GPUs and AMD ROCm Software for breakthrough inference performance and power efficiency. These liquid-cooled and air-cooled systems are designed to scale AI workloads while reducing data center costs.
Supermicro H14 solutions feature AMD Instinct MI350 Series GPUs.
They offer 2.304TB of HBM3e memory per 8-GPU server (288GB per GPU).
They deliver up to 1.8x the FP16/FP8 petaflops of the MI325X, with 8TB/s memory bandwidth.
The liquid-cooled 4U system cuts power consumption by up to 40%.
The platforms support AI training, inference, and HPC with new FP6 and FP4 data types.
They are built on AMD's 4th Gen CDNA architecture for scalability.
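The per-server totals in the points above follow directly from the quoted per-GPU figures; a minimal sketch of that arithmetic (assuming the 8-GPU server configuration named in the announcement):

```python
# Aggregate per-server memory from the per-GPU spec quoted above:
# 288 GB of HBM3e per MI350 Series GPU, 8 GPUs per H14 server.

GPUS_PER_SERVER = 8
HBM3E_PER_GPU_GB = 288  # GB of HBM3e per GPU, per the announcement

total_memory_tb = GPUS_PER_SERVER * HBM3E_PER_GPU_GB / 1000
print(f"HBM3e per 8-GPU server: {total_memory_tb:.3f} TB")  # 2.304 TB
```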
Supermicro’s H14 GPU solutions, powered by AMD’s 4th Gen CDNA architecture, deliver optimized performance for AI training and high-speed inference. “Supermicro continues to lead the industry with the most experience in delivering high-performance systems designed for AI and HPC applications,” said Charles Liang, president and CEO of Supermicro. With 2.304TB of HBM3e memory per 8-GPU server and up to 1.8x the petaflops of the prior generation, these systems process large AI models faster and more efficiently.
The 4U liquid-cooled system, featuring Supermicro’s enhanced Direct Liquid Cooling (DLC) architecture, reduces power consumption by up to 40%, enabling higher performance per rack. The 8U air-cooled option serves less dense environments. “By combining these GPUs with Supermicro’s proven platforms, customers can deploy fully integrated, air- or liquid-cooled racks,” said Dr. Lisa Su, CEO and Chair, AMD, highlighting flexibility for AI deployments.
The MI350 Series GPUs introduce FP6 and FP4 data types, boosting AI performance for large models. With 288GB of HBM3e per GPU, 1.5x the capacity of the prior generation, and 8TB/s of bandwidth, the systems maximize computational throughput. “AI models aren’t just increasing in size; they’re demanding faster, more efficient infrastructure,” said Paul Schell, Industry Analyst at ABI Research, noting Supermicro’s alignment with market needs.
Built on Supermicro’s Data Center Building Block Solutions, H14 systems support cloud providers and enterprises in the $200 billion AI infrastructure market. Serving clients like NVIDIA and Meta, Supermicro’s solutions enable scalable AI training and inference for applications in healthcare, finance, and scientific research, driving efficiency and innovation.
Supermicro’s H14 platforms, with AMD Instinct MI350 GPUs, offer unmatched performance and energy efficiency, positioning the company as a leader in AI infrastructure. By providing flexible, high-density options, Supermicro empowers organizations to build cost-effective, scalable data centers for the next wave of AI advancements.
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions manufacturer with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further supports our development and production, enabling next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Asia, and the Netherlands), leveraging global operations for scale and efficiency, and are optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling, or liquid cooling).
Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.