
DDN AI400X3 Breaks Records in MLPerf Storage v2.0 Benchmarks


August 5, 2025

DDN, a global leader in AI and data intelligence solutions, announced on August 4, 2025, that its AI400X3 storage appliance delivered record-breaking results in the MLPerf Storage v2.0 benchmarks. Powered by the EXAScaler® parallel file system, the compact 2U AI400X3 sets a new standard for performance density, supporting up to 640 simulated H100 GPUs and achieving 120+ GB/s throughput, enabling enterprises to scale AI workloads efficiently while minimizing power and space constraints.

Quick Intel

  • Benchmark: MLPerf Storage v2.0, testing single and multi-node AI workloads.

  • Single-Node Results: 30.6 GB/s read, 15.3 GB/s write, serving 52 (CosmoFlow) and 208 (ResNet-50) simulated H100 GPUs.

  • Multi-Node Results: 120+ GB/s read for Unet3D, supporting 640 simulated H100 GPUs (ResNet-50) and 135 (CosmoFlow).

  • Efficiency: Compact 2U, 2,400-watt appliance with Llama3-8B checkpoint load/save times of 3.4/7.7 seconds.

  • Partnerships: Trusted by NVIDIA since 2016 for internal AI clusters.

  • Applications: Genomics, medical imaging, and complex vision models.

Unmatched Performance and Efficiency

The AI400X3’s standout MLPerf Storage v2.0 results highlight its ability to eliminate data bottlenecks in AI training. In single-node tests, it achieved 30.6 GB/s read and 15.3 GB/s write throughput while serving 52 (CosmoFlow) and 208 (ResNet-50) simulated H100 GPUs. In multi-node tests, it delivered 120+ GB/s read throughput for Unet3D and supported up to 640 simulated H100 GPUs for ResNet-50, a 2x improvement over prior results. “These MLPerf results prove that DDN can keep pace with the world’s most advanced GPUs,” said Sven Oehme, CTO at DDN.

The platform’s compact 2U design and 2,400-watt power draw address datacenter space and power constraints, reducing operational costs while maintaining high performance. Frequent checkpointing, critical for resilience in large-scale AI training, is handled without compromising speed: Llama3-8B checkpoint load and save times are 3.4 and 7.7 seconds, respectively.
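To put those checkpoint times in perspective, a rough calculation can translate them into implied storage bandwidth. Note the checkpoint size below is an assumption, not a figure from DDN: roughly 8 billion parameters at 2 bytes each (bf16) gives a ~16 GB checkpoint.

```python
# Back-of-the-envelope check on the reported Llama3-8B checkpoint times.
# ASSUMPTION (not stated in the article): the checkpoint is ~8e9 parameters
# stored in bf16 (2 bytes each), i.e. about 16 GB. Optimizer state would
# make a real training checkpoint larger.

CHECKPOINT_GB = 8e9 * 2 / 1e9  # assumed checkpoint size: 16 GB

def implied_bandwidth_gbps(size_gb: float, seconds: float) -> float:
    """Average throughput implied by moving size_gb in the given time."""
    return size_gb / seconds

load_gbps = implied_bandwidth_gbps(CHECKPOINT_GB, 3.4)  # reported load time
save_gbps = implied_bandwidth_gbps(CHECKPOINT_GB, 7.7)  # reported save time

print(f"implied load bandwidth: {load_gbps:.1f} GB/s")
print(f"implied save bandwidth: {save_gbps:.1f} GB/s")
```

Under that assumption the reported times work out to roughly 4.7 GB/s on load and 2.1 GB/s on save for a single checkpoint stream, well within the appliance’s quoted aggregate throughput.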

Industry Context and Competitive Edge

The MLPerf Storage v2.0 benchmarks, conducted by MLCommons, drew submissions from 26 organizations, including DDN, HPE, and Nutanix, reflecting the growing importance of storage in AI pipelines. Unlike competitors such as Hammerspace, whose claim of supporting 594 H100 GPUs was not validated, DDN’s results are MLCommons-verified, ensuring transparency. The AI400X3’s EXAScaler file system and parallel architecture outperform rivals, with DDN claiming 1000% better per-node efficiency than other on-premises solutions.

Strategic Impact

DDN’s AI400X3 supports diverse AI applications, from genomics to medical imaging, and powers NVIDIA’s internal AI clusters. Its scalability suits both small-scale deployments and hyperscale data centers, aligning with Japan’s AI infrastructure push and global trends, where the AI storage market is projected to grow at a 28% CAGR through 2030.

 

About DDN

DDN is the world’s leading AI and data intelligence company, empowering organizations to maximize the value of their data with end-to-end HPC and AI-focused solutions. Its customers range from the largest global enterprises and AI hyperscalers to cutting-edge research centers, all leveraging DDN’s proven data intelligence platform for scalable, secure, and high-performance AI deployments that drive 10x returns.

Tags: DDN, AI400X3, MLPerf Storage, AI Infrastructure, Data Intelligence