XConn Technologies and MemVerge recently showcased a groundbreaking demonstration of Compute Express Link® (CXL®) memory pooling for AI workload scale-up at the 2025 OCP Global Summit in San Jose, California. The joint demo highlighted a commercial CXL memory pool of up to 100 TiB as a breakthrough solution to the AI workload memory wall, delivering both significant performance improvements and total cost of ownership (TCO) benefits.
Demonstrated a 100 TiB CXL memory pool addressing AI workload memory wall challenges.
Integration of the XConn Apollo switch and MemVerge GISMO technology with NVIDIA Dynamo and NIXL software.
Showed over 5x performance improvement for AI inference workloads compared to SSD solutions.
Features XConn’s hybrid CXL/PCIe Apollo switch and MemVerge’s Memory Machine X software for scalability.
Provides enterprises with breakthrough efficiency, scalability, and cost benefits for AI inference and training.
Presentation and demo held at the OCP Innovation Village Booth 504 during the OCP Global Summit 2025.
As AI applications increase in scale and complexity, they face the “memory wall” challenge where traditional architectures limit memory capacity and bandwidth. CXL memory pooling enables dynamic, low-latency sharing of massive memory across CPUs and accelerators, overcoming these limits and unlocking new levels of AI performance.
The XConn Apollo switch, the industry’s first hybrid CXL/PCIe switch, combined with MemVerge’s Memory Machine X software, enables enterprises to scale large AI models efficiently. The joint demo showed more than a 5x speed-up in AI inference workloads versus standard SSD-based caching, illustrating how memory pooling can both accelerate compute and reduce operational costs.
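The intuition behind the vendors' 5x claim is simply that a cache hit in pooled memory is orders of magnitude faster than an SSD read. As a rough, hypothetical illustration (not code from the demo, and with purely illustrative latency figures), a KV cache that spills from local DRAM to a CXL memory pool before falling back to SSD might be sketched like this:

```python
# Illustrative latency assumptions in nanoseconds (not measured values):
# local DRAM ~100 ns, CXL-attached pooled memory ~300 ns, NVMe SSD ~80 us.
TIER_LATENCY_NS = {"dram": 100, "cxl_pool": 300, "ssd": 80_000}

class TieredKVCache:
    """Toy model of a KV cache that spills DRAM -> CXL pool -> SSD."""

    def __init__(self, dram_slots, pool_slots):
        self.dram, self.pool, self.ssd = {}, {}, {}
        self.dram_slots, self.pool_slots = dram_slots, pool_slots

    def put(self, key, value):
        # Fill local DRAM first, then the CXL pool, and only then spill to SSD.
        if len(self.dram) < self.dram_slots:
            self.dram[key] = value
        elif len(self.pool) < self.pool_slots:
            self.pool[key] = value
        else:
            self.ssd[key] = value

    def get(self, key):
        # Return (value, simulated access latency in ns) for whichever tier holds the key.
        for tier, store in (("dram", self.dram),
                            ("cxl_pool", self.pool),
                            ("ssd", self.ssd)):
            if key in store:
                return store[key], TIER_LATENCY_NS[tier]
        raise KeyError(key)

cache = TieredKVCache(dram_slots=1, pool_slots=2)
for i in range(4):
    cache.put(f"k{i}", i)

_, pool_ns = cache.get("k1")  # lands in the CXL pool
_, ssd_ns = cache.get("k3")   # spilled to SSD
print(f"pool hit: {pool_ns} ns, ssd hit: {ssd_ns} ns")
```

Under these assumed numbers, a pool hit is hundreds of times cheaper than an SSD read, so even a modest hit rate in a large pooled tier can yield the kind of end-to-end speed-up the demo reports; the real gain depends on workload access patterns and actual fabric latencies.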
Gerry Fan, CEO of XConn Technologies, stated, “As AI workloads hit the memory wall, a CXL memory pool is the only viable memory scale-up solution for today and the near future. It not only dramatically boosts AI workload performance but also provides significant TCO benefits.”
Charles Fan, CEO and co-founder of MemVerge, added, “By pairing GISMO with the XConn Apollo switch, we are showcasing how software-defined CXL memory can deliver the elasticity and efficiency needed for AI and HPC. This collaboration extends the possibilities of CXL 3.1 to help organizations run larger models faster and with greater resource utilization.”
Beyond this demonstration, commercial 100 TiB CXL memory pools are available in 2025, with larger deployments anticipated in 2026 and beyond. Combined with ultra-low-latency switch fabrics and intelligent memory software, this new class of memory infrastructure is set to power advancements in generative AI, real-time analytics, and in-memory database workloads.
XConn Technologies Holdings, Inc. is an innovation leader in next-generation interconnect technology for high-performance computing and AI applications. The company delivers the first hybrid CXL/PCIe switch on a single chip, setting benchmarks in scalability, flexibility, and performance.
MemVerge leads in AI memory software, enabling enterprises to expand and share memory between GPUs efficiently. MemVerge software improves performance and scalability for agentic AI workloads and data centers.