
XConn & MemVerge Demo CXL Memory Pool for AI at SC25


November 18, 2025

The exponential growth of AI is creating a critical bottleneck in data center infrastructure: memory. XConn Technologies, a leader in interconnect solutions, and MemVerge, a leader in Big Memory software, have announced a joint demonstration of a Compute Express Link (CXL) memory pool designed to overcome this "AI memory wall." The live demo at Supercomputing 2025 (SC25) showcases a solution that enables dynamic, rack-scale memory sharing to dramatically improve the performance and efficiency of large-scale AI inference.

Quick Intel

  • XConn and MemVerge are demoing a CXL memory pool for AI infrastructure at SC25.

  • The solution addresses the memory bottleneck limiting AI inference workloads.

  • It demonstrates over 5x performance gains compared to SSD-based caching.

  • The architecture allows memory to be dynamically shared across CPUs and GPUs.

  • It scales to hundreds of terabytes, reducing total cost of ownership (TCO).

  • The tech is built on XConn's Apollo switch and MemVerge's Gismo software.

Breaking the AI Memory Wall

As AI model sizes and workloads explode, memory bandwidth and latency have become the dominant constraints, lagging far behind computational power. This "memory wall" severely limits the performance of memory-intensive tasks like LLM inference and retrieval-augmented generation (RAG). The joint solution from XConn and MemVerge directly tackles this by creating a software-defined, elastic pool of CXL memory that can be shared across an entire rack. Gerry Fan, CEO of XConn Technologies, stated, "Our collaboration with MemVerge demonstrates that CXL memory pooling at 100 TiB and beyond is production-ready, not theoretical. This is the architecture that makes large-scale AI inference truly feasible."
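To see why inference runs into this wall, a back-of-the-envelope KV-cache calculation helps. The sketch below uses illustrative model parameters (an 80-layer model with 8 KV heads of dimension 128 in fp16, roughly Llama-70B-class); these figures are assumptions for illustration, not numbers from the announcement.

```python
# Illustrative estimate of KV-cache size for LLM inference.
# The cache stores one key and one value tensor per layer, so memory
# grows linearly with context length and batch size -- quickly exceeding
# what a single server's local DRAM can hold.

def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 seq_len: int, batch: int, bytes_per_elem: int = 2) -> float:
    """KV-cache footprint in GiB; 2x accounts for keys plus values."""
    total_bytes = 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_elem
    return total_bytes / 2**30

# Hypothetical workload: 128k-token contexts, 32 concurrent requests, fp16.
print(f"{kv_cache_gib(80, 8, 128, 128_000, 32):.0f} GiB")  # prints "1250 GiB"
```

At 1,250 GiB for the cache alone, the workload already exceeds typical per-server DRAM, which is the gap a rack-scale shared pool is meant to close.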

A Scalable Architecture for Modern AI

The demonstration features a rack-scale solution built around XConn's Apollo hybrid CXL/PCIe switch and MemVerge's Gismo technology. It shows how AI inference workloads can offload massive Key-Value (KV) cache resources to the shared pool, dynamically allocating memory as needed between the prefill and decode stages of inference. This approach reduces the need for over-provisioning expensive memory in every server. Charles Fan, CEO of MemVerge, commented, "Memory has become the new frontier of AI infrastructure innovation. By using MemVerge GISMO with XConn's Apollo switch, we're showcasing software-defined, elastic CXL memory that delivers the performance and flexibility needed to power the next wave of agentic AI and hyperscale inference."
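The elastic-pool behavior described above can be sketched in a few lines. This is a conceptual model only, assuming a pool with fixed total capacity that servers borrow from and return to; the class and method names are hypothetical, not the Gismo or Apollo APIs.

```python
# Conceptual model of an elastic, shared memory pool: servers borrow
# KV-cache capacity on demand instead of over-provisioning local memory,
# and return it when a stage (e.g. decode) completes.

class SharedMemoryPool:
    """Models a rack-scale memory pool with a fixed total capacity."""

    def __init__(self, capacity_gib: int):
        self.capacity_gib = capacity_gib
        self.allocations: dict[str, int] = {}  # tenant -> GiB held

    def used_gib(self) -> int:
        return sum(self.allocations.values())

    def allocate(self, tenant: str, gib: int) -> bool:
        """Grant memory only if the pool still has headroom."""
        if self.used_gib() + gib > self.capacity_gib:
            return False
        self.allocations[tenant] = self.allocations.get(tenant, 0) + gib
        return True

    def release(self, tenant: str) -> None:
        """Return a tenant's memory to the pool for others to reuse."""
        self.allocations.pop(tenant, None)


# Usage: two inference servers share one 1 TiB pool. Server A holds a
# large KV cache during prefill, then frees it so server B can reuse
# the same physical capacity.
pool = SharedMemoryPool(capacity_gib=1024)
assert pool.allocate("server-a/kv-cache", 768)      # prefill stage
assert not pool.allocate("server-b/kv-cache", 512)  # pool nearly full
pool.release("server-a/kv-cache")                   # stage complete
assert pool.allocate("server-b/kv-cache", 512)      # capacity reclaimed
```

The point of the sketch is the economics: capacity is provisioned once at the rack level and time-shared, rather than duplicated in every server.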


About XConn Technologies

XConn Technologies Holdings, Inc. (XConn) is the innovation leader in next-generation interconnect technology for high-performance computing and AI applications. The company is the industry's first to deliver a hybrid switch supporting both CXL and PCIe on a single chip. Privately funded, XConn is setting the benchmark for data center interconnect with scalability, flexibility, and performance. For more information, visit xconn-tech.com.

About MemVerge

MemVerge is a leading provider of AI memory software. MemVerge solutions help enterprises stand up long-term memory for their agentic AI initiatives, and help AI data centers improve performance and efficiency by expanding and sharing memory across GPUs.
