Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, has introduced its custom Ultra Accelerator Link (UALink) scale-up offering, designed to optimize AI infrastructure with high-performance, low-latency interconnects. This solution strengthens Marvell’s portfolio for next-generation AI compute platforms.
Key highlights:
- Marvell launches a custom UALink scale-up solution for AI infrastructure.
- Features 224G SerDes, UALink Controller, and a low-latency Switch Core.
- Supports hundreds to thousands of AI accelerators in a single deployment.
- Enables low-latency, high-efficiency communication for hyperscalers.
- Built on the open standards of the UALink Consortium, with support from AMD.
- Enhances rack-scale AI with advanced packaging technologies.
Marvell’s UALink scale-up offering includes interoperable IPs such as 224G SerDes, UALink Physical Layer IP, configurable UALink Controller IP, a scalable low-latency Switch Core, and advanced packaging options such as co-packaged optics. Together, these enable direct, low-latency communication among hundreds to thousands of AI accelerators in a single deployment. “We are pleased to introduce our new custom UALink offering to enable the next generation of AI scale-up systems,” said Nick Kucharewski, senior vice president and general manager, Cloud Platform Business Unit at Marvell.
The UALink solution addresses hyperscalers’ challenges in scaling AI infrastructure while maintaining performance. Paired with Marvell’s custom silicon, it supports custom accelerators and switches, optimizing rack-scale AI performance. “We are excited to see UALink custom solutions from Marvell, which are essential to the future of AI,” said Forrest Norrod, executive vice president and general manager, Data Center Solutions Group, AMD. The open-standards approach ensures interoperability and flexible switch topologies.
As a member of the UALink Consortium, Marvell contributes to developing open standards for accelerator connectivity. The consortium, supported by industry leaders like AMD, Intel, and Google, aims to standardize high-speed, low-latency interconnects for AI and HPC. The UALink 1.0 specification, expected in Q3 2024, will support up to 1,024 accelerators, driving innovation in AI infrastructure.
Marvell’s solution targets the growing $200 billion AI infrastructure market, enabling efficient scaling for data centers handling large language models and HPC workloads. Backed by Marvell’s more than 25 years of partnerships with global technology leaders, the UALink offering enhances performance for AI training and inference while reducing latency and power consumption. This positions Marvell as a key player in the evolving AI ecosystem.
By delivering a standards-based, high-performance interconnect solution, Marvell’s UALink offering empowers hyperscalers to build scalable, efficient AI infrastructure. This innovation supports the growing demands of AI workloads, fostering breakthroughs in next-generation applications and solidifying Marvell’s leadership in data infrastructure.
To deliver the data infrastructure technology that connects the world, we're building solutions on the most powerful foundation: our partnerships with our customers. Trusted by the world's leading technology companies for over 25 years, we move, store, process and secure the world's data with semiconductor solutions designed for our customers' current needs and future ambitions. Through a process of deep collaboration and transparency, we're ultimately changing the way tomorrow's enterprise, cloud, automotive, and carrier architectures transform, for the better.