
NeuReality, a leader in AI infrastructure, has revealed its next-generation 1.6 Tbps NR2 AI-SuperNIC at the AI Infra Summit 2025, marking a significant advance in scale-out AI networking. Designed to address the growing demands of AI inference and training, the NR2 AI-SuperNIC supports Ultra Ethernet Consortium (UEC) specifications and introduces in-network computing capabilities, setting a new standard for performance and scalability in AI data centers.
Key highlights:
- NeuReality launches the 1.6 Tbps NR2 AI-SuperNIC for AI infrastructure.
- Supports Ultra Ethernet Consortium (UEC) 1.0 for low-latency networking.
- Enhances scalability for AI training and inference workloads.
- Features in-network computing to optimize GPU and XPU performance.
- NR2 AI-SuperNIC available to select customers in H2 2026.
- NR1 solution receives a UEC-compliant software upgrade.
The NR2 AI-SuperNIC builds on the foundation of NeuReality’s NR1 AI-NIC, delivering 1.6 Tbps of wire speed and integrating advanced in-network computing capabilities. It leverages an upgraded AI-Hypervisor and DSP processors to support scalable AI training and inference, addressing bottlenecks in high-performance networking. By optimizing Ethernet throughput and latency, the NR2 AI-SuperNIC ensures efficient data movement across AI clusters, in deployments ranging from single racks to large-scale AI factories.
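To put 1.6 Tbps in perspective, here is a quick back-of-envelope calculation in Python. The link rate comes from the announcement; the payload sizes are illustrative assumptions, not NeuReality figures.

```python
# What 1.6 Tbps of wire speed means for AI data movement (line-rate best case,
# ignoring protocol overhead). Payload sizes below are assumed for illustration.

LINK_TBPS = 1.6                             # NR2 AI-SuperNIC wire speed (from the article)
LINK_BYTES_PER_S = LINK_TBPS * 1e12 / 8     # = 200 GB/s

payloads_gb = {
    "gradient all-reduce shard (assumed)": 10,
    "70B-parameter weights, FP16 (assumed)": 140,
}

for name, gb in payloads_gb.items():
    seconds = gb * 1e9 / LINK_BYTES_PER_S
    print(f"{name}: {gb} GB -> {seconds * 1e3:.0f} ms at line rate")
```

Even a full FP16 copy of a 70B-parameter model moves in well under a second at line rate, which is the kind of headroom scale-out training and inference traffic needs.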
With full support for UEC 1.0 specifications, the NR2 AI-SuperNIC delivers ultra-low latency and end-to-end interoperability in AI inference clusters. Alongside the TCP and RoCEv2 support carried over from the NR1, UEC Ethernet compatibility strengthens the NR2’s ability to handle multimodal AI data, including images, audio, and video. A software upgrade also brings the existing NR1 solution up to UEC 1.0 compliance, so deployed systems benefit from the improved networking performance as well.
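As a sketch of how endpoints with mixed capabilities might negotiate a transport, the hypothetical Python example below prefers UEC 1.0, falls back to RoCEv2, and finally to TCP. The transport names mirror the article; the `NicCapabilities` class and capability flags are invented for illustration and are not a NeuReality API.

```python
# Hypothetical transport selection in a cluster mixing NR2 AI-SuperNICs,
# software-upgraded NR1 systems, and legacy NICs. Invented for illustration.

from dataclasses import dataclass

@dataclass
class NicCapabilities:
    uec_1_0: bool       # UEC 1.0 compliant (NR2, or NR1 after the software upgrade)
    rocev2: bool        # RDMA over Converged Ethernet v2
    tcp: bool = True    # baseline transport, always available

def pick_transport(local: NicCapabilities, remote: NicCapabilities) -> str:
    """Prefer the lowest-latency transport that both endpoints support."""
    if local.uec_1_0 and remote.uec_1_0:
        return "UEC 1.0"
    if local.rocev2 and remote.rocev2:
        return "RoCEv2"
    return "TCP"

nr2 = NicCapabilities(uec_1_0=True, rocev2=True)
nr1_upgraded = NicCapabilities(uec_1_0=True, rocev2=True)
legacy = NicCapabilities(uec_1_0=False, rocev2=False)

print(pick_transport(nr2, nr1_upgraded))  # UEC 1.0
print(pick_transport(nr2, legacy))        # TCP
```

The point of the UEC software upgrade for the NR1 is visible in this model: once both endpoints speak UEC 1.0, the fleet converges on the low-latency path without hardware replacement.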
As AI models grow in complexity, traditional infrastructure struggles with cost, scalability, and efficiency. NeuReality’s NR2 AI-SuperNIC tackles these issues by offloading communication overhead and optimizing GPU and XPU utilization. “Our mission is to drive the architectural shift the AI infrastructure industry needs,” said Moshe Tanach, CEO at NeuReality. “From silicon to systems, our NR1 AI-CPU and the NR2 AI-SuperNIC embody the performance, openness and scalability tomorrow’s AI workloads demand.”
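A simple utilization model shows why offloading communication overhead matters. All numbers below are assumptions chosen to make the arithmetic clear, not measured NR2 results.

```python
# Illustrative model: accelerator utilization with and without communication
# offload. Every figure here is assumed for the sake of the example.

compute_ms = 8.0          # useful GPU/XPU math per step (assumed)
comm_ms = 4.0             # communication handling per step (assumed)
offload_fraction = 0.9    # share of comm work the SuperNIC absorbs (assumed)

baseline_util = compute_ms / (compute_ms + comm_ms)
offloaded_util = compute_ms / (compute_ms + comm_ms * (1 - offload_fraction))

print(f"baseline utilization: {baseline_util:.0%}")   # ~67%
print(f"with offload:         {offloaded_util:.0%}")  # ~95%
```

Under these assumptions, moving most of the communication work off the accelerator lifts utilization from roughly two-thirds to over 90 percent, which is the structural argument behind in-network computing.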
The NR2 AI-SuperNIC is designed for versatility, deployable as a standalone NIC card, co-packaged with GPUs, or on micro-server boards. This flexibility supports a range of AI workloads, from generative AI to reasoning and chain-of-thought processes, which demand high compute and data transfer rates. With availability to select customers in the second half of 2026 and mass production in 2027, the NR2 sets a new benchmark for AI infrastructure efficiency.
NeuReality’s broader vision includes the NR2 AI-CPU, which will support up to 128 cores based on Arm Neoverse Compute Subsystems V3. Optimized for real-time model coordination, token streaming, and KV-cache optimizations, the NR2 AI-CPU complements the AI-SuperNIC to deliver a modular, high-performance solution. This approach aims to eliminate structural inefficiencies in traditional CPU-GPU-NIC architectures, paving the way for cost-effective and energy-efficient AI data centers.
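To see why KV-cache optimizations are worth calling out, the standard sizing formula (2 for keys and values × layers × KV heads × head dimension × bytes per value, per token) gives a rough per-sequence memory footprint. The model shape below is an assumed 70B-class configuration, not tied to any NeuReality product.

```python
# Rough KV-cache sizing per sequence, using the standard formula.
# Model shape is an assumed 70B-class configuration (illustrative only).

layers, kv_heads, head_dim = 80, 8, 128   # assumed model shape
bytes_per_value = 2                        # FP16

kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_value

for ctx in (4_096, 32_768):
    gb = kv_bytes_per_token * ctx / 1e9
    print(f"{ctx} tokens of context -> {gb:.1f} GB of KV cache per sequence")
```

At long context lengths the cache runs into tens of gigabytes per sequence, which is why coordinating token streaming and KV-cache placement from a dedicated AI-CPU, rather than burning accelerator cycles on it, can pay off.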
NeuReality’s advancements signal a transformative shift in AI infrastructure, prioritizing performance, scalability, and energy efficiency. By addressing the limitations of legacy systems, the NR2 AI-SuperNIC and AI-CPU solutions position NeuReality as a key player in enabling the next generation of AI applications. Join NeuReality at the AI Infra Summit 2025 in Santa Clara to explore live demos and witness the future of AI infrastructure.
Founded in 2019, NeuReality is a pioneer in purpose-built AI inferencing architecture powered by the NR1® Chip, the first AI-CPU for inference orchestration. Based on an open, standards-based approach, the NR1 is fully compatible with any AI accelerator. NeuReality’s mission is to make AI accessible and ubiquitous by lowering the barriers of prohibitive cost, power consumption, and complexity, and to scale AI inference adoption through its disruptive technology. The company employs 80 people across facilities in Israel, Poland, and the U.S.