Myrtle.ai, a leader in machine learning inference acceleration, has integrated its VOLLO inference accelerator with Napatech’s NT400D1x series SmartNICs. The collaboration delivers ultra-low-latency machine learning inference, with compute latencies below one microsecond, for industries that depend on high-speed data processing.
Myrtle.ai’s VOLLO now supports Napatech NT400D1x SmartNICs.
Achieves ML inference latencies under one microsecond.
Supports models like LSTM, CNN, MLP, Random Forests, and more.
Targets financial trading, telecom, cybersecurity, and network management.
VOLLO compiler simplifies ML deployment on SmartNICs.
Enhances efficiency, security, and profitability in high-speed applications.
Myrtle.ai’s VOLLO inference accelerator, now compatible with Napatech’s NT400D1x SmartNICs, achieves industry-leading machine learning inference latencies of less than one microsecond. This allows inference to run directly at the network interface, optimizing performance for time-sensitive applications. The integration supports a wide range of machine learning models, including LSTMs, CNNs, MLPs, Random Forests, and gradient-boosted decision trees, ensuring versatility across industries.
The VOLLO-Napatech integration is designed for sectors where ultra-low latency provides a competitive edge. In financial trading, sub-microsecond inference enhances automated trading systems, improving profitability. In wireless telecommunications, it supports real-time data processing for better network performance. Cybersecurity and network management applications benefit from faster threat detection and response, improving security and operational efficiency.
The VOLLO compiler, available at vollo.myrtle.ai, streamlines the deployment of machine learning models on Napatech SmartNICs. “We recognized that the latency leader in the STAC® ML benchmarks could bring real value to our customers in the finance market as they increase their adoption of ML for auto trading,” said Jarrod J.S. Siket, Chief Product & Marketing Officer at Napatech. The compiler’s user-friendly design empowers developers to optimize models for ultra-low latency with ease.
This collaboration strengthens Myrtle.ai’s position in machine learning inference and Napatech’s SmartNIC portfolio. “We’re excited to be working with the world leader in SmartNIC sales to enable unprecedented low latencies for ML inference,” said Peter Baldwin, CEO of Myrtle.ai. The partnership addresses the growing demand for high-performance, low-latency solutions, positioning both companies as leaders in AI-driven innovation.
The integration of VOLLO with Napatech SmartNICs marks a significant advancement in machine learning inference, offering unmatched speed and efficiency. Businesses in finance, telecom, and cybersecurity can now leverage this technology to drive performance and innovation.
Myrtle.ai is an AI/ML software company that delivers world-class inference accelerators on FPGA-based platforms from all the leading FPGA suppliers. With neural network expertise across the complete spectrum of ML networks, Myrtle.ai has delivered accelerators for fintech, speech processing, and recommendation systems.