
Axelera AI, a leading provider of AI hardware acceleration, announced the Metis® M.2 Max, a next-generation AI accelerator card in the M.2 form factor designed for compute-intensive edge AI applications. Unveiled in Eindhoven, Netherlands, the Metis M.2 Max doubles token throughput for large language models (LLMs) and vision-language models (VLMs) over the original Metis M.2, targeting industries such as industrial manufacturing, retail, security, healthcare, and public safety. It will begin shipping in Q4 2025 via the Axelera AI Webstore and channel partners.
Announcement: Metis M.2 Max launched on September 8, 2025, for edge AI inference.
Performance: 33% uplift on CNN workloads, 2x tokens per second for LLMs and VLMs, 6.5W average power.
Features: Up to 16GB memory, slimmer profile (27% height reduction), advanced thermal management, and enhanced security (Root of Trust, secure boot).
Applications: Industrial manufacturing, retail, security, healthcare, public safety.
Availability: Ships Q4 2025, with standard (-20°C to +70°C) and extended (-40°C to +85°C) temperature variants.
Price: Starting at $195.46 for the original Metis M.2; pricing for Max TBD.
The Metis M.2 Max, a 2280 M-key module, builds on the original Metis M.2 with:
Double Memory Bandwidth: Supports up to 16GB of memory for demanding models like LLMs and vision transformers.
Slimmer Design: 27% height reduction, available with or without a low-profile heatsink for versatile configurations.
Thermal Management: Onboard power probe adjusts performance to suit power- or thermal-constrained environments (a conceptual sketch follows this list).
Security Features: Firmware integrity via Root of Trust and secure boot, ensuring only authenticated Axelera firmware runs.
Performance Boost: Delivers 33% higher performance for convolutional neural networks (CNNs) and doubles tokens per second for LLMs and VLMs at 6.5W average power.
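The announcement does not detail how the onboard power probe steers performance, so the following Python snippet is only a minimal sketch of a generic power-budget governor under assumed names: read_power_watts and set_performance_level are hypothetical stand-ins for whatever the probe and driver actually expose, and only the 6.5W budget is taken from the spec above.

```python
# Illustrative only: a generic power-budget governor loop, NOT Axelera firmware.
# read_power_watts and set_performance_level are hypothetical stand-ins for the
# onboard power probe and the driver interface.
import random
import time

POWER_BUDGET_W = 6.5             # average power figure quoted for Metis M.2 Max
LEVELS = [0.25, 0.5, 0.75, 1.0]  # hypothetical performance scaling steps


def read_power_watts() -> float:
    """Stand-in for the onboard power probe; here we simulate a reading."""
    return random.uniform(4.0, 9.0)


def set_performance_level(level: float) -> None:
    """Stand-in for a driver call that scales clocks or duty cycle."""
    print(f"performance level set to {level:.2f}")


def governor_step(current_idx: int) -> int:
    """Step down if over budget, step up if comfortably under it."""
    power = read_power_watts()
    if power > POWER_BUDGET_W and current_idx > 0:
        current_idx -= 1
    elif power < 0.9 * POWER_BUDGET_W and current_idx < len(LEVELS) - 1:
        current_idx += 1
    set_performance_level(LEVELS[current_idx])
    return current_idx


if __name__ == "__main__":
    idx = len(LEVELS) - 1
    for _ in range(5):
        idx = governor_step(idx)
        time.sleep(0.1)
```

A real controller would presumably also weigh temperature sensors and workload priorities; this sketch tracks only a power budget.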
“Metis M.2 Max enables customers to deploy transformative edge AI applications at scale,” said Fabrizio del Maffeo, CEO of Axelera AI. “We’re setting the price-performance benchmark for the AI accelerator market.”
The Voyager SDK simplifies integration, supporting proprietary and industry-standard models like YOLOv5, YOLOv8, and LLMs (e.g., EuroLLM-1.7b, Gemma3N-e2b). Developers can deploy AI pipelines for multi-camera security, quality inspection, and generative AI with minimal effort, though some X posts note challenges with custom model exports and documentation.
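The announcement names the supported models and use cases but not the Voyager SDK's actual API, so the sketch below is a generic multi-camera detection pipeline in Python, with stub Camera and Detector classes standing in for a camera source and a compiled YOLO-style model. It illustrates the pipeline shape, not the SDK's real calls.

```python
# Generic multi-camera detection pipeline sketch; Camera and Detector are
# illustrative stubs, not the Voyager SDK API.
from dataclasses import dataclass
from typing import Iterator, List

import numpy as np


@dataclass
class Detection:
    label: str
    score: float
    box: tuple  # (x1, y1, x2, y2) in pixels


class Camera:
    """Stand-in for a camera source; yields synthetic frames."""

    def __init__(self, cam_id: int, shape=(640, 640, 3)):
        self.cam_id = cam_id
        self.shape = shape

    def frames(self, count: int = 3) -> Iterator[np.ndarray]:
        for _ in range(count):
            yield np.random.randint(0, 255, self.shape, dtype=np.uint8)


class Detector:
    """Stand-in for a YOLO-style model compiled for the accelerator."""

    def infer(self, frame: np.ndarray) -> List[Detection]:
        # A real deployment would run the compiled model here; we fake one box.
        h, w = frame.shape[:2]
        return [Detection("person", 0.9, (w // 4, h // 4, w // 2, h // 2))]


def run_pipeline(cameras: List[Camera], detector: Detector) -> None:
    """Round-robin over cameras, run detection, and report the results."""
    for cam in cameras:
        for frame in cam.frames():
            detections = detector.infer(frame)
            print(f"camera {cam.cam_id}: {len(detections)} detection(s)")


if __name__ == "__main__":
    run_pipeline([Camera(0), Camera(1)], Detector())
```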
The Metis M.2 Max leverages Axelera's Digital In-Memory Computing (D-IMC) architecture, achieving up to 214 TOPS at 15 TOPS/W efficiency. Axelera positions it ahead of competing edge platforms such as NVIDIA's Jetson Orin and Hailo on performance per watt. The edge AI market is projected to reach $110B by 2030, driven by demand for low-power, high-performance solutions. Posts on X praise Axelera's European innovation but highlight integration hurdles with non-Intel platforms.
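As a back-of-the-envelope check on the quoted figures, and assuming the 15 TOPS/W efficiency holds across operating points, peak throughput, efficiency, and the 6.5W average power figure relate as follows:

```python
# Quick arithmetic on the figures quoted in this article.
PEAK_TOPS = 214
EFFICIENCY_TOPS_PER_W = 15
AVG_POWER_W = 6.5

implied_peak_power_w = PEAK_TOPS / EFFICIENCY_TOPS_PER_W
tops_at_avg_power = EFFICIENCY_TOPS_PER_W * AVG_POWER_W

print(f"Implied power at peak throughput: {implied_peak_power_w:.1f} W")
print(f"Throughput at 6.5 W average (same efficiency): {tops_at_avg_power:.0f} TOPS")
```

At that efficiency, 214 TOPS implies roughly 14.3W at peak, while the 6.5W average corresponds to roughly 98 TOPS of sustained compute.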
Headquartered in Eindhoven, Netherlands, Axelera AI is a leader in edge AI inference, with over 180 employees across 18 countries. Its Metis AI Platform combines hardware and the Voyager SDK for computer vision and generative AI, trusted by partners like Advantech and Fogsphere.