
Spectro Cloud and WEKA Partner to Bring Data Closer to AI Workloads

March 18, 2026

At NVIDIA GTC, Spectro Cloud and WEKA announced a partnership to simplify and accelerate the deployment of the NVIDIA AI Data Platform, a next-generation reference architecture that integrates NVIDIA-accelerated computing, networking, and AI-ready storage to deliver high-throughput, low-latency data pipelines for AI workloads.

Quick Intel

  • Spectro Cloud and WEKA announced a partnership to simplify and accelerate deployment of the NVIDIA AI Data Platform reference architecture.

  • The collaboration combines Spectro Cloud's PaletteAI platform for automated AI infrastructure with WEKA's NeuralMesh intelligent storage solution.

  • One-click AI Data Platform deployment with PaletteAI enables end-to-end provisioning and configuration of validated stacks incorporating NVIDIA BlueField DPUs, Spectrum-X networking, and AI Enterprise software.

  • NeuralMesh delivers ultra-low-latency, high-throughput data access to keep GPUs continuously fed for training and inference workloads.

  • The integrated solution supports RAG, vector search, multimodal ingestion, distributed training, and long-context inference with exascale performance.

  • The validated reference architecture is available now through both companies' sales teams and authorized partners.

Spectro Cloud and WEKA Partner for AI Data Platform Deployment

"AI should deliver business impact, not infrastructure complexity," said Tenry Fu, CEO and co-founder, Spectro Cloud. "Partnering with WEKA lets us pair PaletteAI's orchestration with an AI-native data platform inside the NVIDIA AI Data Platform reference design, giving enterprises a faster, safer path to production."

The NVIDIA AI Data Platform defines how to tightly integrate compute, networking, and storage so GPUs are never starved of data, unlocking near-real-time insights and improving AI agent accuracy. The reference design leverages NVIDIA BlueField DPUs to offload and accelerate networking, storage, and security. It integrates with NVIDIA Spectrum-X Ethernet networking for predictable, lossless east-west traffic, and with NVIDIA AI Enterprise software, including NVIDIA NIM and NVIDIA NeMo microservices, to power inference and model operations at enterprise scale.

Key Capabilities of the Integrated Solution

Today's announcement operationalizes that architecture with turnkey integration, automated deployment, and AI-native data performance from Spectro Cloud and WEKA.

PaletteAI enables one-click deployment of the AI Data Platform, using a declarative, cloud-native approach to provision and configure aligned stacks end to end. Customers can now deploy a validated stack incorporating NVIDIA BlueField DPUs, NVIDIA Spectrum-X Ethernet networking, and NVIDIA AI Enterprise software, with configuration and lifecycle automation handled by PaletteAI.

NeuralMesh by WEKA delivers the ultra-low-latency, high-throughput data access required to keep GPUs continuously fed for both training and inference. Unlike traditional storage that slows as workloads grow, WEKA's NeuralMesh architecture becomes faster and more resilient at scale. It powers high-throughput pipelines for RAG, vector search, multimodal ingestion, distributed training, and long-context inference, ensuring consistent GPU utilization and exascale performance.

The solution is built on NVIDIA AI Enterprise, with PaletteAI and WEKA aligning to ensure validated interoperability with NVIDIA NIM and NeMo microservices, providing a secure, high-performance foundation from pilot to production.

For operational efficiency at scale, PaletteAI separates platform guardrails from practitioner agility, enabling governed self-service environments, policy-based networking, and day-2 operations across hybrid, multicloud, and edge locations. Combined with WEKA's intelligent monitoring and self-healing capabilities, organizations can operate AI infrastructure at massive scale without adding operational complexity.

"The NVIDIA AI Data Platform represents the future of enterprise AI infrastructure, and WEKA is proud to be one of its foundational technology partners," said Nilesh Patel, Chief Strategy Officer, WEKA. "Together with Spectro Cloud, we're transforming the AI data platform from a reference design into a living system — one that can be deployed with a click, operated at global scale, and tuned for the microsecond latency and extreme throughput that modern agentic AI and reasoning workloads demand."

Availability

The Spectro Cloud × WEKA AI Data Platform Reference Architecture with NVIDIA — validated with leading OEM platforms such as Supermicro — is available to joint customers today through both companies' sales teams and authorized partners.

About Spectro Cloud

With our Palette and PaletteAI platforms, Spectro Cloud helps enterprises and public sector organizations manage full-stack application and AI infrastructure in any environment: from edge to cloud, and from metal to model. Using the power of cloud-native technologies like Kubernetes, we give platform engineers and operations teams the flexibility to choose their perfect stack while benefiting from complete, repeatable consistency. We automate the full lifecycle of complex infrastructure at scale, for massive cost savings and better business outcomes.

About WEKA

WEKA is transforming how organizations build, run, and scale AI workflows with NeuralMesh by WEKA, its intelligent, adaptive mesh storage system. Unlike traditional data infrastructure, which becomes slower and more fragile as workloads expand, NeuralMesh becomes faster, stronger, and more efficient as it scales, growing dynamically with AI environments to provide a flexible foundation for enterprise AI and agentic AI innovation. Trusted by 30% of the Fortune 50, NeuralMesh helps leading enterprises, AI cloud providers, and AI builders optimize their GPUs, scale AI faster, and lower their innovation costs.

Tags: AI Data Platform, AI Infrastructure, AI Factory