CoreWeave Launches AI Object Storage: Global Access, High Performance, Low Cost


October 16, 2025

CoreWeave, Inc., The Essential Cloud for AI, has announced the launch of CoreWeave AI Object Storage, an industry-leading, fully managed object storage service purpose-built for AI workloads. The new offering is powered by CoreWeave’s Local Object Transport Accelerator (LOTA) technology, which makes a single dataset instantly accessible anywhere in the world. Crucially, it does so without any egress charges or request/transaction fees, removing restrictions on how and where data is used.

AI performance depends fundamentally on data mobility: timely, flexible access to data is paramount for innovation. High-performance AI training requires massive datasets to sit close to GPU compute clusters, a requirement traditional cloud storage often fails to meet, constraining developers with latency, operational complexity, and high cost.

Quick Intel

  • CoreWeave AI Object Storage is a fully managed object storage service built specifically for AI workloads.

  • It is powered by Local Object Transport Accelerator (LOTA) technology for ultra-high-speed data access.

  • The service makes a single dataset instantly accessible globally with zero egress, request, or transaction fees.

  • Performance scales linearly with AI workloads and maintains superior throughput across distributed GPU nodes.

  • The architecture ensures high throughput via private interconnects and 400 Gbps-capable ports across a multi-cloud network.

  • New automatic, usage-based pricing tiers are introduced to provide over 75 percent lower storage costs for typical AI workloads.

CoreWeave Redefines AI Storage Performance

CoreWeave AI Object Storage is engineered to solve the core limitations of conventional cloud storage for modern AI. According to Peter Salanki, Co-Founder and Chief Technology Officer at CoreWeave, “As the essential cloud for AI, every decision at every layer is focused on optimizing for efficiency and performance.” He added, “Now, we are rethinking storage from the ground up. We’ve built a system where data is no longer confined by geography or cloud boundaries, giving developers the freedom to innovate without friction or hidden costs. This is a truly game-changing shift in how AI workloads operate.”

Unlike legacy storage solutions that are geographically constrained, CoreWeave AI Object Storage scales its performance alongside AI workload growth, maintaining superior throughput across distributed GPU nodes. That consistency holds whether the nodes sit in any region, on any cloud, or on-premises. A seamless multi-cloud networking backbone, featuring private interconnects, direct cloud peering, and 400 Gbps-capable ports, guarantees data integrity for trillions of objects globally. This approach spares developers the complexities of data sprawl and resource-heavy data replication.
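The announcement does not describe the client interface, but object storage services are commonly consumed through an S3-compatible API. Under that assumption (and with a placeholder endpoint, bucket, key, and credentials), a minimal sketch of pulling the same shared dataset from a GPU node in any region might look like this:

```python
# Minimal sketch, assuming an S3-compatible endpoint (an assumption, not
# confirmed by the announcement). The endpoint URL, bucket name, object key,
# and environment variables below are placeholders.
import os

import boto3

client = boto3.client(
    "s3",
    endpoint_url="https://object-storage.example-coreweave-endpoint.com",  # placeholder
    aws_access_key_id=os.environ["OBJECT_STORAGE_ACCESS_KEY"],
    aws_secret_access_key=os.environ["OBJECT_STORAGE_SECRET_KEY"],
)

# The same bucket and key can be read from GPU nodes in any region, cloud,
# or on-premises cluster; with no egress or request fees, repeated reads
# from remote compute carry no per-transfer cost penalty.
response = client.get_object(Bucket="training-data", Key="shards/shard-00001.tar")
shard = response["Body"].read()
print(f"fetched {len(shard):,} bytes")
```

In this scheme, data locality is handled by the service itself (LOTA caching hot objects near the GPUs, per the analyst comments below) rather than by per-region copies maintained by the application.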

Simplified Pricing and Enhanced Cost Efficiency

A significant component of the new offering is the introduction of three automatic, usage-based pricing tiers. The model is designed to deliver more than 75 percent lower storage costs for the typical AI workloads of CoreWeave’s existing customers.

The transparent pricing structure eliminates confusing charges, including:

  • No egress fees (charges for moving data out).

  • No request fees.

  • No tiering fees.

By aligning costs directly with actual usage, the new model offers greater flexibility and visibility, establishing CoreWeave AI Object Storage as one of the most cost-efficient and developer-friendly storage options available for the AI industry.
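To make the fee structure concrete, here is a purely illustrative calculation; every rate below is hypothetical and not drawn from CoreWeave’s or any other provider’s published pricing. It only shows how removing per-GB egress and per-request charges changes the shape of a monthly bill for a read-heavy AI workload:

```python
# Purely illustrative: all rates below are hypothetical placeholders, not
# real pricing from CoreWeave or any other provider.
STORED_GB = 100_000          # dataset held in object storage (100 TB)
READ_GB = 500_000            # data pulled to GPU clusters per month (500 TB)
READ_REQUESTS = 50_000_000   # GET requests per month

# Hypothetical "legacy" rate card with egress and request fees.
legacy_storage_per_gb = 0.023
legacy_egress_per_gb = 0.09
legacy_per_1k_requests = 0.0004

# Hypothetical usage-based rate card with no egress, request, or tiering fees.
usage_storage_per_gb = 0.011

legacy_bill = (
    STORED_GB * legacy_storage_per_gb
    + READ_GB * legacy_egress_per_gb
    + (READ_REQUESTS / 1_000) * legacy_per_1k_requests
)
usage_bill = STORED_GB * usage_storage_per_gb  # reads and requests incur no fee

print(f"legacy-style bill:  ${legacy_bill:,.2f}/month")
print(f"usage-based bill:   ${usage_bill:,.2f}/month")
```

The point of the sketch is structural rather than numerical: when reads dominate, egress and request fees, not stored bytes, drive the bill, which is why dropping them matters for training and inference workloads that repeatedly stream the same dataset.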

Morgan Fainberg, Principal Engineer at Replicate, highlighted the operational benefits: “With CoreWeave’s cross-cloud capabilities in CoreWeave AI Object Storage, we can rely on a single dataset to support models no matter where they’re deployed. This eliminates replication overhead, removes egress costs, and ensures our users always have high-performance access to the data they need to innovate.”

Fueling the CoreWeave AI Ecosystem

The storage announcement is the latest development in CoreWeave’s strategic expansion of its software ecosystem, following the recent launch of ServerlessRL, the first publicly available, fully managed reinforcement learning capability. CoreWeave remains dedicated to fostering an open AI ecosystem and consistently sets performance benchmarks, as demonstrated by its industry-leading MLPerf benchmark results and its Platinum rating in the SemiAnalysis ClusterMAX™ rating system.

The company continues to expand its capabilities through innovation, strategic investments via CoreWeave Ventures, and acquisitions such as OpenPipe (reinforcement learning), Weights & Biases (model iteration), and the pending acquisition of Monolith AI (machine learning for physics and engineering). The company also serves as the Official AI Cloud Computing Partner of the Aston Martin Aramco Formula One® Team.

Holger Mueller, Vice President and Principal Analyst at Constellation Research, noted the critical role of the underlying technology: “With cross-region and cross-cloud acceleration, CoreWeave is delivering what developers need most: consistent, high-throughput access to a single dataset without replication.” He continued, “Leveraging technologies like LOTA caching and InfiniBand networking, CoreWeave AI Object Storage ensures GPUs remain efficiently utilized across distributed environments, a critical capability for scaling next-generation AI workloads.”

Juchen Jiang, CEO, Tensormesh, commented on the performance: “While benchmarking LMCache with Cohere to store large volumes of KV-cache across a distributed cluster, we were truly impressed by the performance of LOTA, the technology behind CoreWeave AI Object Storage. Its speed and scalability are key to minimizing time-to-first-token (TTFT) and maximizing LLM throughput—regardless of context size.”

About CoreWeave

CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025.

  • Tags: AI Object Storage, CoreWeave, AI Workloads, Cloud Storage, AI Cloud