Datadog Launches GPU Monitoring to Optimize AI Spend and Performance

by GlobeNewswire | April 23, 2026

Datadog announced the general availability of GPU Monitoring to customers everywhere. The new product addresses one of the most prevalent issues facing organizations today: finding a scalable, effective way to manage expanding AI costs.

Quick Intel

  • GPU instances account for 14 percent of compute costs, per Datadog.

  • GPU Monitoring provides unified visibility across GPU fleet health, cost, and performance.

  • Helps teams scale AI without overspending through usage forecasting and capacity guidance.

  • Proactively identifies unhealthy GPUs before failures cause training delays.

  • Correlates stalled workloads directly to underlying GPUs for faster troubleshooting.

  • Now generally available to all Datadog customers.

The Challenge of Managing GPU Costs at Scale

“GPU instances account for 14 percent of compute costs—which is a huge issue as companies are struggling to build AI-first technology in scalable and smart ways. While these companies can see their costs climbing, they can’t chargeback GPU spend across business units, see workload context or identify clear next steps for improvement. As a result, it is very challenging to budget and plan in thoughtful ways,” said Yanbing Li, Chief Product Officer at Datadog. The launch of GPU Monitoring marks one of the first times a single solution has provided unified visibility across the AI stack: one view linking GPU fleet health, cost, and performance directly to the teams that rely on them, for faster troubleshooting of slow workloads and reduced spend.

Why Existing GPU Tools Fall Short

Today, most GPU tools provide high-level device health metrics, but they don’t surface cross-team resource contention, explain why training and inference workloads fail, or show which devices are idle or poorly utilized. This lack of visibility slows down investigations and pushes teams to overprovision as the safest default, leading to wasted spend. GPU Monitoring streamlines this work by linking fleet telemetry directly to the workloads consuming those resources, giving platform engineering and machine learning teams a shared view to investigate together.
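To make the idea of linking fleet telemetry to workloads concrete, here is a purely illustrative sketch in Python. The device names, pod names, and the idle threshold are all invented for the example; this is not Datadog's data model or API, only the shape of the join between per-device utilization and the workload scheduled on each device.

```python
# Toy telemetry: per-GPU utilization samples (percent) and a mapping of
# which pod/process is scheduled on each device. All names here are
# hypothetical; this only illustrates the correlation step.
gpu_samples = {
    "gpu-0": [96, 94, 98],   # busy device
    "gpu-1": [3, 0, 2],      # near-idle device
    "gpu-2": [0, 0, 0],      # idle device with no workload attached
}
gpu_to_workload = {
    "gpu-0": {"pod": "trainer-7b", "process": "torchrun"},
    "gpu-1": {"pod": "inference-canary", "process": "tritonserver"},
}

def correlate(samples, workloads, idle_pct=10):
    """Link each device's utilization signal to the workload consuming it."""
    report = []
    for gpu, utils in samples.items():
        avg = sum(utils) / len(utils)
        report.append({
            "gpu": gpu,
            "avg_util": avg,
            "status": "idle" if avg < idle_pct else "busy",
            "workload": workloads.get(gpu),  # None if nothing is scheduled
        })
    return report

for row in correlate(gpu_samples, gpu_to_workload):
    print(row)
```

With this join in hand, an investigation can start from either end: a stalled pod points at its device's metrics, and an idle device points at the team (or lack of one) holding it.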

Key Capabilities of GPU Monitoring

GPU Monitoring enables teams to:

  • Scale AI without overspending: With visibility and forecasting based on fleet usage patterns, plus direct guidance on whether to buy new GPUs or free up existing ones, platform teams avoid expensive purchases and long procurement cycles, machine learning teams get capacity faster, and leadership gets better ROI with predictable spend.

  • Accelerate AI delivery: Stalled workloads are correlated directly to the underlying GPUs, pods and processes running them so that teams can troubleshoot performance bottlenecks in minutes instead of hours, allowing engineers to focus on shipping AI projects.

  • Avoid costly disruptions: Unhealthy GPUs are proactively identified before failures cascade across a cluster and cause training and inference delays.

  • Maximize ROI on GPU spend: Teams are empowered and accountable for their GPU utilization and costs, and can easily pinpoint where they are overprovisioning or underutilizing their GPUs. This allows teams to reclaim and reallocate resources in order to reduce wasted spend.
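The "reclaim before you buy" logic described above can be sketched as a few lines of Python. The thresholds, team names, and utilization figures are invented for illustration; this is not Datadog's algorithm, only a minimal model of counting underutilized devices and netting them against a capacity request.

```python
# Illustrative fleet data: average GPU utilization (percent) per device,
# grouped by owning team. All values are made up for the example.
fleet_utilization = {
    "team-a": [92, 88, 95, 90],
    "team-b": [12, 8, 15, 5],
    "team-c": [55, 60, 48, 52],
}

def reclaimable(fleet, low_pct=20):
    """Count devices per team whose average utilization is below low_pct."""
    return {team: sum(1 for u in utils if u < low_pct)
            for team, utils in fleet.items()}

def capacity_guidance(fleet, requested, low_pct=20):
    """Naive guidance: free idle devices first, buy only the remainder."""
    free = sum(reclaimable(fleet, low_pct).values())
    return {"reclaim": min(free, requested),
            "buy": max(0, requested - free)}

print(reclaimable(fleet_utilization))
print(capacity_guidance(fleet_utilization, requested=6))
```

In this toy fleet, one team is sitting on four near-idle devices, so a request for six new GPUs nets out to reclaiming four and buying only two, which is the kind of chargeback-and-reallocate decision the product is aimed at.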

Customer Impact

“Smartly managing AI spend becomes a board-level conversation when capacity is misallocated, training and inference workloads stall, and costs escalate. We all know managing GPU costs is a huge problem we need to solve, but most companies are experimenting with solutions and it is still very difficult to get a single view of what is happening across the stack. GPU Monitoring fixes that with efficiency and reliability that we haven’t seen before,” said Li.

Kai Huang, Head of Product at Hyperbolic, added: “Datadog GPU Monitoring has made it easy for us to stay on top of our multi-tenant GPU infrastructure. We get per-instance, per-device visibility into core utilization, memory, power and thermals right out of the box with no extra setup. The dashboards are rich out of the gate and simple to customize, and standing up isolated views per customer takes minutes. Layering on LLM Observability ties it all together. We can go from a model latency spike straight to the underlying GPU metrics without switching tools. Full stack AI observability in one platform means both our team and our customers can move faster with confidence.”

About Datadog

Datadog is the leading AI-powered observability and security platform. Our SaaS platform integrates and automates infrastructure monitoring, application performance monitoring, log management, user experience monitoring, cloud security and many other capabilities to provide unified, real-time observability and security for our customers' entire technology stack. Datadog is used by organizations of all sizes and across a wide range of industries to enable digital transformation and cloud migration, drive collaboration among development, operations, security and business teams, accelerate time to market for applications, reduce time to problem resolution, secure applications and infrastructure, understand user behavior and track key business metrics.

Tags: GPU Monitoring, AI Observability, AI Infrastructure