DigitalOcean Boosts Workato AI with 67% Lower Costs

  • March 4, 2026

DigitalOcean, the Agentic Inference Cloud built for production AI, has announced that Workato’s AI Research Lab is leveraging its inference-optimized platform, accelerated by NVIDIA Hopper GPUs, to advance next-generation enterprise AI agents. The move delivers substantial improvements in performance, cost efficiency, and deployment speed.

Quick Intel

  • Workato’s AI Research Lab achieves 67% lower inference costs ($0.77 per 1M tokens), 67% higher throughput (13,561 tokens per second per GPU), and 77% faster time-to-first-token (1,455 ms at high load) using DigitalOcean’s NVIDIA Hopper-powered infrastructure.
  • Time-to-value accelerates from weeks to days, representing more than 2X faster iteration for frontier models like Llama-3.3-70B.
  • DigitalOcean provided rapid access to high-performance GPUs and collaborated on distributed inference architecture using NVIDIA Dynamo for intelligent workload routing across GPU clusters.
  • The optimized setup eliminates redundant computation in long-context workloads, improving responsiveness and reducing costs under high concurrency.
  • Workato’s ONE platform, handling over 1 trillion tasks since 2013 across 14,000+ applications, now scales agentic AI that reasons, acts, and orchestrates enterprise workflows.
  • DigitalOcean’s managed Kubernetes (DOKS) abstracts complexity, enabling Workato’s lean team to focus on research and product development rather than infrastructure management.
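For readers who want to sanity-check the headline figures, the quoted percentages imply the pre-migration baselines. The sketch below is a back-of-envelope calculation only, and assumes the "67% lower" and "67% higher" figures are relative to Workato's prior cost and throughput (the release does not state the baselines directly):

```python
# Implied baselines from the quoted figures (assumption: percentages
# are relative to Workato's pre-migration numbers).

optimized_cost = 0.77        # USD per 1M tokens on DigitalOcean
cost_reduction = 0.67        # "67% lower inference costs"
implied_prior_cost = optimized_cost / (1 - cost_reduction)
print(f"Implied prior cost: ~${implied_prior_cost:.2f} per 1M tokens")

optimized_throughput = 13_561  # tokens/sec per GPU
throughput_gain = 0.67         # "67% higher throughput"
implied_prior_throughput = optimized_throughput / (1 + throughput_gain)
print(f"Implied prior throughput: ~{implied_prior_throughput:,.0f} tokens/sec per GPU")
```

Under those assumptions, the prior setup would have cost roughly $2.33 per 1M tokens at roughly 8,100 tokens per second per GPU.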

Breakthrough Performance for Enterprise AI Agents

Workato, a leader in agentic enterprise automation, required robust infrastructure to support distributed training and reasoning-heavy inference at production scale. After migrating workloads to DigitalOcean, the AI Research Lab immediately realized significant gains across key metrics for frontier models.

The vertically integrated, inference-optimized architecture—powered by NVIDIA Hopper GPUs—delivers consistent high throughput and low latency even under heavy load. Collaboration with DigitalOcean’s solutions architects included tuning a distributed inference setup on DigitalOcean Kubernetes (DOKS) and deploying NVIDIA Dynamo to route requests intelligently across interconnected GPU clusters.

“Before DigitalOcean, we didn’t have a dedicated solution for in-house training and multi-node serving, and that was a major blocker for AI research,” said Oscar Wu, AI Research Scientist at Workato. “DigitalOcean was the fastest provider to get us up and running, enabling us to advance our AI programs. The collaboration on performance optimization, coupled with the support from the DigitalOcean team of solutions architects, accelerated our progress by roughly two to three times.”

Cost Efficiency and Operational Simplicity

Inference economics are critical for scaling production AI, directly affecting margins and predictability. By optimizing orchestration and eliminating redundant processing, Workato reduced inference costs by 67% while gaining a 33% hardware price-performance advantage.

“DigitalOcean lets us focus on research and advancing our models instead of managing infrastructure,” said Kevin Huang, Infrastructure Engineer at Workato AI Labs. “We can provision GPUs quickly, deploy inference workloads in production, and iterate on real customer traffic without getting bogged down in platform complexity. The speed and performance have been critical to maintaining our momentum.”

DigitalOcean’s managed environment abstracts GPU scheduling and control-plane complexity, allowing small AI teams to prioritize innovation over operations.

“As AI adoption accelerates, inference at scale is becoming the defining challenge for the industry,” said Dave Salvator, Director of Accelerated Computing Solutions at NVIDIA. “The integration of the NVIDIA accelerated computing platform with DigitalOcean’s inference-optimized platform unlocks the full potential of production-scale AI. The significant performance gains achieved by Workato highlight the impact of this collaboration.”

“Workato is pushing the frontier of agentic enterprise software,” said Paddy Srinivasan, CEO of DigitalOcean. “As AI companies move from experimentation into production, the winners will be those who can iterate quickly on real customer workloads. We’re proud to support Workato’s momentum by providing an inference-optimized environment that lets their team focus on shipping — not managing infrastructure.”

DigitalOcean is building the Agentic Inference Cloud for production AI, partnering with ambitious AI-native companies to deliver reliable, scalable inference with predictable economics.

About DigitalOcean

DigitalOcean is the Agentic Inference Cloud built for AI-native and digital-native enterprises scaling production workloads. The platform combines production-ready GPU infrastructure with a full-stack cloud to deliver operational simplicity and predictable economics at scale. By integrating inference capabilities with core cloud services, DigitalOcean’s Agentic Inference Cloud enables customers to expand as they grow — driving durable, compounding usage over time. More than 640,000 customers trust DigitalOcean to power their cloud and AI infrastructure.

  • Agentic AI
  • Generative AI
  • Cloud AI
  • Enterprise AI