DigitalOcean, the Agentic Inference Cloud built for production AI, has announced that Workato’s AI Research Lab is leveraging its inference-optimized platform, accelerated by NVIDIA Hopper GPUs, to advance next-generation enterprise AI agents. The move delivers substantial improvements in performance, cost efficiency, and deployment speed.
Workato, a leader in agentic enterprise automation, required robust infrastructure to support distributed training and reasoning-heavy inference at production scale. After migrating workloads to DigitalOcean, the AI Research Lab quickly realized significant gains across key performance metrics for its frontier-model workloads.
The vertically integrated, inference-optimized architecture—powered by NVIDIA Hopper GPUs—delivers consistent high throughput and low latency even under heavy load. Collaboration with DigitalOcean’s solutions architects included tuning a distributed inference setup on DigitalOcean Kubernetes (DOKS) and deploying NVIDIA Dynamo to route requests intelligently across interconnected GPU clusters.
“Before DigitalOcean, we didn’t have a dedicated solution for in-house training and multi-node serving, and that was a major blocker for AI research,” said Oscar Wu, AI Research Scientist at Workato. “DigitalOcean was the fastest provider to get us up and running, enabling us to advance our AI programs. The collaboration on performance optimization, coupled with support from DigitalOcean’s team of solutions architects, accelerated our progress by roughly two to three times.”
Inference economics are critical for scaling production AI, directly affecting margins and predictability. By optimizing orchestration and eliminating redundant processing, Workato reduced inference costs by 67% while gaining a 33% hardware price-performance advantage.
“DigitalOcean lets us focus on research and advancing our models instead of managing infrastructure,” said Kevin Huang, Infrastructure Engineer at Workato AI Labs. “We can provision GPUs quickly, deploy inference workloads in production, and iterate on real customer traffic without getting bogged down in platform complexity. The speed and performance have been critical to maintaining our momentum.”
DigitalOcean’s managed environment abstracts GPU scheduling and control-plane complexity, allowing small AI teams to prioritize innovation over operations.
“As AI adoption accelerates, inference at scale is becoming the defining challenge for the industry,” said Dave Salvator, Director of Accelerated Computing Solutions at NVIDIA. “The integration of the NVIDIA accelerated computing platform with DigitalOcean’s inference-optimized platform unlocks the full potential of production-scale AI. The significant performance gains achieved by Workato highlight the impact of this collaboration.”
“Workato is pushing the frontier of agentic enterprise software,” said Paddy Srinivasan, CEO of DigitalOcean. “As AI companies move from experimentation into production, the winners will be those who can iterate quickly on real customer workloads. We’re proud to support Workato’s momentum by providing an inference-optimized environment that lets their team focus on shipping — not managing infrastructure.”
DigitalOcean is building the Agentic Inference Cloud for production AI, partnering with ambitious AI-native companies to deliver reliable, scalable inference with predictable economics.
About DigitalOcean
DigitalOcean is the Agentic Inference Cloud built for AI-native and digital-native enterprises scaling production workloads. The platform combines production-ready GPU infrastructure with a full-stack cloud to deliver operational simplicity and predictable economics at scale. By integrating inference capabilities with core cloud services, DigitalOcean’s Agentic Inference Cloud enables customers to expand as they grow, driving durable, compounding usage over time. More than 640,000 customers trust DigitalOcean to power their cloud and AI infrastructure.