DigitalOcean Gradient™ AI Agentic Cloud
Validated on 31 Jan 2025 • Last edited on 15 Aug 2025
Build AI agents on GPU-powered infrastructure using foundation models and resources such as knowledge bases and agent routes.
Build, train, and deploy AI agents with the DigitalOcean Gradient™ AI Agentic Cloud:
GPU Droplets are virtual machines with GPUs, available in single-GPU or 8-GPU configurations. We provide an AI/ML-ready image and 1-Click Models so you can get started without manual setup.
DigitalOcean Gradient™ AI Bare Metal GPUs are dedicated, single-tenant servers with eight GPUs per machine. They can run as standalone servers or as part of multi-node clusters.
1-Click Models let you deploy third-party generative AI models on DigitalOcean Gradient™ AI GPU Droplets with no additional setup or configuration.
Paperspace is a cloud-based machine learning platform that offers GPU-powered virtual machines and a Kubernetes-based container service.
Bare metal GPUs and GPU Droplets both provide GPU-based compute resources tailored to AI/ML workloads, but each is suited to different use cases. Learn more about the difference between bare metal GPUs and GPU Droplets.
Latest Updates
5 February 2026
- We have enabled trace storage by default for both newly created and existing agents.
- The following Anthropic and OpenAI models are now available on DigitalOcean Gradient™ AI Platform for serverless inference, Agent Development Kit, and creating agents. For more information, see the Available Models page.
- End users and agent developers can now provide feedback on the quality and helpfulness of agent responses. Feedback is collected through the chatbot interface, the agent playground, and log stream traces, and is stored in the traces. For more information, see Provide Agent Feedback and View Conversation Logs, Traces, and Insights.
30 January 2026
- The following OpenAI models are now available on DigitalOcean Gradient™ AI Platform for serverless inference, Agent Development Kit, and creating agents. For more information, see the Available Models page.
- We now support prompt caching for the following Anthropic models:
- Claude Sonnet 4.5
- Claude Sonnet 4
- Claude Opus 4.5
- Claude Opus 4.1
- Claude Opus 4
Using prompt caching with serverless inference chat completions significantly reduces inference costs, as in the sketch below.
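The following is a minimal sketch of a serverless inference chat completion request made through the OpenAI-compatible Python SDK. The base URL (`https://inference.do-ai.run/v1`), the `GRADIENT_MODEL_ACCESS_KEY` environment variable, and the `anthropic-claude-sonnet-4.5` model slug are illustrative assumptions, not confirmed values; check the Available Models page and your model access keys for the exact names. The release note doesn't specify whether caching requires explicit opt-in, so this example simply reuses a large, stable system prompt, the kind of repeated prefix that prompt caching is designed to discount.

```python
# Minimal sketch: chat completion against Gradient AI serverless inference
# using the OpenAI-compatible Python SDK.
# ASSUMPTIONS: the base URL, environment variable name, and model slug are
# illustrative only; verify them against the Available Models page and your
# model access keys.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://inference.do-ai.run/v1",        # assumed inference endpoint
    api_key=os.environ["GRADIENT_MODEL_ACCESS_KEY"],  # hypothetical key variable
)

# A long system prompt reused across many requests is the kind of shared
# prefix that prompt caching can serve at reduced cost on supported models.
SYSTEM_PROMPT = (
    "You are a support agent for an online store. Answer questions about "
    "orders, shipping, and returns using the policies provided below.\n"
    "...long, stable policy text shared by every request..."
)

response = client.chat.completions.create(
    model="anthropic-claude-sonnet-4.5",  # assumed slug for Claude Sonnet 4.5
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What is the status of order 1234?"},
    ],
)

print(response.choices[0].message.content)
```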
18 December 2025
- The following models are now available on DigitalOcean Gradient™ AI Platform for serverless inference and Agent Development Kit. For more information, see the Available Models page.
For more information, see the full release notes.