ZÜRICH, Switzerland, March 12, 2026 (EZ Newswire) -- Fluence, a decentralized compute platform, has opened public access to Fluence Console, a self-serve portal that lets developers provision GPUs across more than 70 independent data centers in over 30 regions. The move targets a growing gap in the market as AI teams face persistent capacity constraints, volatile pricing, and lengthy procurement cycles when scaling training and always-on inference.
AI infrastructure demand is changing in two important ways. Training runs continue to grow in scale, while inference is becoming a permanent workload that requires stable, predictable compute. That shift is pushing more organizations to look beyond traditional hyperscale supply models, where capacity is often constrained by region, pricing can vary sharply by configuration and commitment terms, and access to top-tier GPUs frequently requires longer-term contracts.
Fluence’s approach aggregates certified enterprise-grade capacity from Tier III and Tier IV facilities and exposes it through a single control plane. In practice, Fluence sources GPU inventory from multiple providers and makes it available through one console, with region-specific availability and pricing visible before deployment. Fluence says this model increases transparency and price competition by letting customers choose where to run workloads based on cost, latency, and locality.
Inventory and Deployment Options
Fluence Console currently lists more than 1,400 GPUs across 32 regions and 71 data centers, spanning popular profiles such as RTX 4090, A100 80GB, H100 80GB, H200, and L40S, depending on region and availability.
Users can deploy GPU Containers, Virtual Machines, or Bare Metal and launch infrastructure in seconds. The goal is to make it straightforward to go from selection to a running environment without bespoke vendor negotiations or one-off provisioning flows.
Example starting rates on Fluence include:
- H200 from $2.96/hr
- H100 80GB from $1.24/hr
- A100 80GB from $1.22/hr
- RTX 4090 from $0.48/hr
(Prices vary by region, configuration, and current availability.)
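As a rough illustration only, the listed starting rates can be converted to approximate monthly costs for an always-on workload. The sketch below assumes continuous usage at about 730 hours per month; actual spend depends on region, configuration, and availability, as noted above.

```python
# Rough monthly cost estimate from the listed starting hourly rates.
# Assumes continuous (always-on) usage of ~730 hours/month; actual
# billing varies by region, configuration, and current availability.
HOURS_PER_MONTH = 730  # ~24 * 365 / 12

starting_rates = {  # USD per hour, from the list above
    "H200": 2.96,
    "H100 80GB": 1.24,
    "A100 80GB": 1.22,
    "RTX 4090": 0.48,
}

for gpu, rate in starting_rates.items():
    monthly = rate * HOURS_PER_MONTH
    print(f"{gpu}: ~${monthly:,.0f}/month at ${rate:.2f}/hr")
```

At these starting rates, for example, an always-on H100 80GB works out to roughly $905 per month.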
Transparent Pricing and Predictable Spend