Pricing Structure
We keep pricing simple and competitive:
From $0.99/hr for a single GPU, scaling up to $17/hr for a full 8x V100 node (64 vCPU, 512 GB RAM, 1.9 TB NVMe). For larger workloads, we can provide custom quotes up to the scale of the entire cluster (~30 PFLOPS).
Designed to feel as simple as Lightsail, but backed by supercomputing-class resources. Dedicated instances start at $2.20/hr and shared tiers at $0.62/hr.
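As a rough guide to monthly spend, here is a minimal sketch of the math under the hourly rates above (a ~730-hour month and continuous usage are assumptions; actual bills depend on real usage and any custom quote):

```python
# Rough monthly estimates from the listed hourly rates.
# Assumes ~730 billable hours per month (24 h * ~30.4 days) of continuous use.
HOURS_PER_MONTH = 730

tiers = {
    "shared GPU (from)":    0.62,
    "single GPU (from)":    0.99,
    "dedicated GPU (from)": 2.20,
    "8x V100 node":         17.00,
}

for name, hourly in tiers.items():
    print(f"{name:22s} ${hourly:>6.2f}/hr  ->  ~${hourly * HOURS_PER_MONTH:,.0f}/mo")
```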
Charg Cloud’s dense, multi-petabyte Ceph cluster is built on a high-speed, distributed architecture for HPC workloads, delivering low-latency, high-throughput storage at scale.
Storage Pricing
- Local NVMe (included): per‑instance allocation as listed in each tier, at no extra charge
- Block Storage: $0.04/GB‑mo for SSD‑backed volumes with low‑latency mounts
- Object Storage (S3‑compatible): $0.015/GB‑mo for durable buckets; intra‑region requests included (see the sketch after this list)
- Snapshots: $0.012/GB‑mo for point‑in‑time copies of block volumes
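Since the object tier is S3‑compatible, standard S3 tooling should work once pointed at the cluster endpoint. A minimal sketch using boto3; the endpoint URL, credentials, bucket name, and object keys are placeholders, not real service addresses:

```python
import boto3

# Any S3-compatible client can target the cluster's object-storage endpoint.
# The endpoint URL, credentials, and bucket name below are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example-cluster.net",   # placeholder endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload a checkpoint into a durable bucket (billed at $0.015/GB-mo as listed above).
s3.upload_file("checkpoint.pt", "my-bucket", "runs/exp-01/checkpoint.pt")

# List what is stored under the run prefix.
for obj in s3.list_objects_v2(Bucket="my-bucket", Prefix="runs/exp-01/").get("Contents", []):
    print(obj["Key"], obj["Size"])
```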
Ingress is always free. Public egress: the first 10 TB is free, then $0.008/GB up to 50 TB. Intra-cluster traffic is unmetered. Private cross‑connect is available in Dallas (pass‑through facility fees).
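To make the egress tiering concrete, a minimal sketch of the math under the rates above (decimal TB, flat per‑GB billing, and custom quoting beyond 50 TB are assumptions):

```python
def egress_cost_usd(egress_gb: float) -> float:
    """Estimate public egress cost: first 10 TB free, then $0.008/GB up to 50 TB.
    Traffic beyond 50 TB is assumed to fall under a custom quote and is not modeled."""
    FREE_GB = 10_000   # 10 TB free tier (decimal TB assumed)
    CAP_GB = 50_000    # metered rate listed up to 50 TB
    billable_gb = max(0.0, min(egress_gb, CAP_GB) - FREE_GB)
    return billable_gb * 0.008

print(egress_cost_usd(25_000))  # 25 TB out -> 15 TB billable -> 120.0
```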
White-glove onboarding and research-ops support are available, but early adopters receive broad access at no additional cost. Early users also get the opportunity to help shape the product and direction of the cloud while benefiting from SRE-level support and community-driven improvements.
Supported Workloads
- LLM Training & Inference: PyTorch + DeepSpeed/Megatron, HF Transformers, token servers; NCCL tuned; one‑click multi‑node (see the first sketch after this list)
- CFD & FEA: OpenFOAM (OSS) and BYOL for commercial solvers; MPI/UCX presets; parallel post‑processing
- Genomics & Life Sciences: Nextflow, GATK, Hail; object‑store bindings; provenance & snapshot recipes
- RAPIDS/Spark Analytics: GPU‑accelerated ETL & graph; prewired shuffle/storage mounts (see the second sketch after this list)
- Custom Stack Porting: we package your environment (images, schedulers, data paths); white‑glove enablement from $2,500
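For the LLM tier, a multi‑node job on the platform should reduce to standard torch.distributed + NCCL setup. A minimal sketch, assuming the environment variables that torchrun exports (RANK, WORLD_SIZE, MASTER_ADDR, MASTER_PORT) are set on each worker; the script name and node counts are placeholders:

```python
import os
import torch
import torch.distributed as dist

def init_distributed() -> None:
    # NCCL backend for GPU collectives; rendezvous details come from the
    # torchrun-style environment variables assumed to be set on every worker.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    torch.cuda.set_device(local_rank)   # pin each process to one GPU

if __name__ == "__main__":
    init_distributed()
    # From here, wrap the model in DistributedDataParallel or hand it to
    # DeepSpeed/Megatron as usual. Example launch (placeholders):
    #   torchrun --nnodes=2 --nproc-per-node=8 train.py
    print(f"rank {dist.get_rank()} of {dist.get_world_size()} ready")
```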
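For the RAPIDS/Spark tier, GPU‑accelerated ETL with cuDF is largely drop‑in for pandas‑style code. A minimal sketch; the mount path and column names are placeholders that assume data is exposed via the prewired storage mounts:

```python
import cudf  # RAPIDS GPU DataFrame library

# Read a Parquet file from a prewired storage mount (placeholder path) and
# run a simple on-GPU aggregation; column names are placeholders.
df = cudf.read_parquet("/mnt/data/events.parquet")
daily_tokens = df.groupby("day")["tokens"].sum().reset_index()
print(daily_tokens.head())
```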