Cloud SQL pricing: instance-hours, storage, backups, and network (practical estimate)

Cloud SQL cost is easiest to estimate when you treat it as "capacity + data": instance-hours for compute, GB-month for storage and backups, and a separate transfer line item when clients are cross-region or external.

0) What to measure (inputs you can actually validate)

  • Instance-hours: primary + HA + replicas (baseline and peak months).
  • Storage GB-month: average data + index size across the month.
  • Backups/PITR: backup GB-month and retention settings.
  • Transfer: outbound GB/month by destination (internet vs cross-region).
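The four inputs above can be captured as a single structure before any pricing math. This is a minimal sketch; every field name below is illustrative, not a Cloud SQL API or billing field:

```python
from dataclasses import dataclass

@dataclass
class CloudSqlEstimateInputs:
    """Hypothetical container for the four measurable cost drivers."""
    instance_hours: dict    # provisioned hours per instance role, e.g. {"primary": 730}
    storage_gb_month: float # average data + index size across the month
    backup_gb_month: float  # backup storage, driven by retention settings
    egress_gb: dict         # outbound GB/month keyed by destination

# Example: a primary with an HA standby, modest storage, week-long backups,
# and a mix of internet and cross-region clients.
inputs = CloudSqlEstimateInputs(
    instance_hours={"primary": 730, "ha_standby": 730},
    storage_gb_month=120.0,
    backup_gb_month=300.0,
    egress_gb={"internet": 50.0, "cross_region": 200.0},
)
```

Writing the inputs down first keeps the later line items honest: anything you cannot fill in here is something you cannot validate on the bill.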

1) Compute: instance-hours (primary, HA, replicas)

Model each instance separately and keep baseline vs peak months distinct. If you run HA or read replicas, count them explicitly because they are provisioned capacity.

Tool: Compute instance cost.

  • Baseline: steady traffic, normal query mix.
  • Peak: batch jobs, migrations, index builds, incident retries.
  • Headroom: avoid resizing churn by budgeting realistic peak capacity.
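The counting rule above is simple enough to sketch directly; the 730-hour month is the usual averaging convention, and the instance roles are assumptions for illustration:

```python
HOURS_PER_MONTH = 730  # common billing average (8,760 hours / 12 months)

def instance_hours(primary=1, ha_standby=0, replicas=0, hours=HOURS_PER_MONTH):
    """Total provisioned instance-hours per month.

    HA standbys and read replicas are provisioned capacity, so each
    counts as a full instance even when it serves little traffic."""
    return (primary + ha_standby + replicas) * hours

# One primary, one HA standby, two read replicas: 4 x 730 = 2,920 hours.
baseline = instance_hours(primary=1, ha_standby=1, replicas=2)
```

Run the same function twice, once with baseline topology and once with the peak-month topology (extra replica for a migration, say), and keep both numbers in the estimate.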

2) Storage: average GB-month (data + indexes)

Use the average GB-month across the billing period rather than the end-of-month size. For a growing dataset, the end-of-month snapshot overstates the billable average; for a shrinking one, it understates it.

Tool: Database storage growth.

  • Track index overhead separately if possible; it is rarely zero.
  • Include staging/non-prod copies if they persist long-term.
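A quick way to see why the average matters: take periodic size samples and compare the mean to the final snapshot. The weekly numbers below are made up for illustration:

```python
def avg_gb_month(samples):
    """Average of periodic storage-size samples (GB) across the month."""
    return sum(samples) / len(samples)

# A dataset growing 10 GB/week: the end-of-month size (130 GB)
# overstates the billable average (115 GB) by 13%.
weekly_gb = [100, 110, 120, 130]
avg = avg_gb_month(weekly_gb)
```

Daily samples give a tighter average than weekly ones; the point is to sample at all instead of reading one number at month end.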

3) Backups and retention (quiet multiplier)

Backup retention is a multiplier: the longer you keep backups, the more backup GB-month accumulates even without traffic growth. Treat backup retention as its own control knob.

  • Write down retention windows (days/months) and any compliance requirements.
  • Plan restore validation: a real drill often reveals that "restore" is not just a button.
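The multiplier effect is easiest to see with a crude upper-bound model: assume each retained backup occupies its full size. Real backups are often incremental or deduplicated, so treat this as a ceiling, not a prediction:

```python
def backup_gb_month(daily_backup_gb, retention_days):
    """Upper-bound backup GB-month: all retained daily backups held at once.

    Ignores incremental/deduplicated storage, so the real bill is
    usually lower; the retention-driven growth shape is the point."""
    return daily_backup_gb * retention_days

# Same database, same traffic: retention alone moves the line item.
short_retention = backup_gb_month(daily_backup_gb=50, retention_days=7)   # 350 GB-month
long_retention = backup_gb_month(daily_backup_gb=50, retention_days=30)   # 1,500 GB-month
```

Going from 7-day to 30-day retention here quadruples the backup line item with zero change in workload, which is why retention deserves its own knob in the estimate.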

4) Network transfer (cross-region and external clients)

Network cost shows up when clients are outside the region or when analytics jobs pull large result sets. Split transfer by destination and validate whether traffic is private or public.

Tool: Egress cost.
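Splitting transfer by destination can be sketched as a small lookup; the per-GB rates below are placeholder assumptions, not published Cloud SQL prices, and the destination names are illustrative:

```python
def egress_cost_by_destination(transfers_gb, rates_per_gb):
    """Billable transfer cost per destination.

    transfers_gb: outbound GB/month keyed by destination.
    rates_per_gb: illustrative $/GB per destination; unknown
    destinations default to 0 so they surface as a free line."""
    return {dest: gb * rates_per_gb.get(dest, 0.0)
            for dest, gb in transfers_gb.items()}

cost = egress_cost_by_destination(
    {"same_region_private": 500, "cross_region": 200, "internet": 50},
    {"same_region_private": 0.0, "cross_region": 0.02, "internet": 0.12},
)
# Most of the volume (same-region private) is free here; the small
# internet slice costs more than the much larger cross-region one.
```

The split matters because the cheapest fix is usually a path change (private, same-region) rather than a volume change.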

Worked estimate template (copy/paste)

  • Instance-hours = primary + HA + replicas (baseline + peak)
  • Primary storage = avg GB-month (data + indexes)
  • Backup storage = backup GB-month (retention window)
  • Egress = outbound GB/month (split by destination)
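The four template lines above can be wired together as one function. This is a minimal sketch: every rate is a placeholder assumption, so substitute current Cloud SQL pricing for your region and edition before trusting the totals:

```python
def monthly_estimate(instance_hours, hour_rate,
                     storage_gb_month, storage_rate,
                     backup_gb_month, backup_rate,
                     egress_gb, egress_rate):
    """Combine the four line items into a monthly estimate.

    All *_rate arguments are illustrative $/unit placeholders,
    not published Cloud SQL prices."""
    return {
        "compute": instance_hours * hour_rate,
        "storage": storage_gb_month * storage_rate,
        "backups": backup_gb_month * backup_rate,
        "egress":  egress_gb * egress_rate,
    }

# Example: 2,920 instance-hours (primary + HA + 2 replicas), 120 GB-month
# storage, 350 GB-month backups, 250 GB outbound. Rates are invented.
est = monthly_estimate(2920, 0.10, 120, 0.17, 350, 0.08, 250, 0.05)
total = sum(est.values())
```

Keeping the result as a per-line-item dict (rather than a single number) makes it obvious which driver to attack first when the total is too high.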

Common pitfalls

  • Forgetting HA/replicas (an HA standby alone doubles provisioned capacity before any traffic increase).
  • Using end-of-month storage size instead of average GB-month (misstates cost for growing datasets).
  • Long backup retention and PITR storage growing silently.
  • Cross-region clients or large query exports creating surprise egress.

How to validate

  • Validate peak CPU/memory/IO and the heaviest query patterns.
  • Validate storage growth rate and index overhead.
  • Validate backup retention and any PITR settings.
  • Validate network paths (private vs public/cross-region) and billable transfer boundaries.

Related guides

Azure SQL Database pricing: a practical estimate (compute, storage, backups, transfer)
Model Azure SQL Database cost without memorizing price tables: compute baseline (vCore/DTU), storage GB-month + growth, backup retention, and network transfer. Includes a validation checklist and common sizing traps.
Cloud Spanner cost estimation: capacity, storage, backups, and multi-region traffic
Estimate Spanner cost using measurable drivers: provisioned capacity (baseline + peak), stored GB-month (data + indexes), backups/retention, and multi-region/network patterns. Includes a worked template, common pitfalls, and validation steps.
Artifact Registry pricing (GCP): storage + downloads + egress (practical estimate)
A practical Artifact Registry cost model: stored GB-month baseline, download volume from CI/CD and cluster churn, and outbound transfer. Includes a workflow to estimate GB-month from retention and validate layer sharing and peak pull storms.
Bigtable cost estimation: nodes, storage growth, and transfer (practical model)
A driver-based Bigtable estimate: provisioned capacity (node-hours), stored GB-month + growth, and network transfer. Includes validation steps for hotspots, compactions, and peak throughput that force over-provisioning.
Database costs explained: compute, storage growth, backups, and network
A practical framework to estimate managed database bills: baseline compute, storage GB-month growth, backups/snapshots, and the network patterns that cause surprises.
Aurora pricing (what to include): compute, storage, I/O, and backups
A practical checklist for estimating Aurora costs: instance hours (or ACUs), storage growth, I/O-heavy workloads, backups/retention, and the line items that commonly surprise budgets.

FAQ

What usually drives managed database cost?
Provisioned compute capacity and primary storage are the dominant drivers. Backups/retention and network transfer become meaningful with long retention windows or cross-region access patterns.
How do I estimate quickly?
Estimate instance-hours (including HA/replicas), average storage GB-month, and backup retention. Add a separate line item for outbound transfer if clients are outside the region.
What is the most common mistake?
Sizing from average utilization and forgetting HA/replicas and retention. The bill follows provisioned capacity, not average CPU.
How do I validate?
Validate peak CPU/memory/IO, validate storage growth rate, validate backup/PITR retention, and validate network paths (private vs public/cross-region).

Last updated: 2026-01-27