Cloud SQL pricing: instance-hours, storage, backups, and network (practical estimate)
Cloud SQL cost is easiest to estimate when you treat it as "capacity + data": instance-hours for compute, GB-month for storage and backups, and a separate transfer line item when clients are cross-region or external.
0) What to measure (inputs you can actually validate)
- Instance-hours: primary + HA + replicas (baseline and peak months).
- Storage GB-month: average data + index size across the month.
- Backups/PITR: backup GB-month and retention settings.
- Transfer: outbound GB/month by destination (internet vs cross-region).
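The four inputs above can be collected in one place before any math happens. This is a minimal sketch; the container name and field names are my own, not Cloud SQL API fields.

```python
from dataclasses import dataclass, field

@dataclass
class CloudSqlInputs:
    """Hypothetical container for the four estimate inputs.
    Names are illustrative, not Cloud SQL billing fields."""
    instance_hours: float              # primary + HA + replicas, per month
    storage_gb_month: float            # average data + index size over the month
    backup_gb_month: float             # backup/PITR storage under retention
    egress_gb: dict = field(default_factory=dict)  # outbound GB by destination

inputs = CloudSqlInputs(
    instance_hours=2920.0,
    storage_gb_month=115.0,
    backup_gb_month=1435.0,
    egress_gb={"internet": 40.0, "cross_region": 10.0},
)
```

Keeping the inputs in one structure makes it obvious when a line item (e.g. a replica or a retention window) was never measured at all.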
1) Compute: instance-hours (primary, HA, replicas)
Model each instance separately and keep baseline vs peak months distinct. If you run HA or read replicas, count them explicitly because they are provisioned capacity.
Tool: Compute instance cost.
- Baseline: steady traffic, normal query mix.
- Peak: batch jobs, migrations, index builds, incident retries.
- Headroom: avoid resizing churn by budgeting realistic peak capacity.
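A sketch of the instance-hours count, under the assumption of a 730-hour month. The point is that HA standbys and read replicas are provisioned capacity, so they multiply hours directly:

```python
def instance_hours(primary: int, ha_standbys: int, replicas: int,
                   hours_in_month: float = 730.0) -> float:
    """Provisioned instance-hours per month. Every HA standby and read
    replica is paid-for capacity whether or not it serves traffic.
    730 h is an average month; use the actual month for peak estimates."""
    return (primary + ha_standbys + replicas) * hours_in_month

# 1 primary + 1 HA standby + 2 read replicas = 4 instances
hours = instance_hours(primary=1, ha_standbys=1, replicas=2)  # 2920.0
```

Run the same function once for the baseline month and once for the peak month (e.g. with a temporary extra replica for a migration) and keep both numbers.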
2) Storage: average GB-month (data + indexes)
Use average GB-month rather than end-of-month size: storage billing accrues over the month, so for growing datasets the end-of-month figure overstates cost, and for shrinking datasets it understates it.
Tool: Database storage growth.
- Track index overhead separately if possible; it is rarely zero.
- Include staging/non-prod copies if they persist long-term.
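One way to get the average is to sample total size (data + indexes) daily and take the mean. A minimal sketch, assuming you already export daily size samples in GB:

```python
def avg_gb_month(daily_samples_gb: list[float]) -> float:
    """Average GB-month from daily size samples (data + indexes).
    For a dataset growing linearly from 100 to 130 GB, the average
    (~115 GB) is the billable quantity, not the 130 GB month-end size."""
    if not daily_samples_gb:
        raise ValueError("need at least one sample")
    return sum(daily_samples_gb) / len(daily_samples_gb)

samples = [100.0, 110.0, 120.0, 130.0]  # weekly snapshots of a growing DB
avg = avg_gb_month(samples)  # 115.0
```

If you can split data and index sizes in the samples, keep two series; index overhead trends can diverge from data growth after schema changes.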
3) Backups and retention (quiet multiplier)
Backup retention is a multiplier: the longer you keep backups, the more backup GB-month accumulates even without traffic growth. Treat backup retention as its own control knob.
- Write down retention windows (days/months) and any compliance requirements.
- Plan restore validation: a real drill often reveals that "restore" is not just a button.
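The retention multiplier can be sketched as a simple upper bound: retained full backups plus the point-in-time-recovery log window. Real backups are often incremental and deduplicated, so treat this as a ceiling, not the bill; the function and its parameters are illustrative assumptions.

```python
def backup_gb_month(full_backup_gb: float, retained_backups: int,
                    pitr_log_gb_per_day: float, pitr_window_days: int) -> float:
    """Upper-bound backup footprint: every retained backup at full size,
    plus the write-ahead/transaction log kept for the PITR window.
    Incremental backups will come in below this."""
    return (full_backup_gb * retained_backups
            + pitr_log_gb_per_day * pitr_window_days)

# 200 GB database, 7 retained backups, ~5 GB/day of logs, 7-day PITR window
ceiling = backup_gb_month(200.0, 7, 5.0, 7)  # 1435.0 GB-month ceiling
```

Note how retention alone moves this number: doubling retained backups roughly doubles the first term with zero traffic growth, which is the "quiet multiplier" above.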
4) Network transfer (cross-region and external clients)
Network cost shows up when clients are outside the region or when analytics jobs pull large result sets. Split transfer by destination and validate whether traffic is private or public.
Tool: Egress cost.
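Splitting transfer by destination can be sketched as a per-destination rate table. The rates below are placeholders, not Cloud SQL prices; destinations missing from the table (e.g. same-region private traffic) are treated as free, which is the usual shape but must be verified for your network path.

```python
def egress_cost(gb_by_destination: dict[str, float],
                rate_per_gb: dict[str, float]) -> float:
    """Sum outbound GB by destination at assumed per-GB rates.
    Destinations absent from the rate table cost 0.0 here; confirm
    that assumption against your actual private/public network paths."""
    return sum(gb * rate_per_gb.get(dest, 0.0)
               for dest, gb in gb_by_destination.items())

traffic = {"internet": 100.0, "cross_region": 50.0, "same_region_private": 500.0}
rates = {"internet": 0.12, "cross_region": 0.02}  # placeholder $/GB
cost = egress_cost(traffic, rates)  # 13.0 with these placeholder rates
```

The large `same_region_private` volume contributing nothing is exactly the validation point: the split by destination matters more than the total GB.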
Worked estimate template (copy/paste)
- Instance-hours = primary + HA + replicas (baseline + peak)
- Primary storage = avg GB-month (data + indexes)
- Backup storage = backup GB-month (retention window)
- Egress = outbound GB/month (split by destination)
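The four template lines reduce to one sum. A minimal sketch; every rate below is a placeholder to substitute with current Cloud SQL pricing for your region, tier, and edition.

```python
def monthly_estimate(instance_hours: float, rate_per_hour: float,
                     storage_gb_month: float, storage_rate: float,
                     backup_gb_month: float, backup_rate: float,
                     egress_gb: float, egress_rate: float) -> float:
    """The four template lines as one number:
    compute + primary storage + backup storage + egress.
    All rates are placeholders, not published Cloud SQL prices."""
    return (instance_hours * rate_per_hour
            + storage_gb_month * storage_rate
            + backup_gb_month * backup_rate
            + egress_gb * egress_rate)

estimate = monthly_estimate(
    instance_hours=2920.0, rate_per_hour=0.10,   # placeholder $/hour
    storage_gb_month=115.0, storage_rate=0.17,   # placeholder $/GB-month
    backup_gb_month=1435.0, backup_rate=0.08,    # placeholder $/GB-month
    egress_gb=150.0, egress_rate=0.12,           # placeholder $/GB
)
```

Compute the baseline month and the peak month separately with the same function; the delta between the two is the headroom you are actually budgeting.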
Common pitfalls
- Forgetting HA/replicas (provisioned capacity can double or more before any traffic increase).
- Using end-of-month storage size instead of average GB-month (misstates cost for growing datasets).
- Long backup retention and PITR storage growing silently.
- Cross-region clients or large query exports creating surprise egress.
How to validate
- Validate peak CPU/memory/IO and the heaviest query patterns.
- Validate storage growth rate and index overhead.
- Validate backup retention and any PITR settings.
- Validate network paths (private vs public/cross-region) and billable transfer boundaries.