Cloud Logging pricing (GCP): ingestion, retention, and query scans
Logging costs are predictable when you treat logs like data: bytes in, bytes stored, bytes scanned. The biggest wins come
from controlling ingestion volume and retention, and from preventing dashboards from scanning wide windows repeatedly.
0) Inventory sources (do not blend)
List the sources that can dominate volume and model them separately.
- Ingress/access: very high volume, moderate size.
- Firewall/WAF: can be high volume and high size (rule metadata).
- Audit/security: lower volume but required retention.
- Application logs: depends on log level and request volume.
1) Ingestion (GB)
Ingestion is the core driver. If you have usage exports, use them. If not, estimate from event rate and payload size.
Tools: Log ingestion, Estimate GB/day guide.
- Model baseline + peak: incident traffic and verbose logging can create a peak month.
- Sample bytes/event for the top sources instead of using one blended average.
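As a minimal sketch, ingestion GB/day can be computed per source rather than from one blended average. The event rates and payload sizes below are illustrative assumptions, not measurements:

```python
# Illustrative per-source volumes (assumptions, not real measurements).
SOURCES = {
    # name: (events/day, avg bytes/event)
    "ingress_access": (500_000_000, 800),
    "firewall_waf":   (200_000_000, 1_500),
    "audit":          (5_000_000, 1_200),
    "application":    (50_000_000, 600),
}

def gb_per_day(sources):
    """Sum bytes/day for each source and convert to GB (1e9 bytes)."""
    return {name: events * size / 1e9 for name, (events, size) in sources.items()}

per_source = gb_per_day(SOURCES)
total = sum(per_source.values())
for name, gb in sorted(per_source.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} {gb:8.1f} GB/day ({gb / total:.0%})")
```

With these numbers, ingress and firewall logs dominate (roughly 400 and 300 of ~736 GB/day), which is exactly the skew a blended average would hide.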
2) Retention (GB-month)
Retention is a storage multiplier. In steady state, stored volume is roughly the last N days of logs, so retained GB is
approximately GB/day × retention days.
Tools: Retention storage, Tiered log storage.
- Keep a short hot window for troubleshooting and archive only what you must keep long-term.
- Split retention by log class (access vs audit vs app logs) instead of one global retention setting.
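The per-class split can be modeled directly: retained GB ≈ GB/day × retention days for each class. The retention windows and daily volumes below are hypothetical:

```python
# Hypothetical retention split by log class (days) and daily volume (GB/day).
RETENTION_DAYS = {"access": 7, "audit": 365, "app": 30}
GB_PER_DAY = {"access": 400.0, "audit": 6.0, "app": 30.0}

# Steady state: stored volume is roughly the last N days of each class.
retained_gb = {cls: GB_PER_DAY[cls] * days for cls, days in RETENTION_DAYS.items()}
print(retained_gb)
print(sum(retained_gb.values()))
```

Note how the low-volume audit class (6 GB/day) still retains more than the app class because of its 365-day window: retention days multiply small streams into large footprints.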
3) Query and scan (dashboards + alerts)
Query costs depend on how much data you scan and how often. Dashboards and alert rules can scan far more data than you
expect, especially with broad time windows and frequent refresh.
Tool: Log scan/search.
- A dashboard refreshing every minute is 1,440 refreshes/day.
- Prefer queries that filter early and use narrower time windows.
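A quick sketch of why refresh frequency matters (the 2 GB/refresh figure is an assumed scan size, not a measured one):

```python
# Monthly scan volume for a dashboard: refreshes/day x GB scanned per refresh.
def monthly_scan_gb(refresh_interval_min, gb_per_refresh, days=30.4):
    refreshes_per_day = 24 * 60 / refresh_interval_min
    return refreshes_per_day * gb_per_refresh * days

# A 1-minute refresh scanning an assumed 2 GB each time:
one_minute = monthly_scan_gb(1, 2.0)     # 1,440 refreshes/day
# Widening the interval to 15 minutes cuts scans 15x:
fifteen_minute = monthly_scan_gb(15, 2.0)
print(one_minute, fifteen_minute)
```

Changing only the refresh interval, with no change to the query itself, reduces monthly scan volume by the same factor.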
Worked estimate template (copy/paste)
- GB/day ingested = sum over sources(events/day × bytes/event ÷ 1e9)
- GB/month ingested = GB/day × 30.4 (baseline + peak)
- Retained GB ≈ GB/day × retention days (split by source)
- GB scanned/month = scans/day × GB/scan × 30.4 (dashboards + alerts)
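The template lines above can be combined into one function. The inputs in the usage example are illustrative assumptions; multiply the resulting GB figures by your provider's unit prices separately:

```python
def estimate(sources, retention_days, scans, days_per_month=30.4):
    """Driver-based logging estimate.

    sources: {name: (events_per_day, bytes_per_event, log_class)}
    retention_days: {log_class: days}
    scans: list of (scans_per_day, gb_per_scan) for dashboards/alerts
    Returns (ingested GB/month, retained GB, scanned GB/month).
    """
    gb_day = {n: e * b / 1e9 for n, (e, b, _) in sources.items()}
    ingested = sum(gb_day.values()) * days_per_month
    retained = sum(gb_day[n] * retention_days[c] for n, (_, _, c) in sources.items())
    scanned = sum(s * g for s, g in scans) * days_per_month
    return ingested, retained, scanned

# Illustrative inputs: two sources, two retention classes, one busy dashboard.
ingested, retained, scanned = estimate(
    sources={"access": (500_000_000, 800, "hot"),
             "audit":  (5_000_000, 1_200, "archive")},
    retention_days={"hot": 7, "archive": 365},
    scans=[(1_440, 2.0)],  # 1-minute refresh, assumed 2 GB per scan
)
print(f"ingested {ingested:.0f} GB/mo, retained {retained:.0f} GB, scanned {scanned:.0f} GB/mo")
```

Keeping the three outputs separate mirrors how most providers bill: ingestion, retention storage, and query scan are priced independently, so a single blended number hides which lever to pull.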
Common pitfalls
- Turning on high-volume sources without measuring their payload sizes.
- Using one blended event size that hides a dominant source (ingress/firewall).
- Keeping long retention everywhere by default.
- Dashboards/alerts scanning wide windows repeatedly (query cost explosion).
- Verbose logging during incidents (peak ingestion dominates the month).
How to validate
- Sample payload sizes for top sources and compute bytes/event.
- Validate which sources are enabled and disable/suppress the noisiest first.
- Validate retention per log class and apply tiered storage if needed.
- Audit dashboard refresh intervals and alert windows (reduce repeated wide scans).
FAQ
What usually drives logging cost?
Ingestion volume (GB) is usually the primary driver. Retention becomes material when you keep logs for months, and query scans matter for dashboard-heavy analytics.
How do I estimate quickly?
Estimate GB/day (from provider usage or from events/sec × bytes/event), then model retention days. Add scan/search if you run frequent queries or dashboards.
What is the most common mistake?
Using one blended event size and ignoring a single dominant source (ingress/firewall/audit). Another common miss is dashboard refresh frequency: repeated scans can add up quickly.
How do I validate?
Sample real payload sizes for top sources, validate retention per source, and audit dashboards/alerts for wide windows and frequent refresh.
Last updated: 2026-01-27