Artifact Registry pricing (GCP): storage + downloads + egress (practical estimate)

Registries do two things: store bits and move bits. A reliable estimate separates three drivers: storage (GB-month), downloads (pull GB), and egress. Most surprises come from retention drift and peak pull storms (CI spikes, autoscaling, node churn).

0) Define your scope (what you are counting)

  • Artifact types: containers vs language packages vs build artifacts (sizes and churn differ).
  • Environments: prod/stage/dev repos (retention policies often drift by env).
  • Regions: multi-region clusters and cross-region runners change egress patterns.

1) Storage (GB-month)

Build storage from retention. Start with "how much do we keep" rather than "how much do we push".

Tool: Storage cost (GB-month).

  • Estimate average stored GB across the month, not just today's size.
  • For container images, layer sharing means storage is often less than "tags × size" (validate with actual usage).
  • Keep separate lines for base images vs app images (churn differs).
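The storage line above can be sketched as a small model. Every number here (image counts, sizes, the layer-sharing factor) is an illustrative assumption, not a measured value; replace them with actual registry usage.

```python
# Hypothetical monthly storage estimate for a container registry.
# All inputs are assumptions for illustration, not measured values.

def stored_gb_month(num_images, avg_image_gb, layer_sharing_factor):
    """Average stored GB across the month.

    layer_sharing_factor < 1.0 models deduplication from shared base
    layers: actual storage is often less than tags x size.
    """
    naive_gb = num_images * avg_image_gb
    return naive_gb * layer_sharing_factor

# Keep base images and app images on separate lines (churn differs).
base = stored_gb_month(num_images=20, avg_image_gb=0.5, layer_sharing_factor=0.6)
apps = stored_gb_month(num_images=300, avg_image_gb=1.2, layer_sharing_factor=0.4)

print(f"base images: {base:.1f} GB, app images: {apps:.1f} GB")
print(f"total stored: {base + apps:.1f} GB-month")
```

The sharing factor is the part to validate first: measure actual stored bytes per repo rather than multiplying tags by image size.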

2) Downloads (pull volume)

Downloads scale with CI/CD and autoscaling. A good first model is: deploys/day × pulls per deploy × avg artifact size, plus a peak line for incident windows and node churn.

  • CI peaks: parallel builds and retries can create short pull storms.
  • Node churn: new nodes pull many images at once (worst during incidents).
  • Caching: node-local caching and image pre-pulling can reduce download volume dramatically.
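A minimal sketch of the download model above, with a baseline line and a separate pull-storm line. The deploy rates, cache hit rate, and node counts are assumed values for illustration.

```python
# Sketch of the download model from the text:
#   baseline = deploys/day x pulls per deploy x avg artifact size,
#   plus a peak line for node churn / incident windows.
# All numbers below are illustrative assumptions.

DAYS_PER_MONTH = 30

def baseline_download_gb(deploys_per_day, pulls_per_deploy, avg_artifact_gb,
                         cache_hit_rate=0.0):
    """Monthly baseline pull volume. cache_hit_rate models node-local
    caching and image pre-pulling (1.0 = fully cached, no downloads)."""
    daily_gb = deploys_per_day * pulls_per_deploy * avg_artifact_gb
    return daily_gb * DAYS_PER_MONTH * (1.0 - cache_hit_rate)

def peak_download_gb(nodes_replaced, images_per_node, avg_image_gb):
    """One scale-out or incident window: new nodes pull everything cold."""
    return nodes_replaced * images_per_node * avg_image_gb

baseline = baseline_download_gb(10, 40, 0.8, cache_hit_rate=0.5)
peak = peak_download_gb(nodes_replaced=50, images_per_node=12, avg_image_gb=0.8)
print(f"baseline: {baseline:.0f} GB/month, one pull storm: {peak:.0f} GB")
```

Keeping the peak as its own line matters: a single incident that replaces a node pool can move more bytes than several normal days of deploys.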

3) Egress (split by destination)

Treat transfer as its own line item. Split by destination because pricing and billing boundaries differ.

Tools: Data egress cost, Cross-region transfer.

  • Same-region: often cheapest; keep clusters and registries co-located if possible.
  • Cross-region: multi-region clusters and remote runners can make this meaningful.
  • Internet: external users and third-party runners are the common egress driver.
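The destination split can be expressed as a tiny table of fractions and rates. The $/GB rates below are placeholders, not published prices; plug in current list prices for your regions.

```python
# Split transfer by destination, since billing boundaries differ.
# The mix and $/GB rates are placeholder assumptions, not list prices.

download_gb = 5000.0  # total monthly pull volume (assumed)

# Assumed destination mix (fractions must sum to 1.0).
mix = {"same_region": 0.70, "cross_region": 0.20, "internet": 0.10}

# Placeholder $/GB rates: same-region is often free or cheapest.
rate_per_gb = {"same_region": 0.00, "cross_region": 0.01, "internet": 0.12}

for dest, frac in mix.items():
    gb = download_gb * frac
    cost = gb * rate_per_gb[dest]
    print(f"{dest:>12}: {gb:7.0f} GB -> ${cost:.2f}")
```

Note how the smallest slice (internet) can dominate cost because its rate is highest; that is why the split matters more than the total.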

Worked estimate template (copy/paste)

  • Stored GB-month = average stored GB across month (validate with registry usage)
  • Download GB/month = pulls/month × avg artifact size (baseline + peak)
  • Egress GB/month = subset of downloads billed as outbound (cross-region/internet)
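The three template lines above can be combined into one function. Every input and rate here is an assumption to be replaced with measured usage and current list prices.

```python
# The worked template as one function. Rates are placeholder assumptions,
# not published prices; inputs should come from measured usage.

def registry_cost(stored_gb_month, download_gb_month, egress_fraction,
                  storage_rate=0.10, egress_rate=0.09):
    """stored_gb_month: average stored GB across the month.
    download_gb_month: baseline + peak pull volume.
    egress_fraction: share of downloads billed as outbound
                     (cross-region / internet)."""
    storage_cost = stored_gb_month * storage_rate
    egress_gb = download_gb_month * egress_fraction
    egress_cost = egress_gb * egress_rate
    return {"storage": storage_cost, "egress": egress_cost,
            "total": storage_cost + egress_cost}

est = registry_cost(stored_gb_month=150, download_gb_month=5000,
                    egress_fraction=0.3)
print(est)
```

Run it once per environment (prod/stage/dev), since retention and pull patterns drift by env.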

Common pitfalls

  • Retention drift: old tags and layers kept forever.
  • Assuming tags × size equals storage (layer sharing changes the math).
  • Ignoring CI and scale-out peaks (pull storms are the peak scenario).
  • Cross-region pulls hidden inside "it’s internal" assumptions.
  • Not validating caching behavior (warm vs cold nodes).

How to validate

  • Validate actual stored usage and retention settings per repo.
  • Sample pull volume during peak build/deploy windows and scale-outs.
  • Validate which destinations are billed as outbound (same-region vs cross-region vs internet).

Related guides

Cloud SQL pricing: instance-hours, storage, backups, and network (practical estimate)
A driver-based Cloud SQL estimate: instance-hours (HA + replicas), storage GB-month, backups/retention, and data transfer. Includes a worked template, common pitfalls, and validation steps for peak sizing and growth.
Bigtable cost estimation: nodes, storage growth, and transfer (practical model)
A driver-based Bigtable estimate: provisioned capacity (node-hours), stored GB-month + growth, and network transfer. Includes validation steps for hotspots, compactions, and peak throughput that force over-provisioning.
Cloud Spanner cost estimation: capacity, storage, backups, and multi-region traffic
Estimate Spanner cost using measurable drivers: provisioned capacity (baseline + peak), stored GB-month (data + indexes), backups/retention, and multi-region/network patterns. Includes a worked template, common pitfalls, and validation steps.
GCP Cloud Storage Pricing & Cost Guide
Understand Cloud Storage cost drivers: storage class, operations, retrieval, and egress with estimation steps.
Google Kubernetes Engine (GKE) pricing: nodes, networking, storage, and observability
GKE cost is not just nodes: include node pools, autoscaling, requests/limits (bin packing), load balancing/egress, storage, and logs/metrics. Includes a worked estimate template, pitfalls, and validation steps to keep clusters right-sized.
Pub/Sub pricing: deliveries, retries, fan-out, and payload transfer (practical estimate)
A practical Pub/Sub estimate: publish volume, fan-out (subscriptions), delivery attempts (retries), retention/replay scenarios, and payload transfer. Includes a worked template, pitfalls, and validation steps.

FAQ

What usually drives Artifact Registry cost?
Storage (GB-month) is the baseline, but heavy CI/CD and cluster churn can make downloads and egress meaningful. Peak pull storms during deploys and scale-outs are the common surprise.
How do I estimate quickly?
Estimate stored GB-month from retention, then estimate download GB/month from pull volume and average artifact size. Split same-region vs cross-region/internet destinations for egress.
How do I validate?
Validate actual stored usage, validate retention policies, and sample download volume during peak build/deploy windows. Validate whether layer sharing reduces storage vs 'tags × size'.
What is the most common mistake?
Assuming storage equals tags × image size (layer sharing changes the math) and ignoring node churn and CI spikes that create short, expensive download peaks.

Last updated: 2026-01-27