Private Service Connect costs: endpoint-hours and data processed (practical model)

Private Service Connect-style networking is easiest to estimate when you split it into two line items: time (endpoint-hours) and volume (GB processed). Then validate that your traffic actually uses the private path.

0) What to measure

  • Endpoint-hours: endpoint count per environment and region, multiplied by hours/month.
  • GB processed: baseline + peak traffic through PSC endpoints.
  • Alternative path: what the traffic would cost via NAT/public egress.

1) Endpoint-hours (baseline)

Model: endpoints x hours per month. Environment sprawl (prod + staging + dev, multiple regions) often makes endpoint-hours the dominant baseline.

  • Count endpoints by environment and region; that is where growth hides.
  • Remove unused endpoints; idle endpoints still cost endpoint-hours.
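The endpoint-hours baseline above can be sketched in a few lines. The per-environment counts and the ~730 hours/month figure are illustrative assumptions; substitute your own inventory.

```python
# Sketch: endpoint-hours baseline from per-environment, per-region counts.
# The counts below are placeholder assumptions, not real inventory.
HOURS_PER_MONTH = 730  # ~= 365 * 24 / 12

endpoints = {
    ("prod", "us-central1"): 6,
    ("prod", "europe-west1"): 4,
    ("staging", "us-central1"): 4,
    ("dev", "us-central1"): 5,
}

def endpoint_hours(inventory: dict, hours: int = HOURS_PER_MONTH) -> int:
    """Total endpoint-hours per month across all environments/regions."""
    return sum(count * hours for count in inventory.values())

total = endpoint_hours(endpoints)
print(total)  # 19 endpoints x 730 hours
```

Grouping counts by (environment, region) keys makes sprawl visible: each new key is a new line item, not a silent bump to a single total.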

2) Data processed (GB/month)

Estimate the traffic that uses PSC (artifact registries, storage, databases, APIs). Treat "GB through PSC" as separate from internet egress to avoid double-counting.


  • Deployments can spike traffic (image pulls, artifact downloads). Model a peak month.
  • If services are cross-region, separate the cross-region component explicitly.
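A peak-month GB estimate following the bullets above might look like this. All figures (baseline GB, deploy counts, GB per deploy, cross-region share) are illustrative assumptions.

```python
# Sketch: GB/month through PSC with a modeled peak (deployment-heavy) month.
# Every number here is an assumption; replace with measured traffic.
baseline_gb = 800        # steady-state GB/month through PSC endpoints
deploys_per_month = 40   # assumed deployments in a busy month
gb_per_deploy = 12       # image pulls + artifact downloads per deploy
cross_region_gb = 150    # cross-region component, tracked separately

peak_gb = baseline_gb + deploys_per_month * gb_per_deploy
total_peak_gb = peak_gb + cross_region_gb
print(peak_gb, total_peak_gb)
```

Keeping `cross_region_gb` as its own variable mirrors the advice above: cross-region traffic is priced differently and should never be blended into the single-region total.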

3) Compare against NAT/internet egress (two scenarios)

Build two estimates: a private-path estimate (PSC) and a public-path estimate (NAT + internet egress). PSC often reduces security risk, but it can increase baseline costs via endpoint-hours.
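The two scenarios can be compared side by side. The unit rates below are placeholders, not published prices; plug in the current rates for your region from the provider's pricing pages.

```python
# Sketch: private-path (PSC) vs public-path (NAT + internet egress) scenarios.
# All rates are placeholder assumptions, NOT published prices.
ENDPOINT_HOUR_RATE = 0.01  # $/endpoint-hour (assumed)
PSC_GB_RATE = 0.01         # $/GB processed through PSC (assumed)
NAT_GB_RATE = 0.045        # $/GB processed through NAT (assumed)
EGRESS_GB_RATE = 0.08      # $/GB internet egress (assumed)

def psc_cost(endpoint_count: int, hours: int, gb: float) -> float:
    """Private path: endpoint-hours baseline + GB processed."""
    return endpoint_count * hours * ENDPOINT_HOUR_RATE + gb * PSC_GB_RATE

def public_cost(gb: float) -> float:
    """Public path: NAT processing + internet egress on the same GB."""
    return gb * (NAT_GB_RATE + EGRESS_GB_RATE)

gb_month = 1400
print(psc_cost(19, 730, gb_month), public_cost(gb_month))
```

Note the structural difference: the private path carries a fixed baseline (endpoint-hours) even at zero traffic, while the public path is purely volume-driven, which is why low-traffic environments can flip the comparison.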

4) Reduce endpoint sprawl (the baseline lever)

Endpoint-hours are predictable and easy to accidentally inflate. If you have many environments and regions, add a simple governance rule: every endpoint must have an owner, a purpose, and a review date.

  • Remove unused endpoints after migrations (old and new paths often exist in parallel longer than planned).
  • Consolidate endpoints where possible (avoid one-off endpoints per team/service unless required).
  • Track endpoint inventory in IaC so drift does not create permanent baseline cost.
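The governance rule above (owner, purpose, review date) is easy to enforce with a small inventory check. The records and field names here are hypothetical; adapt them to however your IaC exports endpoint metadata.

```python
# Sketch: flag endpoints violating the governance rule (owner + review date).
# Inventory records and field names are illustrative assumptions.
from datetime import date

inventory = [
    {"name": "psc-artifact-prod", "owner": "platform",
     "purpose": "registry", "review": date(2026, 6, 1)},
    {"name": "psc-legacy-db", "owner": None,
     "purpose": "migration", "review": date(2025, 1, 1)},
]

def needs_attention(ep: dict, today: date = date(2026, 1, 27)) -> bool:
    """True if the endpoint has no owner or its review date has passed."""
    return ep["owner"] is None or ep["review"] < today

flagged = [ep["name"] for ep in inventory if needs_attention(ep)]
print(flagged)  # endpoints with no owner or an overdue review
```

Running a check like this in CI against the IaC inventory turns "remove unused endpoints" from a quarterly cleanup into a continuous guardrail.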

Worked estimate template (copy/paste)

  • Endpoint-hours = endpoints x hours/month (prod + non-prod)
  • GB processed = baseline + peak GB/month through PSC endpoints
  • Comparison = PSC scenario vs NAT/public egress scenario (avoid paying twice)
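The template above, expressed as a copy/paste function. All inputs and rates are assumptions to be replaced with your own counts and the provider's published prices.

```python
# Sketch of the worked estimate template. Inputs and rates are assumptions.
def psc_estimate(endpoint_count: int, hours_per_month: int,
                 baseline_gb: float, peak_extra_gb: float,
                 endpoint_hour_rate: float, gb_rate: float) -> dict:
    """Driver-based PSC estimate: endpoint-hours + GB processed."""
    endpoint_hours = endpoint_count * hours_per_month
    gb = baseline_gb + peak_extra_gb  # model the peak month, not the average
    return {
        "endpoint_hours": endpoint_hours,
        "gb_processed": gb,
        "cost": endpoint_hours * endpoint_hour_rate + gb * gb_rate,
    }

est = psc_estimate(endpoint_count=19, hours_per_month=730,
                   baseline_gb=800, peak_extra_gb=600,
                   endpoint_hour_rate=0.01, gb_rate=0.01)
print(est)
```

Run it once per scenario (prod, non-prod, and the NAT/public alternative) so the comparison is explicit rather than blended into one number.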

Common pitfalls

  • Endpoint sprawl across environments/regions (baseline grows quietly).
  • Traffic not actually using PSC due to DNS/routing (PSC + NAT costs at the same time).
  • Deployment spikes not modeled (container pulls and package downloads).
  • Blending GB processed and internet egress together, hiding optimization levers.

Validation checklist

  • Validate routing/DNS so traffic actually uses PSC (avoid paying for endpoints you do not use).
  • Validate endpoint count across environments (endpoint-hours scale with sprawl).
  • Validate GB/month during deployments and incident windows (peaks are not averages).
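The "peaks are not averages" check can be made concrete with per-day data. The daily figures below are illustrative; pull real per-day GB from flow logs or monitoring metrics.

```python
# Sketch: show how a monthly average hides deploy-window spikes.
# daily_gb is an illustrative series, not real measurements.
daily_gb = [25] * 27 + [110, 140, 95]  # three deploy-window spike days

avg = sum(daily_gb) / len(daily_gb)
peak = max(daily_gb)
print(round(avg, 1), peak)  # sizing on the average under-counts the peak
```

If the peak day is several multiples of the average, the peak month (not the average month) is the right input for the GB-processed line item.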

Related guides

Bigtable cost estimation: nodes, storage growth, and transfer (practical model)
A driver-based Bigtable estimate: provisioned capacity (node-hours), stored GB-month + growth, and network transfer. Includes validation steps for hotspots, compactions, and peak throughput that force over-provisioning.
Cloud SQL pricing: instance-hours, storage, backups, and network (practical estimate)
A driver-based Cloud SQL estimate: instance-hours (HA + replicas), storage GB-month, backups/retention, and data transfer. Includes a worked template, common pitfalls, and validation steps for peak sizing and growth.
GCP load balancing pricing: hours, requests, traffic processed, and egress
A driver-based approach to load balancer cost: hours, request volume, traffic processed, and (separately) outbound egress. Includes a worked estimate template, pitfalls, and a workflow to estimate GB from RPS and response size.
Artifact Registry pricing (GCP): storage + downloads + egress (practical estimate)
A practical Artifact Registry cost model: stored GB-month baseline, download volume from CI/CD and cluster churn, and outbound transfer. Includes a workflow to estimate GB-month from retention and validate layer sharing and peak pull storms.
Dataflow pricing: worker hours, backlog catch-up, and observability (practical model)
Estimate Dataflow cost using measurable drivers: worker compute-hours, backlog catch-up scenarios (replays/backfills), data processed, and logs/metrics. Includes a worked template, pitfalls, and validation steps for autoscaling and replay patterns.
GCP VPC egress costs: estimate outbound transfer by destination (practical workflow)
A practical method to estimate GCP outbound transfer: split by destination (internet, cross-region, inter-zone, CDN origin), convert usage to GB/month, and validate boundaries. Includes a worked template, pitfalls, and optimization levers.

FAQ

What usually drives PSC costs?
Endpoint-hours are the baseline. Data processed becomes meaningful for high-throughput traffic through private endpoints.
How do I estimate quickly?
Multiply endpoint count by hours per month, then estimate GB/month through those endpoints. Compare against the NAT + public egress path so you do not pay twice.
What is the most common mistake?
Paying for both paths: endpoints exist, but traffic still goes out via NAT/public egress due to DNS/routing fallbacks.
How do I validate?
Validate routing/DNS so traffic actually uses PSC, validate endpoint sprawl, and validate GB/month during deploy windows (registry pulls can spike).

Last updated: 2026-01-27