Azure Container Registry Pricing Guide: ACR Cost by Tier and Usage
Start with a calculator if you need a first-pass estimate, then use this guide to validate the assumptions and catch the billing traps.
If you are evaluating Azure Container Registry for real production usage, the cleanest starting point is not the tier table by itself. It is the workload around the registry. ACR cost is usually shaped by four things that belong in the same estimate: stored image data, pull behavior, billed transfer paths, and the operational reasons a team moves from Basic to Standard or Premium. Most cost surprises do not come from the list price of the tier. They come from retention drift, pull storms during CI or node churn, and cross-region patterns that were never modeled as billable transfer.
What the ACR tier changes, and what it does not
Basic, Standard, and Premium are best understood as throughput and capability choices first. The tier changes service limits, concurrency headroom, and advanced features such as geo-replication or private networking. It does not remove the need to understand what your registry is storing, how often clients pull, or which regions those pulls cross.
- Basic fits small teams and low pull pressure, especially when usage stays single-region and operational spikes are rare.
- Standard is usually the production choice when pull throughput or CI reliability matters more than the lowest entry price.
- Premium makes sense when architecture, not just traffic, requires features such as geo-replication or stronger enterprise controls.
Tier selection affects features and throughput ceilings, but monthly spend still tracks stored GB-month, pull behavior, and billed transfer paths.
Build the first estimate from registry behavior
A usable ACR estimate starts by defining the registry as an operational component of the delivery system, not as a flat storage bucket. You need to know what is stored, who consumes it, where those consumers run, and which periods create the worst pull pressure.
- Registry scope: separate prod, staging, and dev behavior if they do not share the same repositories or pull patterns.
- Stored GB-month: measure retained image data over time, not just what was pushed this month.
- Pull volume: separate normal deploy activity from CI peaks, autoscaling events, and incident-driven node churn.
- Regional topology: keep same-region pulls, cross-region pulls, internet pulls, and replication in separate lines.
- Feature trigger: document whether the pressure is really about throughput, multi-region access, or Premium-only capabilities.
Two numbers anchor this first estimate: storage cost (GB-month) for retained data and data egress cost for billed outbound paths.
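The first-pass arithmetic is simple enough to sketch. Every rate, included-storage figure, and tier base below is a placeholder assumption, not Azure's current pricing; substitute real price-sheet numbers and your own registry measurements before trusting the output.

```python
# First-pass ACR monthly estimate. All rates and inputs are placeholder
# assumptions; replace them with current Azure pricing and measured
# registry behavior before using the result.

def acr_monthly_estimate(
    retained_gb: float,      # average retained image data over the month
    included_gb: float,      # storage included with the chosen tier
    storage_rate_gb: float,  # $/GB-month for storage above the included amount
    egress_gb: float,        # billed outbound transfer (cross-region + internet)
    egress_rate_gb: float,   # $/GB on billed transfer paths
    tier_base: float,        # tier's daily rate rolled up to a month
) -> dict:
    overage_gb = max(0.0, retained_gb - included_gb)
    storage_cost = overage_gb * storage_rate_gb
    egress_cost = egress_gb * egress_rate_gb
    return {
        "tier": tier_base,
        "storage_overage": round(storage_cost, 2),
        "egress": round(egress_cost, 2),
        "total": round(tier_base + storage_cost + egress_cost, 2),
    }

# Example with made-up numbers: 500 GB retained, 100 GB included,
# 300 GB billed transfer, illustrative rates.
estimate = acr_monthly_estimate(
    retained_gb=500, included_gb=100, storage_rate_gb=0.10,
    egress_gb=300, egress_rate_gb=0.087, tier_base=150.0,
)
```

Keeping the three components as separate lines in the return value matters more than the exact rates: it forces the estimate to show whether tier, storage, or transfer is the dominant term.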
How storage, pull storms, and transfer shape the bill together
The biggest mistake in ACR cost write-ups is to explain storage, pulls, and egress as separate checkboxes. In practice, they reinforce each other. Retention policy decides your baseline. Pull activity decides whether the registry behaves like a quiet artifact store or a high-churn delivery surface. Regional topology decides whether those pulls stay local or turn into a second transfer bill.
- Storage baseline: retained image data usually grows because old tags, hotfix branches, and multi-architecture artifacts stay longer than teams expect.
- Layer sharing: storage is not a simple tags-times-size formula because common layers are reused across images.
- Pull storms: CI retries, fresh nodes, autoscaling, and incident recovery can create short windows where pull demand is far above the weekly average.
- Warm caches: node-local cache behavior matters, because assuming every deploy hits ACR will overstate some workloads and understate others.
- Transfer paths: same-region pulls, cross-region pulls, internet delivery, and replication should never be priced with one blended assumption.
- Base images vs app images: model them separately when retention and pull frequency are different.
- Compressed pull size: use actual pulled size when possible instead of repository uncompressed size.
- Cross-region clusters: treat them as transfer decisions, not as invisible internal traffic.
- Geo-replication: model replication as its own operational and transfer decision instead of assuming Premium is just a bigger version of Standard.
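The per-path, storm-aware modeling described above can be sketched in a few lines. Every pull count, compressed size, rate, and multiplier here is an illustrative assumption, not a measured or published figure:

```python
# Sketch: monthly pull-driven transfer, split by path instead of one
# blended assumption. Counts, sizes, and rates are illustrative only.

PATHS = {
    # path: (pulls_per_month, avg_compressed_gb_per_pull, $/GB billed)
    "same_region":  (20_000, 0.4, 0.0),    # typically not billed as egress
    "cross_region": (6_000,  0.4, 0.02),
    "internet":     (1_500,  0.4, 0.087),
}

STORM_MULTIPLIER = 3.0   # CI retries / node churn vs the weekly average
STORM_SHARE = 0.10       # fraction of monthly pulls landing in storm windows

def pull_transfer_cost(paths=PATHS) -> float:
    total = 0.0
    for pulls, gb_per_pull, rate in paths.values():
        # Re-weight pulls so storm windows are modeled explicitly rather
        # than hidden inside an average week.
        effective = pulls * (1 - STORM_SHARE) + pulls * STORM_SHARE * STORM_MULTIPLIER
        total += effective * gb_per_pull * rate
    return round(total, 2)
```

Note that the same-region line carries a zero rate on purpose: keeping it in the model, priced separately, is what prevents the single-blended-assumption mistake.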
When a tier upgrade is justified, and when it is hiding another problem
Tier upgrades usually pay off when the registry has become a reliability bottleneck, not when someone sees a bigger storage number and assumes the answer is to move upward. The registry tier is often the symptom, not the root cause. The real question is whether throughput pressure, multi-region access, or feature requirements justify the change.
- Basic to Standard is usually justified when normal CI or deploy traffic is already hitting throughput or concurrency pain.
- Standard to Premium makes sense when multi-region workloads, geo-replication, or enterprise access patterns are operationally necessary.
- Do not upgrade blindly when the real issue is cross-region topology or pull inefficiency. In that case the architecture can keep the bill high even after the tier changes.
| Question | If yes | If no |
|---|---|---|
| Is pull throughput a recurring reliability issue? | Review a tier move sooner | Focus on retention and topology first |
| Do workloads run across multiple regions? | Model geo-replication and cross-region pull paths explicitly | Keep single-region assumptions separate and simpler |
| Does egress dominate the bill more than storage? | Review architecture and pull location before upgrading | Tier choice may matter more than transfer path |
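The table above can be encoded as a small triage helper; the function and its phrasing are hypothetical, shown only to make clear that the three questions are independent checks rather than one combined upgrade decision:

```python
# Hypothetical triage helper encoding the decision table above.
# Each question yields its own action; none of them alone means "upgrade".

def tier_review_actions(pull_throughput_pain: bool,
                        multi_region: bool,
                        egress_dominates: bool) -> list:
    actions = []
    actions.append("review a tier move sooner" if pull_throughput_pain
                   else "focus on retention and topology first")
    actions.append("model geo-replication and cross-region pull paths"
                   if multi_region
                   else "keep single-region assumptions separate")
    actions.append("review architecture and pull location before upgrading"
                   if egress_dominates
                   else "tier choice may matter more than transfer path")
    return actions
```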
What usually goes wrong, and how to validate the estimate
Most weak ACR estimates fail because teams count artifacts but do not measure behavior. They know how many repositories they have, but they do not know how retention, peak pull windows, and billed transfer interact.
- Counting tags instead of actual retained layer storage.
- Treating a quiet week of cached pulls as the normal baseline.
- Ignoring CI retries, node churn, or incident recovery when modeling pull volume.
- Using one transfer assumption across same-region, cross-region, and internet consumers.
- Assuming Premium is the answer when the bill is really being driven by topology.
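The first mistake is worth making concrete: because common layers are shared across images, retained storage is the size of the set of unique layer digests, not the sum of per-image sizes. The digests and sizes below are invented for illustration.

```python
# Sketch: why counting tags overstates storage. Two tags share their
# base and dependency layers, so retained data is the union of unique
# layer digests. Digests and sizes (in GB) are made up.

IMAGES = {
    "app:v1": {("sha256:base", 0.20), ("sha256:deps", 0.15), ("sha256:app1", 0.05)},
    "app:v2": {("sha256:base", 0.20), ("sha256:deps", 0.15), ("sha256:app2", 0.06)},
}

# Naive tags-times-size view: every image counted in full.
naive_gb = sum(size for layers in IMAGES.values() for _, size in layers)

# Dedup view: each unique layer digest counted once.
unique_layers = set().union(*IMAGES.values())
actual_gb = sum(size for _, size in unique_layers)
```

In this toy case the naive sum is 0.81 GB while deduplicated storage is 0.46 GB, which is why tag counts alone make a poor retention baseline.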
Before you trust the model, validate it against real operating signals.
- Validate retention policy, cleanup behavior, and whether old layers remain billable longer than expected.
- Sample pull counts during peak CI windows, scale-outs, and recovery events instead of only during calm periods.
- Validate image size using compressed pull size where possible.
- Validate which transfer paths are actually billed: same-region, cross-region, internet, and replication.
- Validate whether the upgrade trigger is reliability, feature need, or a hidden architecture issue.
A good sign-off rule is simple: every major number in the estimate should map back to registry stats, workload behavior, or billing data. If a large number only exists because it felt directionally right, the estimate is still too weak for budget or architecture decisions.
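The sign-off rule can even be mechanized: attach a source to every major number and flag anything that has none. Field names, values, and source labels here are illustrative, not a prescribed schema.

```python
# Sketch of the sign-off rule: every major number in the estimate must
# map back to registry stats, workload behavior, or billing data.
# All entries below are illustrative.

ESTIMATE = {
    "retained_gb":   {"value": 500,  "source": "registry storage usage report"},
    "peak_pulls":    {"value": 9000, "source": "CI logs, scale-out window"},
    "egress_gb":     {"value": 300,  "source": "billing export, transfer lines"},
    "growth_factor": {"value": 1.5,  "source": None},  # gut feel: fails sign-off
}

def unsupported_numbers(estimate=ESTIMATE) -> list:
    """Return the names of numbers that have no backing data source."""
    return [name for name, item in estimate.items() if not item["source"]]
```

Anything this check returns is, by the rule above, too weak to carry a budget or architecture decision.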
Next actions if you are evaluating ACR spend
If the registry estimate is being reviewed alongside cluster cost, pair this page with the AKS pricing guide so pull behavior and node churn are not analyzed in isolation.