Aurora pricing (what to include): compute, storage, I/O, and backups

Reviewed by CloudCostKit Editorial Team. Last updated: 2026-01-27. Editorial policy and methodology.

This is the Aurora bill anatomy page: use it when the main question is which Aurora line items belong in the bill and how compute, storage, backups, and workload-driven usage stack together.

If the broader database budget shape is still unclear, start with the database parent page; come back here once that higher-level map is in place.

Aurora estimates go wrong when teams model only compute. A realistic model includes at least four lines: compute, storage, backups/retention, and a “high-usage” scenario for I/O-heavy or bursty workloads. This page is a practical checklist you can use for a budget, a migration plan, or a “why did our bill spike?” review.

1) Choose the compute model (provisioned vs serverless)

  • Provisioned: estimate monthly instance-hours (including writer + readers).
  • Serverless: estimate ACU-hours as a baseline plus a peak scenario.

If you use serverless, treat capacity as time-series usage, not a single average. See Aurora Serverless v2 pricing.
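As a sketch of the two compute models above, the comparison can be expressed as two small formulas. The hourly rates here are illustrative placeholders, not real Aurora prices; substitute your region's published rates.

```python
HOURS_PER_MONTH = 730  # common budgeting convention (365 * 24 / 12)

def provisioned_compute(writer_count, reader_count, hourly_rate):
    """Monthly compute for a provisioned cluster: writer + readers, all hours."""
    return (writer_count + reader_count) * HOURS_PER_MONTH * hourly_rate

def serverless_compute(baseline_acus, peak_acus, peak_hours, acu_hour_rate):
    """Monthly ACU-hours as a baseline plus a peak scenario, not one average."""
    baseline_hours = HOURS_PER_MONTH - peak_hours
    acu_hours = baseline_acus * baseline_hours + peak_acus * peak_hours
    return acu_hours * acu_hour_rate

# Hypothetical examples: 1 writer + 2 readers at $0.29/hr (placeholder rate),
# vs. 4 ACU baseline with a 16 ACU peak for 60 hours at $0.12/ACU-hour.
provisioned = provisioned_compute(1, 2, 0.29)
serverless = serverless_compute(4, 16, 60, 0.12)
```

Note how the serverless estimate keeps the peak window as a separate term; collapsing it into a single average is exactly the mistake this section warns against.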

2) Add storage GB-month (and model growth)

Storage is usually a steady cost that grows over time. For a first pass, estimate an average GB-month for the month you care about. For planning, build a simple forecast:

  • Current size (GB)
  • Growth per day/week (GB)
  • Retention window (how far back you keep data, if applicable)

Tool: DB storage growth calculator
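The three forecast inputs above reduce to a simple average-over-the-month calculation. A minimal sketch, assuming linear growth (the per-GB-month rate is a placeholder, not a real Aurora price):

```python
def storage_gb_month(current_gb, growth_gb_per_day, days=30):
    """Average GB held over the month, assuming linear growth from today's size."""
    end_gb = current_gb + growth_gb_per_day * days
    return (current_gb + end_gb) / 2  # average of start-of-month and end-of-month

# Example: 500 GB today, growing 2 GB/day -> averages 530 GB over the month.
avg_gb = storage_gb_month(500, 2)
cost = avg_gb * 0.10  # placeholder per-GB-month rate; use your region's rate
```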

3) Model backups and retention separately

Backups can become a “quiet baseline” cost, especially with long retention and high churn (frequent updates and large write volumes). Treat backup storage as its own GB-month line item and validate how retention is configured.

Next: estimate backup GB-month, then work through the backups and snapshots checklist.
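One way to sketch the retention-plus-churn interaction described above: model backup storage as one full copy plus accumulated daily change over the retention window. This is a simplification for budgeting, not Aurora's exact billing formula, and the free-allowance behavior varies by configuration; validate against your actual usage.

```python
def backup_gb_month(db_size_gb, daily_churn_gb, retention_days, free_gb=0):
    """Rough backup GB-month: a full copy plus churn accumulated over retention.

    `free_gb` models any backup storage included at no charge (an assumption
    here -- check how your retention and snapshot settings are actually billed).
    """
    stored = db_size_gb + daily_churn_gb * retention_days
    return max(stored - free_gb, 0)

# Example: 500 GB database, 5 GB/day of churn, 14-day retention,
# assuming one DB-sized copy is included free -> 70 billable GB-month.
billable = backup_gb_month(500, 5, 14, free_gb=500)
```

The point of the sketch: doubling retention roughly doubles the churn term, which is why long retention plus high churn becomes the "quiet baseline" the section warns about.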

4) Add a high-usage scenario for workload-driven costs

Even when you can’t price every internal detail up front, a second scenario prevents under-budgeting. Use it when you expect one or more of the patterns below.

  • Batch jobs that rewrite large tables or run heavy analytical queries
  • Hot partitions or skewed keys that cause bursts
  • Retry storms from upstream timeouts (incidents multiply usage)
  • Large backfills, migrations, or index rebuilds
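The burst patterns above can be folded into a budget as a second scenario: scale the workload-driven portion of the bill by a multiplier for the hours a burst is active. A minimal sketch (the multiplier and window are inputs you estimate, not measured values):

```python
def high_usage_scenario(baseline_cost, multiplier, hours_affected, total_hours=730):
    """Blend normal hours with burst hours, e.g. a retry storm that
    triples query volume for part of the month."""
    burst_fraction = hours_affected / total_hours
    return baseline_cost * ((1 - burst_fraction) + burst_fraction * multiplier)

# Example: $1000 baseline, 3x usage during 73 hours (10% of the month)
# -> the blended scenario comes out 20% above baseline.
scenario = high_usage_scenario(1000, 3, 73)
```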

5) Don’t forget the “around the database” line items

  • Read scaling: reader instances (provisioned) or reader capacity (serverless) add compute cost.
  • Data transfer: cross-AZ traffic and internet egress can show up when apps aren’t co-located.
  • Logging/metrics: verbose SQL/audit logs and high-cardinality metrics can be a separate bill.

Helpful context: network transfer costs and reduce logging costs.
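For the data-transfer line item above, a quick sketch helps size cross-AZ traffic before digging into exact rates. The per-GB rate here is a commonly cited figure used purely as a placeholder; confirm against your region's pricing, and note that cross-AZ traffic is typically billed in both directions.

```python
def cross_az_transfer_cost(gb_per_month, rate_per_gb_each_way=0.01):
    """Cross-AZ transfer estimate, billed in both directions (hence * 2).

    The default rate is a placeholder assumption, not a quoted price.
    """
    return gb_per_month * rate_per_gb_each_way * 2

# Example: an app tier in another AZ pushing/pulling 1 TB a month.
transfer = cross_az_transfer_cost(1000)
```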

Worked estimate template (copy/paste)

  • Compute = (writer instances × hours) + (reader instances × hours) or ACU-hours scenarios
  • Storage = average GB-month (include forecast if storage is growing)
  • Backups = backup GB-month (retention + churn)
  • High-usage scenario = peak compute/capacity hours + any known usage multipliers
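The four-line template above can be collapsed into one function. All rates are placeholders you supply; the high-usage line is expressed as a fractional uplift on the baseline, matching the scenario approach from section 4.

```python
def aurora_monthly_estimate(compute, storage_gb_month, backup_gb_month,
                            storage_rate, backup_rate, high_usage_uplift=0.0):
    """Sum the template's four lines.

    `compute` is the already-priced compute estimate (instance-hours or
    ACU-hours times your rate); `high_usage_uplift` is a fractional cushion
    for the burst scenario (e.g. 0.2 = +20%). Rates are your inputs,
    not real Aurora prices.
    """
    baseline = (compute
                + storage_gb_month * storage_rate
                + backup_gb_month * backup_rate)
    return baseline * (1 + high_usage_uplift)

# Example: $635 compute, 530 GB storage at $0.10, 70 GB backups at $0.02,
# plus a 20% high-usage cushion.
estimate = aurora_monthly_estimate(635, 530, 70, 0.10, 0.02,
                                   high_usage_uplift=0.2)
```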

If you want a single “close enough” number, keep compute as the variable and treat storage/backups as stable baselines.

Common pitfalls

  • Modeling only “DB size” and ignoring retention (backup GB-month).
  • Using one average for serverless and missing frequent short peaks.
  • Assuming storage won’t grow (or forgetting growth from indexes and history tables).
  • Ignoring incident windows where retries multiply query volume.
  • Missing the “surrounding” bills: load balancers, NAT/egress, logs, and metrics.

How to validate the estimate

  • In Cost Explorer / CUR, group by usage type and confirm your compute, storage, and backup lines exist as separate drivers.
  • Compare your “high-usage” scenario to real peak windows (deploys, batch jobs, incidents).
  • If you’re migrating, run a short parallel period and compare the shape of usage (steady baseline vs peaks), not only a single month total.

FAQ

Is Aurora always cheaper than RDS?
Not always. Aurora can be a great fit, but costs depend on workload shape (I/O patterns, read scaling, storage growth), region pricing, and whether you use provisioned vs serverless.
What do I need for a first-pass estimate?
Compute hours (instances or ACUs), average storage (GB-month), and backup retention (GB-month). Then add a high-usage scenario for I/O or bursty traffic so your budget survives reality.
What typically causes surprise bills?
Workload-driven usage that isn’t captured by DB size alone: heavy I/O, rapid storage growth, long retention, and the surrounding infrastructure (load balancers, logs, transfer).
