Aurora Serverless v2 pricing: how to estimate ACUs and avoid surprise bills

Reviewed by CloudCostKit Editorial Team. Last updated: 2026-01-27. Editorial policy and methodology.

Start with a calculator if you need a first-pass estimate, then use this guide to validate the assumptions and catch the billing traps.


This is the Aurora Serverless v2 capacity-shape page. Stay here when the question is how ACU-hours accumulate across baseline and peak windows, rather than how the whole Aurora bill is structured.

If the broader database budget shape is still unclear, start from the database parent page; if the overall Aurora bill structure is unclear, start from the Aurora bill anatomy page.

Aurora Serverless v2 is priced primarily by ACU-hours, so estimate it like a time-series bill: a baseline most hours, plus peaks during batch jobs, traffic spikes, or incidents. If you budget from a single average ACU, you will usually underestimate.

Step 1: pick baseline and peak scenarios (don’t start with one number)

Write down two scenarios you can defend:

  • Baseline: typical ACUs during steady operation (most hours).
  • Peak: ACUs during known heavy windows (backfills, reports, spikes, deploy storms).

If you know your configured min/max, baseline is often close to min, and peak should be bounded by max (or your expected upper range).

Step 2: estimate ACU-hours

An ACU-hour is just capacity × time. For a planning model:

  • Baseline ACU-hours = baseline ACU × baseline hours/month
  • Peak ACU-hours = peak ACU × peak hours/month
  • Total = baseline ACU-hours + peak ACU-hours

Worked example: baseline 2 ACU always-on → 2 × 730 = 1,460 ACU-hours. Peak 8 ACU for 2 hours/day (≈ 2 × 30 = 60 hours/month) → 8 × 60 = 480 ACU-hours. Total ≈ 1,940 ACU-hours/month.
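
The arithmetic above can be sketched in a few lines of Python (the function name and the 730-hour planning month are conventions of this guide, not anything from an AWS API):

```python
HOURS_PER_MONTH = 730  # common planning convention for an always-on month

def acu_hours_month(baseline_acu, peak_acu, peak_hours):
    """Two-scenario ACU-hour estimate: baseline runs all month, peaks add on top.

    Counting full peak ACUs on top of the baseline slightly overcounts
    (the baseline is also billed during peak windows), so this model
    errs on the conservative side.
    """
    return baseline_acu * HOURS_PER_MONTH + peak_acu * peak_hours

# Worked example from the text: 2 ACU baseline, 8 ACU for ~60 peak hours/month
total = acu_hours_month(2, 8, 60)  # 1,460 + 480 = 1,940 ACU-hours
```

Multiply the total by your region's per-ACU-hour rate to get the compute line of the bill.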

Step 3: add storage and backup retention as stable baselines

  • Storage: average GB-month (and forecast if the database is growing).
  • Backups: backup GB-month driven by retention and churn.

Tools and follow-ups: forecast storage growth and estimate backup GB-month.
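
As a sketch, the two storage baselines fold into a monthly figure like this (the per-GB rates below are illustrative placeholders, not current prices; substitute the price list for your region):

```python
def storage_monthly_cost(storage_gb_month, backup_gb_month,
                         storage_rate=0.10, backup_rate=0.02):
    """Storage + backup retention modeled as stable monthly baselines.

    Rates are placeholder assumptions for illustration only; look up the
    current per-GB-month prices for your region before budgeting.
    """
    return storage_gb_month * storage_rate + backup_gb_month * backup_rate

# e.g. 500 GB of data plus 1,200 GB-month of retained backups
cost = storage_monthly_cost(500, 1200)
```

Note that backup GB-month grows with both retention length and data churn, so it deserves its own forecast if either is changing.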

Step 4: stress-test your budget (the “what if it happens weekly?” test)

The best guardrail is to ask: what if the peak window happens every day or every week? If your estimate can’t absorb that, routine operational events will surprise you. Typical recurring drivers to test:

  • Backfills and migrations (data reshaping or index builds)
  • Report generation and analytics queries
  • Upstream incidents that cause retries and timeouts
  • Seasonal traffic spikes and product launches
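
The “what if it happens weekly?” test can be made concrete with a small sketch (the cadences and example numbers are illustrative):

```python
HOURS_PER_MONTH = 730  # planning convention for an always-on month

def stress_test(baseline_acu, peak_acu, peak_window_hours):
    """Monthly ACU-hours if one peak window recurs monthly, weekly, or daily."""
    baseline = baseline_acu * HOURS_PER_MONTH
    cadences = {"monthly": 1, "weekly": 4, "daily": 30}
    return {name: baseline + peak_acu * peak_window_hours * n
            for name, n in cadences.items()}

# e.g. a 3-hour, 16-ACU backfill on a 2-ACU baseline
# stress_test(2, 16, 3) -> {'monthly': 1508, 'weekly': 1652, 'daily': 2900}
```

If the “daily” figure would blow the budget, either the min/max configuration or the budget needs to change before the event becomes routine.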

Common pitfalls

  • Assuming serverless means “auto-pauses to zero”: v2 behaves differently from older serverless models, so verify whether your configuration can actually scale to zero before budgeting for it.
  • Using a short load test and extrapolating to a month without modeling peaks.
  • Ignoring retention: backups can become a steady cost even if compute is perfect.
  • Budgeting from max capacity only (overestimates) or average only (underestimates) instead of scenarios.
  • Missing “around the DB” costs: logs, metrics, and transfer when apps aren’t co-located.

How to validate after you go live

  • In CUR / Cost Explorer, identify the ACU-hours usage line and compare baseline vs peak months.
  • Compare the time shape in metrics to billing: do peaks happen daily, weekly, or only during incidents?
  • After tuning min/max, re-check that latency and failover behavior still meet requirements.
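
The first checklist item amounts to a drift check between modeled and billed ACU-hours. A minimal sketch (the 15% tolerance is an arbitrary example, not a recommendation):

```python
def billing_drift(modeled_acu_hours, billed_acu_hours, tolerance=0.15):
    """Return (within_tolerance, relative_drift) for one month's ACU-hours."""
    drift = (billed_acu_hours - modeled_acu_hours) / modeled_acu_hours
    return abs(drift) <= tolerance, drift

# e.g. modeled 1,940 ACU-hours but billed 2,330 (~20% over):
# time to re-check how often the peak window actually fires
ok, drift = billing_drift(1940, 2330)
```

A persistent positive drift usually means the peak scenario fires more often than assumed; revisit Step 4 rather than just raising the budget.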


FAQ

What is the #1 mistake in Serverless v2 estimates?
Using a single average ACU and ignoring peak periods. Costs track ACU-hours over time, so frequent short peaks can add up.
Should I model min and max capacity?
Yes. Min capacity acts like a baseline you pay most of the time. Max capacity helps you bound the peak scenario and understand worst-case spend.
What else do I need besides ACUs?
Storage GB-month and backup GB-month (retention + churn). These can become long-term baselines even when compute is right-sized.
