RDS vs Aurora cost: what to compare (compute, storage, I/O, and retention)

RDS vs Aurora comparisons go wrong when teams compare list prices without normalizing the workload. Use this checklist to compare apples to apples: compute hours, average storage, retention, and at least one peak scenario.

1) Normalize compute usage (baseline + peak)

  • Instances × hours/month (provisioned)
  • Average capacity and peak capacity (serverless)
  • Read replicas and HA standbys (model as additional instances/capacity)

If you can’t explain how capacity changes during incidents or batch jobs, add a “peak month” scenario. Peaks often decide the real-world winner.
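The normalization above can be sketched as a small model. The hourly rate and counts below are hypothetical placeholders, not real AWS prices; the 730 hours/month figure matches the common billing convention:

```python
HOURS_PER_MONTH = 730  # common monthly billing convention (24 * 365 / 12)

def monthly_compute_cost(hourly_rate, instance_count, replica_count=0):
    """Writer instances plus replicas/HA standbys, billed as extra instances."""
    total_instances = instance_count + replica_count
    return hourly_rate * total_instances * HOURS_PER_MONTH

# Hypothetical $0.50/hr instance class: one writer + one replica
baseline = monthly_compute_cost(0.50, instance_count=1, replica_count=1)

# Peak month: an extra replica added for batch jobs or incident headroom
peak = monthly_compute_cost(0.50, instance_count=1, replica_count=2)

print(f"baseline ${baseline:.2f}/mo, peak ${peak:.2f}/mo")
```

Running both scenarios side by side makes the peak-month delta explicit instead of leaving it as a footnote.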

2) Normalize storage and growth

Storage growth is often the long-term cost driver. Forecast several months ahead rather than pricing only today's size.

Tool: DB storage growth.

  • Use the same growth rate assumptions for both options.
  • Include “cleanup” scenarios (data archiving, retention trimming) if you plan them.
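A minimal growth forecast that applies the same assumptions to both options, including a planned-cleanup term. The starting size, growth rate, and archiving volume are illustrative assumptions:

```python
def forecast_storage_gb(current_gb, monthly_growth_rate, months, cleanup_gb_per_month=0.0):
    """Compound monthly growth minus planned archiving/retention trimming."""
    size = current_gb
    sizes = []
    for _ in range(months):
        size = size * (1 + monthly_growth_rate) - cleanup_gb_per_month
        sizes.append(round(size, 1))
    return sizes

# Hypothetical: 500 GB today, 5% monthly growth, archiving 10 GB/month
print(forecast_storage_gb(500, 0.05, months=6, cleanup_gb_per_month=10))
```

Feed the same `sizes` series into both the RDS and Aurora storage line items so the comparison differs only in price, not in assumptions.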

3) Normalize backups/retention (the common surprise)

Treat backup storage as a separate line item and estimate it as churn × retention. Confirm whether non-prod environments are retaining history longer than needed.

Guides: backups and snapshots; snapshot retention policy.
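The churn × retention rule of thumb can be written out directly. The sizes and churn fraction below are hypothetical; the model assumes one full-size baseline plus daily incremental churn retained for the whole window:

```python
def backup_storage_gb(db_size_gb, daily_churn_fraction, retention_days):
    """Full baseline plus daily incremental churn kept for the retention window."""
    incremental = db_size_gb * daily_churn_fraction * retention_days
    return db_size_gb + incremental

# Hypothetical: 500 GB prod DB, 2% daily churn, 35-day retention
prod = backup_storage_gb(500, 0.02, retention_days=35)

# Non-prod with the retention it actually needs (7 days, not 35)
nonprod = backup_storage_gb(200, 0.02, retention_days=7)

print(f"prod backups ~{prod:.0f} GB, non-prod backups ~{nonprod:.0f} GB")
```

Comparing the two calls shows why trimming non-prod retention is usually the cheapest fix in this category.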

4) Map workload shape to pricing levers (the “why” behind the numbers)

  • If your pricing model charges per I/O request (common in some Aurora configurations), estimate I/O sensitivity and validate from metrics where possible.
  • If your workload is read-heavy, include replicas/reader capacity explicitly (don’t assume read scaling is free).
  • If you need higher availability, model the extra capacity and any cross-AZ behavior consistently in both options.

5) Include workload-driven "high usage" scenarios

  • Batch jobs and backfills
  • Retry storms during incidents
  • Large periodic index rebuilds / vacuum / maintenance windows
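One way to price the scenarios above is to stress a normalized baseline month with per-event multipliers. Every multiplier below is a hypothetical assumption to be replaced with values from your own metrics:

```python
# Hypothetical normalized monthly bill from the earlier steps
baseline_cost = 1_000.0

peak_multipliers = {
    "batch_backfill": 1.3,  # sustained extra write I/O and capacity
    "retry_storm": 1.5,     # incident-driven read/connection spike
    "index_rebuild": 1.1,   # maintenance-window overhead
}

# Conservative peak month: price the single worst event...
peak_month = baseline_cost * max(peak_multipliers.values())

# ...and a pessimistic month where every event lands in the same bill.
worst_case = baseline_cost
for m in peak_multipliers.values():
    worst_case *= m

print(f"peak ${peak_month:.2f}, worst case ${worst_case:.2f}")
```

If the two options rank differently between `peak_month` and `worst_case`, that is a signal to validate the multipliers against real incident history before deciding.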

Decision shortcuts (when one option is usually safer)

  • If the workload is predictable and you can right-size + control retention, RDS often yields a more stable budget.
  • If operational overhead or scaling complexity is the pain point, Aurora can win even if list price is higher (time, reliability, and incident cost are real).

Practical next step

Use the AWS RDS Cost Calculator for a baseline compute + storage + backups model, then layer Aurora-specific scenarios on top.

Validation checklist (so the comparison isn’t theoretical)

  • Run both a baseline month and a peak month scenario (batch + incident windows).
  • Validate backup storage and growth assumptions from billing/telemetry on the current system.
  • Document what changes your answer (I/O-heavy workload, retention policy, peak scaling) for future reviews.
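A simple drift check makes the validation step concrete: compare each estimated line item against billing/telemetry and flag assumptions that are off by more than a chosen tolerance. The 15% threshold is an arbitrary example:

```python
def within_tolerance(estimated, measured, tolerance=0.15):
    """True if the estimate is within `tolerance` of the measured/billed value."""
    return abs(estimated - measured) / measured <= tolerance

# Hypothetical: estimated 850 GB of backup storage, bill shows 1,100 GB.
# The drift exceeds 15%, so the churn/retention assumptions need revisiting.
print(within_tolerance(850, 1100))
```

Any line item that fails this check is exactly the kind of assumption worth documenting as "what changes the answer."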

FAQ

What is the fastest way to compare RDS vs Aurora cost?
Normalize the same workload: hours/month, average storage, retention, and a high-usage (peak) scenario. Then compare both options using the same traffic/data assumptions.
When does Aurora usually win?
When you benefit from Aurora's scaling and architecture, or when the operational advantages reduce time/cost elsewhere. Cost depends on usage pattern and region pricing.
When does RDS usually win?
When the workload is simple and predictable and you do not need Aurora-specific capabilities. If you can right-size instances and control retention, RDS estimates can be very stable.
What should I compare besides list price?
Compare the whole operating model: scaling behavior, storage growth, backup/retention, and how incidents and peaks change the bill. Normalize to the same workload and validate with measured usage.

Last updated: 2026-01-27