RDS vs Aurora cost: what to compare (compute, storage, I/O, and retention)
RDS vs Aurora comparisons go wrong when teams look only at list prices without normalizing the workload. Use this checklist to compare apples to apples: compute hours, average storage, retention, and at least one peak scenario.
1) Normalize compute usage (baseline + peak)
- Instances x hours/month (provisioned)
- Average capacity and peak capacity (serverless)
- Read replicas and HA (model as additional instances/capacity)
If you can’t explain how capacity changes during incidents or batch jobs, add a “peak month” scenario. Peaks often decide the real-world winner.
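As a minimal sketch, the baseline-plus-peak arithmetic can be modeled like this (the hourly rate, replica count, and two-week batch window are illustrative assumptions, not real AWS prices):

```python
# Sketch: normalize compute for a baseline month vs. a peak month.
# Hourly rate and counts are illustrative placeholders -- substitute
# your own instance quotes (or serverless capacity-unit pricing).

HOURS_PER_MONTH = 730

def compute_cost(instances: int, hourly_rate: float,
                 replicas: int = 0, hours: float = HOURS_PER_MONTH) -> float:
    """Writer(s) + read replicas, all billed per instance-hour."""
    return (instances + replicas) * hourly_rate * hours

# Baseline month: one writer + one replica, full month.
baseline = compute_cost(instances=1, hourly_rate=0.50, replicas=1)

# Peak month: same baseline plus one extra replica for a 2-week
# (336-hour) batch window.
peak = baseline + compute_cost(instances=1, hourly_rate=0.50, hours=336)

print(f"baseline ${baseline:,.0f}/mo, peak ${peak:,.0f}/mo")
```

The point is not the specific numbers but that both options get the same baseline and the same peak scenario applied to them.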
2) Normalize storage and growth
Storage growth is often the long-term cost driver. Forecast multiple months, not only today's size.
Tool: DB storage growth.
- Use the same growth rate assumptions for both options.
- Include “cleanup” scenarios (data archiving, retention trimming) if you plan them.
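A simple compound-growth forecast keeps both options on identical assumptions; the starting size, growth rate, and $/GB-month price below are placeholders:

```python
# Sketch: forecast storage over 12 months with the SAME growth rate
# for both options; all numbers are illustrative assumptions.

def forecast_gb(start_gb: float, monthly_growth: float, months: int) -> list[float]:
    """Compound monthly growth; replace the rate with your measured one."""
    return [start_gb * (1 + monthly_growth) ** m for m in range(months + 1)]

sizes = forecast_gb(start_gb=500, monthly_growth=0.05, months=12)
avg_gb = sum(sizes) / len(sizes)            # average billed GB over the year
storage_cost = avg_gb * 0.10                # $/GB-month placeholder price
```

If you plan archiving or retention trimming, model it as a one-time drop in `start_gb` partway through the forecast rather than ignoring it.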
3) Normalize backups/retention (the common surprise)
Treat backup storage as a separate line item and size it as churn × retention (roughly: one full snapshot plus daily changed GB × retention days). Confirm whether non-prod environments are retaining history longer than needed.
Guides: backups and snapshots; snapshot retention policy.
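The churn × retention estimate is simple enough to sanity-check by hand; the sizes and churn below are made-up numbers, and real churn should come from your billing or telemetry:

```python
# Sketch: backup storage ~= full snapshot + daily churn * retention days.
# All inputs are illustrative; pull actual churn from billing/telemetry.

def backup_gb(db_size_gb: float, daily_churn_gb: float, retention_days: int) -> float:
    return db_size_gb + daily_churn_gb * retention_days

prod = backup_gb(db_size_gb=500, daily_churn_gb=20, retention_days=35)
nonprod = backup_gb(db_size_gb=500, daily_churn_gb=20, retention_days=7)
# Trimming non-prod retention from 35 to 7 days saves 560 GB in this example.
```

This is where the "common surprise" usually hides: a high-churn database with long retention can bill more for backups than for primary storage.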
4) Map workload shape to pricing levers (the “why” behind the numbers)
- If your pricing model charges per I/O request (common in some Aurora configurations), estimate I/O sensitivity and validate from metrics where possible.
- If your workload is read-heavy, include replicas/reader capacity explicitly (don’t assume read scaling is free).
- If you need higher availability, model the extra capacity and any cross-AZ behavior consistently in both options.
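For per-request I/O pricing, a back-of-the-envelope sensitivity check looks like this (the per-million rate and request volumes are assumptions; validate the request rate against real read/write metrics):

```python
# Sketch: monthly I/O cost when pricing is per million requests.
# The price and request rates are placeholders -- validate from metrics
# (e.g. average read/write IOPS over a full month).

def io_cost(requests_per_sec: float, price_per_million: float = 0.20) -> float:
    monthly_requests = requests_per_sec * 730 * 3600   # 730 h/month
    return monthly_requests / 1_000_000 * price_per_million

quiet = io_cost(500)      # steady-state month
busy = io_cost(5_000)     # batch/backfill-heavy month
```

A 10× swing in request rate is a 10× swing in this line item, which is why I/O-sensitive workloads need the peak-month scenario, not just the average.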
5) Include workload-driven “high usage” scenarios
- Batch jobs and backfills
- Retry storms during incidents
- Large periodic index rebuilds / vacuum / maintenance windows
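One way to fold these scenarios into the comparison is a multiplier table applied to the baseline month; the multipliers here are invented examples, and yours should come from incident history and batch schedules:

```python
# Sketch: apply high-usage scenario multipliers to a baseline month.
# Multipliers are assumptions -- derive real ones from incident history,
# batch schedules, and maintenance-window metrics.

SCENARIOS = {
    "baseline": 1.0,
    "batch_backfill": 1.4,   # extra I/O plus a temporary replica
    "retry_storm": 1.8,      # incident-driven read/write amplification
}

def scenario_costs(monthly_baseline: float) -> dict[str, float]:
    return {name: monthly_baseline * mult for name, mult in SCENARIOS.items()}

costs = scenario_costs(1000.0)
```

Run the same table against both the RDS and Aurora baselines so the scenarios, not the pricing model, are the only variable.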
Decision shortcuts (when one option is usually safer)
- If the workload is predictable and you can right-size + control retention, RDS often yields a more stable budget.
- If operational overhead or scaling complexity is the pain point, Aurora can win even if list price is higher (time, reliability, and incident cost are real).
Practical next step
Use AWS RDS Cost Calculator for a baseline compute + storage + backups model, then layer Aurora-specific scenarios on top.
Validation checklist (so the comparison isn’t theoretical)
- Run both a baseline month and a peak month scenario (batch + incident windows).
- Validate backup storage and growth assumptions from billing/telemetry on the current system.
- Document what changes your answer (I/O-heavy workload, retention policy, peak scaling) for future reviews.