RDS vs Aurora cost: what to compare (compute, storage, I/O, and retention)
RDS vs Aurora comparisons go wrong when teams compare list prices without normalizing the workload. Use this checklist for an apples-to-apples comparison: compute hours, average storage, retention, and at least one peak scenario. This is the engine-choice comparison page.
Stay here when the main question is which engine model fits the same workload better. Go back to the database parent page if the broader database budget shape is still unclear.
Go back to the Aurora bill anatomy page if the broader Aurora bill structure is still unclear, then return here once the comparison is truly about engine choice under the same workload assumptions.
Use this page when engine choice is the decision
- Use this guide when you are comparing normalized workload economics across RDS and Aurora options.
- Stay here if the issue is engine choice under the same assumptions, not basic database budget ownership.
- Move back to the parent page when compute, retention, backups, and network still need to be framed as one broader system.
1) Normalize compute usage (baseline + peak)
- Instances x hours/month (provisioned)
- Average capacity and peak capacity (serverless)
- Read replicas and HA (model as additional instances/capacity)
If you can’t explain how capacity changes during incidents or batch jobs, add a “peak month” scenario. Peaks often decide the real-world winner.
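To make the compute normalization concrete, here is a minimal Python sketch that compares a provisioned option (writer plus replica) against a serverless-style option billed per capacity unit. All hourly rates and capacity figures are placeholder assumptions, not actual AWS prices; substitute your region's rates and your own metrics.

```python
# Minimal sketch: normalize monthly compute for a provisioned option vs a
# serverless-style option. Rates and capacity numbers are assumptions;
# substitute your region's actual pricing.

HOURS_PER_MONTH = 730

def provisioned_compute_cost(hourly_rate, instance_count, replica_count=0):
    """Writer + replicas/HA standbys, each billed for every hour in the month."""
    return (instance_count + replica_count) * HOURS_PER_MONTH * hourly_rate

def serverless_compute_cost(avg_capacity_units, unit_hourly_rate):
    """Average capacity units x hours/month x per-unit hourly rate."""
    return avg_capacity_units * HOURS_PER_MONTH * unit_hourly_rate

# Baseline month (assumed figures)
baseline_rds = provisioned_compute_cost(hourly_rate=0.34, instance_count=1, replica_count=1)
baseline_aurora = serverless_compute_cost(avg_capacity_units=4, unit_hourly_rate=0.12)

# Peak month: batch jobs and incident load push the average capacity up
peak_aurora = serverless_compute_cost(avg_capacity_units=9, unit_hourly_rate=0.12)

print(f"RDS baseline compute:      ${baseline_rds:,.0f}/mo")
print(f"Aurora baseline compute:   ${baseline_aurora:,.0f}/mo")
print(f"Aurora peak-month compute: ${peak_aurora:,.0f}/mo")
```

Running the same function twice, once with baseline inputs and once with peak-month inputs, is what turns "peaks often decide the winner" into a number you can defend.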
2) Normalize storage and growth
Storage growth is often the long-term cost driver. Forecast several months out, not just today's size; a growth-forecast sketch follows the list below.
Tool: DB storage growth.
- Use the same growth rate assumptions for both options.
- Include “cleanup” scenarios (data archiving, retention trimming) if you plan them.
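A minimal sketch of that forecast, assuming compound monthly growth and placeholder per-GB rates for each option. Only the rate differs between the two rows because the workload, and therefore the data, is held constant.

```python
# Minimal sketch: forecast storage cost under the same compound growth rate for
# both options. Starting size, growth rate, and per-GB rates are placeholders.

def storage_cost_forecast(start_gb, monthly_growth_rate, price_per_gb_month, months=12):
    """Return (month, gb, cost) tuples applying compound monthly growth."""
    forecast, gb = [], start_gb
    for month in range(1, months + 1):
        forecast.append((month, round(gb), gb * price_per_gb_month))
        gb *= 1 + monthly_growth_rate
    return forecast

# Same workload, same 6% monthly growth -- only the assumed per-GB rate differs.
rds = storage_cost_forecast(start_gb=500, monthly_growth_rate=0.06, price_per_gb_month=0.115)
aurora = storage_cost_forecast(start_gb=500, monthly_growth_rate=0.06, price_per_gb_month=0.10)

for label, forecast in (("RDS", rds), ("Aurora", aurora)):
    month, gb, cost = forecast[-1]
    print(f"{label} month {month}: ~{gb} GB -> ${cost:,.0f}/mo")
```

The point is not the absolute numbers but that both columns move under identical growth assumptions; if you model a cleanup scenario, apply it to both options too.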
3) Normalize backups/retention (the common surprise)
Treat backup storage as a separate line item and apply churn x retention. Confirm whether non-prod environments are retaining history longer than needed.
Guides: backups and snapshots, and snapshot retention policy.
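A minimal sketch of the churn x retention estimate, assuming a daily churn rate taken from your own change volume and a placeholder per-GB backup price. The free_gb parameter is a stand-in for options that include backup storage up to the database size at no extra charge; check which allowance applies to your configuration.

```python
# Minimal sketch: backup storage as churn x retention, applied to prod and
# non-prod separately. Churn rates, sizes, and the per-GB price are assumed.

def backup_storage_gb(db_size_gb, daily_churn_rate, retention_days):
    """Full snapshot baseline plus incremental churn accumulated over retention."""
    return db_size_gb + db_size_gb * daily_churn_rate * retention_days

def backup_cost(db_size_gb, daily_churn_rate, retention_days, price_per_gb_month, free_gb=0):
    """free_gb models options that include backup storage up to the DB size at no charge."""
    billable = max(backup_storage_gb(db_size_gb, daily_churn_rate, retention_days) - free_gb, 0)
    return billable * price_per_gb_month

# Prod: 7-day retention, backup storage up to the DB size assumed free.
prod = backup_cost(600, daily_churn_rate=0.03, retention_days=7,
                   price_per_gb_month=0.021, free_gb=600)

# Non-prod: 35-day retention left over from a copied prod config (the common surprise).
nonprod = backup_cost(400, daily_churn_rate=0.05, retention_days=35,
                      price_per_gb_month=0.021)

print(f"Prod backup storage:     ${prod:,.0f}/mo")
print(f"Non-prod backup storage: ${nonprod:,.0f}/mo")
```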
4) Map workload shape to pricing levers (the “why” behind the numbers)
- If your pricing model charges per I/O request (common in some Aurora configurations), estimate I/O sensitivity and validate it from metrics where possible (see the sketch after this list).
- If your workload is read-heavy, include replicas/reader capacity explicitly (don’t assume read scaling is free).
- If you need higher availability, model the extra capacity and any cross-AZ behavior consistently in both options.
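For the per-I/O case, a minimal sketch that converts an observed request rate (for example, from CloudWatch metrics) into a monthly figure. The per-million-request price is a placeholder, and this line item disappears entirely in configurations that fold I/O into compute and storage.

```python
# Minimal sketch: estimate the monthly I/O line item from observed request rates.
# The per-million-request price is a placeholder; confirm whether your chosen
# configuration bills I/O at all before including this term.

SECONDS_PER_MONTH = 730 * 3600

def io_cost_per_month(avg_requests_per_sec, price_per_million_requests):
    """Average read+write I/O requests per second, held flat across the month."""
    requests = avg_requests_per_sec * SECONDS_PER_MONTH
    return requests / 1_000_000 * price_per_million_requests

# Assumed: ~1,200 I/O requests/sec on average, taken from monitoring metrics.
print(f"I/O cost: ${io_cost_per_month(1200, price_per_million_requests=0.20):,.0f}/mo")
```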
5) Include workload-driven "high usage" scenarios
- Batch jobs and backfills
- Retry storms during incidents
- Large periodic index rebuilds / vacuum / maintenance windows (a scenario-layering sketch follows this list)
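One way to fold these events into the peak-month scenario is to apply duration x multiplier uplifts on top of the baseline. The baseline figure, event durations, and load multipliers below are illustrative assumptions, not measurements; replace them with what your batch schedules and incident history actually show.

```python
# Minimal sketch: layer "high usage" events onto a baseline month.
# The baseline spend, event durations, and load multipliers are illustrative.

HOURS_PER_MONTH = 730

def peak_month_cost(baseline_monthly_cost, events):
    """events: list of (hours, multiplier) where spend scales with load during the event."""
    hourly_baseline = baseline_monthly_cost / HOURS_PER_MONTH
    uplift = sum(hourly_baseline * hours * (multiplier - 1) for hours, multiplier in events)
    return baseline_monthly_cost + uplift

baseline = 1_800  # assumed baseline compute + I/O spend for the month, in dollars
peak = peak_month_cost(baseline, events=[
    (40, 3.0),  # nightly backfills: ~40 hours at ~3x load
    (6, 5.0),   # retry storm during an incident: ~6 hours at ~5x load
    (12, 2.0),  # index rebuild / vacuum / maintenance window
])
print(f"Baseline month: ${baseline:,.0f}  Peak month: ${peak:,.0f}")
```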
Decision shortcuts (when one option is usually safer)
- If the workload is predictable and you can right-size + control retention, RDS often yields a more stable budget.
- If operational overhead or scaling complexity is the pain point, Aurora can win even if list price is higher (time, reliability, and incident cost are real).
The biggest mistake on this page is comparing engines before the workload is normalized
Engine comparisons fail when teams compare list prices without holding workload shape constant. Baseline compute, peak behavior, retention, storage growth, and backup exposure all need to be normalized before the comparison means anything.
Practical next step
Use the AWS RDS Cost Calculator for a baseline compute + storage + backups model, then layer Aurora-specific scenarios on top.
Validation checklist (so the comparison isn’t theoretical)
- Run both a baseline month and a peak month scenario (batch + incident windows).
- Validate backup storage and growth assumptions from billing/telemetry on the current system.
- Document what changes your answer (I/O-heavy workload, retention policy, peak scaling) for future reviews.
Next steps
If the wider database budget is still unclear, go back to database costs before narrowing into engine choice.