AWS Fargate Cost Calculator
Estimate Fargate-style compute cost from average running tasks, vCPU per task, memory per task, and days per month. Compare baseline vs peak usage with your region's pricing.
Maintained by the CloudCostKit Editorial Team. Last updated: 2026-02-07.
Best next steps
Use this calculator for a first estimate, then validate the answer against the closest guide or companion tool.
Fargate bills are mostly about steady task-hours
This page is strongest when you already know the service runs as persistent tasks. Fargate cost usually does not hinge on one request spike in isolation. It is driven by the monthly accumulation of running tasks, vCPU per task, and memory per task across real uptime windows.
- Main driver: average running tasks, not desired count and not short-lived peak count.
- Sizing lever: task shape determines whether memory or vCPU dominates the bill.
- Operational boundary: this page is compute-only, so adjacent network and observability lines stay separate.
Where Fargate estimates usually drift
Teams often misread Fargate by sizing from peak tasks, copying task definitions that carry too much headroom, or forgetting that always-on background services silently accumulate hours even when user traffic looks quiet.
- Peak-first modeling: using surge count as the monthly baseline makes always-on services look too expensive.
- Over-sized task definitions: memory-heavy task shapes can keep costs elevated even when CPU is mostly idle.
- Mixed workload shapes: cron, worker, and API services should not share one average task assumption.
- Adjacent lines: NAT, egress, ALB, logs, and metrics can rival compute but should not be hidden in this number.
What to validate before you trust the baseline
- Pull average running tasks from ECS metrics or billing-backed history rather than from deployment intent.
- Review whether memory or vCPU is the real bottleneck before changing task count.
- Separate always-on services from burst or schedule-based tasks so the model reflects real runtime behavior.
- Keep transfer, load balancers, and logs as separate cost lines instead of treating Fargate as the whole service bill.
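To make the first checklist item concrete: the number to feed into the calculator is the mean of observed running-task samples over the month, not the deployment's desired count or the peak. A minimal sketch, using hypothetical hourly `RunningTaskCount` samples:

```python
# Sketch: derive "average running tasks" from sampled running-task counts.
# The sample values below are hypothetical, standing in for a month of
# ECS metric history; in practice pull them from metrics or billing data.
samples = [3, 3, 4, 6, 6, 4, 3, 3]   # assumed hourly running-task samples

avg_running = sum(samples) / len(samples)   # the calculator input
peak_running = max(samples)                 # NOT the calculator input

print(avg_running, peak_running)
```

Note that sizing from `max(samples)` here would inflate the monthly baseline by 50%, which is exactly the peak-first drift described above.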
Baseline vs autoscaling burst scenarios
| Scenario | Tasks | vCPU | Memory |
|---|---|---|---|
| Baseline | Average | Configured | Configured |
| Peak | High | Configured | Configured |
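The table above can be turned into numbers with a small sketch. The unit prices below are illustrative placeholders, not quoted rates; substitute your region's $/vCPU-hour and $/GB-hour inputs:

```python
# Sketch: compare baseline vs peak monthly compute cost for one
# Fargate-style service. Rates are assumed example inputs, not real pricing.
HOURS_PER_MONTH = 24 * 30.4   # steady-state uptime assumption
VCPU_RATE = 0.04              # assumed $/vCPU-hour
GB_RATE = 0.0044              # assumed $/GB-hour

def monthly_cost(avg_tasks, vcpu_per_task, gb_per_task):
    vcpu_hours = avg_tasks * vcpu_per_task * HOURS_PER_MONTH
    gb_hours = avg_tasks * gb_per_task * HOURS_PER_MONTH
    return vcpu_hours * VCPU_RATE + gb_hours * GB_RATE

baseline = monthly_cost(avg_tasks=3, vcpu_per_task=0.5, gb_per_task=1)
peak = monthly_cost(avg_tasks=3 * 2.2, vcpu_per_task=0.5, gb_per_task=1)  # 220% burst
print(f"baseline ${baseline:.2f}/month vs peak ${peak:.2f}/month")
```

Because the model is linear in task count, a 220% peak simply costs 2.2x the baseline; the table exists to keep the two scenarios visibly separate rather than averaged together.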
How to reconcile the first real Fargate bill
- Compare billed vCPU-hours and GB-hours to the service-level average task history instead of to peak alerts.
- Check whether the main gap came from task count, task size, or uptime before changing multiple assumptions at once.
Example scenario
- 3 tasks running 24 h/day for 30.4 days at 0.5 vCPU and 1 GB each: estimate the vCPU-hour and GB-hour charges.
- For autoscaling services, use average running tasks, not peak.
- A peak scenario at 220% of baseline models autoscaling bursts during launches.
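The first bullet worked through step by step. The $/unit rates here are hypothetical example inputs; substitute your region's pricing:

```python
# Worked example: 3 tasks, 24 h/day for 30.4 days, 0.5 vCPU and 1 GB each.
tasks, vcpu, mem_gb = 3, 0.5, 1.0
hours = 24 * 30.4                         # 729.6 hours per task per month
vcpu_hours = tasks * vcpu * hours         # 1094.4 vCPU-hours
gb_hours = tasks * mem_gb * hours         # 2188.8 GB-hours

price_vcpu, price_gb = 0.04048, 0.004445  # assumed $/vCPU-hour and $/GB-hour
total = vcpu_hours * price_vcpu + gb_hours * price_gb

print(round(vcpu_hours, 1), round(gb_hours, 1), round(total, 2))
```

With these example rates the compute line lands near $54/month, most of it from vCPU-hours; a memory-heavy task shape would shift that split toward GB-hours.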
Included
- vCPU-hour and memory GB-hour compute charges (modeled with your $/unit inputs).
- A simple steady-state model for average running tasks over a month.
- Baseline vs peak scenario table for autoscaling spikes.
Not included
- Load balancers, data transfer/egress, NAT, and PrivateLink costs (model separately).
- Logs/metrics ingestion and retention costs (model separately).
- Tiering, discounts, and rounding rules unless you reflect them in inputs.
How we calculate
- Hours/month = hours/day × days/month.
- vCPU-hours = tasks × vCPU per task × hours/month.
- GB-hours = tasks × memory (GB) per task × hours/month.
- Total = (vCPU-hours × $/vCPU-hour) + (GB-hours × $/GB-hour).
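The four formulas translate directly into one function. All rates are user-supplied inputs, exactly as on this page; the function name is ours, not part of any SDK:

```python
# Direct translation of the calculator's formulas. Every parameter is a
# user-supplied input; nothing here queries real pricing.
def fargate_estimate(tasks, vcpu_per_task, gb_per_task,
                     hours_per_day, days_per_month,
                     usd_per_vcpu_hour, usd_per_gb_hour):
    hours_per_month = hours_per_day * days_per_month
    vcpu_hours = tasks * vcpu_per_task * hours_per_month
    gb_hours = tasks * gb_per_task * hours_per_month
    total = vcpu_hours * usd_per_vcpu_hour + gb_hours * usd_per_gb_hour
    return {"vcpu_hours": vcpu_hours, "gb_hours": gb_hours, "total": total}

est = fargate_estimate(3, 0.5, 1, 24, 30.4, 0.04, 0.0044)
print({k: round(v, 2) for k, v in est.items()})
```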
FAQ
What should I use for days per month?
Use 30.4 (roughly 365 / 12) for an average month, or the actual day count of the billing period you are reconciling.
How do I model autoscaling?
Enter the average running task count over the month, not the peak; use the peak scenario to see burst cost separately.
Does this include data transfer and logs?
No. Load balancers, transfer/egress, NAT, and log/metric costs are excluded and should be modeled as separate lines.
Disclaimer
Educational use only. Not legal, financial, or professional advice. Results are estimates based on the inputs and assumptions shown on this page. Verify pricing and limits with your providers and documentation.
Last updated: 2026-02-07. Reviewed against CloudCostKit methodology and current provider documentation. See the Editorial Policy.