Fargate vs EC2 cost: how to compare compute, overhead, and hidden line items
Start with a calculator if you need a first-pass estimate, then use this guide to validate the assumptions and catch the billing traps.
This page compares the two host models, pay-per-task (Fargate) against pay-for-instance (EC2), under realistic idle, packing, storage, and operations assumptions. It does not define what counts inside the Fargate bill; that boundary is covered separately.
Fargate vs EC2 cost is mostly a question of idle capacity and packing efficiency. Fargate bills the resources of running tasks; EC2 bills the whole instance whether tasks use it or not. Use this checklist to compare with consistent assumptions.
If you still need to decide what belongs inside the Fargate bill before comparing platforms, start with the Fargate pricing guide and lock the bill boundary first.
Step 0: normalize assumptions (otherwise you compare apples to oranges)
- Average demand: typical vCPU and memory actually used (not peak-only).
- Burstiness: how often you hit peak and how quickly you scale up/down.
- Scheduling: always-on vs business-hours vs batch windows.
- Non-compute: load balancers, logs, and data transfer baseline.
If you only know traffic, start with RPS → monthly requests and validate later with metrics.
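If traffic is all you know, the conversion is simple arithmetic. A minimal sketch, where the average RPS is an assumed placeholder to replace with your own numbers:

```python
# Rough traffic-to-volume conversion; AVG_RPS is an illustrative assumption.
AVG_RPS = 50                # assumed average requests per second
HOURS_PER_MONTH = 730       # ~365.25 days * 24 h / 12 months

monthly_requests = AVG_RPS * 3600 * HOURS_PER_MONTH
print(f"{monthly_requests:,.0f} requests/month")
```

Treat the result as a first-pass estimate only; validate it against real metrics once the service is running.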
Step 1: model Fargate as task resources × hours
- vCPU-hours = avg running tasks × vCPU per task × hours/month
- GB-hours = avg running tasks × memory (GB) per task × hours/month
Tool: Fargate vs EC2 calculator
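The two formulas above translate directly into code. A minimal sketch; the rates, task count, and task sizes below are illustrative assumptions, not quoted prices, so check current regional pricing before relying on the output:

```python
# Fargate monthly compute sketch: task resources x hours x rate.
VCPU_RATE = 0.04048    # assumed $/vCPU-hour (illustrative)
GB_RATE = 0.004445     # assumed $/GB-hour (illustrative)
HOURS = 730            # hours in an average month

avg_tasks = 6          # average concurrently running tasks (assumed)
vcpu_per_task = 1
mem_gb_per_task = 2

vcpu_hours = avg_tasks * vcpu_per_task * HOURS
gb_hours = avg_tasks * mem_gb_per_task * HOURS
monthly_cost = vcpu_hours * VCPU_RATE + gb_hours * GB_RATE
print(f"vCPU-hours={vcpu_hours}, GB-hours={gb_hours}, ~${monthly_cost:.2f}/month")
```

Note that `avg_tasks` is the time-weighted average of running tasks, not the peak task count; using peak here is the most common way this estimate goes wrong.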
Step 2: model EC2 as instances × hours (then adjust for utilization)
EC2 cost depends on how well you pack tasks. If you run large instances at 20% utilization, EC2 “loses” even if the per-vCPU rate is cheaper.
- Instance-hours = instance count × hours/month
- Effective utilization: subtract overhead and fragmentation; don’t assume 100% packing.
- Attached storage: EBS volumes and snapshots are separate line items.
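The utilization adjustment above is the step most comparisons skip. A minimal sketch that prices EC2 per *usable* vCPU-hour rather than per raw instance-hour; the instance rate, shape, and utilization figure are illustrative assumptions:

```python
# EC2 monthly sketch with an effective-utilization adjustment.
INSTANCE_RATE = 0.1536   # assumed $/hour for a 4 vCPU / 16 GB instance
VCPUS_PER_INSTANCE = 4
HOURS = 730

instances = 3
utilization = 0.55       # effective packing after headroom and fragmentation

instance_hours = instances * HOURS
monthly_cost = instance_hours * INSTANCE_RATE
usable_vcpu_hours = instances * VCPUS_PER_INSTANCE * HOURS * utilization
per_usable_vcpu_hour = monthly_cost / usable_vcpu_hours
print(f"~${monthly_cost:.2f}/month, ${per_usable_vcpu_hour:.4f} per usable vCPU-hour")
```

The per-usable-vCPU-hour figure is what you should compare against Fargate's per-task rate; remember to add EBS volumes and snapshots as separate line items on top.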
What usually decides the winner
- Idle: if your average demand is far below peak, Fargate often wins by not paying for idle instances.
- Packing: if you can keep instances hot (high utilization), EC2 often wins.
- Ops overhead: EC2 requires capacity management (AMIs, scaling groups, patching); if that causes overprovisioning, cost rises.
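The idle-vs-packing trade-off above reduces to a break-even calculation: at what effective utilization does EC2's per-vCPU-hour cost match Fargate's? A back-of-envelope sketch, where every rate (and the 2 GB-per-vCPU memory shape) is an illustrative assumption:

```python
# Break-even EC2 utilization vs Fargate; all rates are illustrative assumptions.
FARGATE_VCPU_HOUR = 0.04048 + 2 * 0.004445  # vCPU rate + assumed 2 GB memory per vCPU
EC2_VCPU_HOUR = 0.0384                      # assumed instance rate / vCPU count

# EC2 effective cost is EC2_VCPU_HOUR / utilization, so the two match when:
breakeven_utilization = EC2_VCPU_HOUR / FARGATE_VCPU_HOUR
print(f"EC2 wins above ~{breakeven_utilization:.0%} effective utilization")
```

With these assumed rates the break-even lands somewhere in the 70-80% range, which is why sustained high packing tends to favor EC2 and spiky, mostly-idle demand tends to favor Fargate.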
Don’t ignore the non-compute bills
- Load balancers: always-on hourly baseline plus capacity units.
- Logs: ingestion GB/day + retention (often bigger than expected).
- Networking: NAT processed GB and cross-AZ transfer from chatty services.
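These baseline items add up before any compute runs. A minimal sketch that totals them; every rate and volume below is an assumption to replace with your own metrics and current pricing:

```python
# Non-compute monthly baseline; all figures are illustrative assumptions.
HOURS = 730

alb_baseline = 0.0225 * HOURS     # assumed load-balancer hourly rate (excl. capacity units)
logs_gb_per_day = 2               # assumed ingestion volume
logs = logs_gb_per_day * 30 * 0.50   # assumed $/GB ingested (excl. retention)
nat = 100 * 0.045                 # assumed 100 GB NAT-processed at an assumed $/GB

non_compute = alb_baseline + logs + nat
print(f"~${non_compute:.2f}/month before any compute")
```

Run this total against your compute estimate; if it is the same order of magnitude, the platform choice matters less than fixing logs and transfer first.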
Common pitfalls
- Comparing peak capacity instead of average (budgets follow average).
- Assuming perfect EC2 packing and ignoring fragmentation and headroom.
- Ignoring EBS and snapshot costs for EC2-backed clusters.
- Optimizing compute and then discovering logs/transfer are the top drivers.
- Leaving non-prod always-on and attributing the waste to “service pricing”.
How to validate after you choose
- Measure average running tasks vs planned (Fargate) or average instance utilization (EC2).
- In billing, confirm compute is not being dwarfed by LB/log/transfer lines.
- After deploys and incidents, check whether you scale down quickly (or pay for a long tail of idle capacity).
Related guides
Once the platform choice is clear, move to the optimization guide for production levers instead of stretching this comparison page into an action checklist.