Fargate vs EKS cost: what usually decides the winner
Start with a calculator if you need a first-pass estimate, then use this guide to validate the assumptions and catch the billing traps.
This page compares Fargate and EKS as orchestration platforms; it is not the Fargate bill-boundary page. The job here is to compare pay-per-task with pay-per-node plus cluster overhead, under realistic consolidation and headroom assumptions.
Fargate vs EKS cost is mostly a trade-off between pay-per-task and pay-per-node. Fargate bills for the vCPU and memory of running tasks; EKS usually means paying for a fleet of nodes plus a fixed per-cluster fee. This guide shows what typically decides the winner and how to compare the two consistently.
If you still need to separate Fargate compute from load balancers, logs, and networking before comparing with EKS, go back to the Fargate pricing guide first and lock the bill boundary there.
Step 0: normalize assumptions
- Average compute demand: typical vCPU and memory used (not peak-only).
- Burstiness: how often you hit peak and how quickly you scale down.
- Service count: number of workloads sharing the platform (consolidation changes EKS economics).
- Non-compute: load balancers, logs, and transfer baseline.
Step 1: model Fargate as vCPU/GB hours
- vCPU-hours = avg running tasks × vCPU per task × hours/month
- GB-hours = avg running tasks × memory GB per task × hours/month
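The two Step 1 formulas can be sketched directly. The per-hour rates below are illustrative placeholders, not current AWS prices; substitute your region's published Fargate rates.

```python
# Sketch of the Step 1 Fargate model. Rates are illustrative assumptions,
# not current AWS prices -- look up your region before budgeting.
VCPU_RATE = 0.04048   # $ per vCPU-hour (assumed)
GB_RATE   = 0.004445  # $ per GB-hour (assumed)

def fargate_monthly_cost(avg_tasks, vcpu_per_task, mem_gb_per_task, hours=730):
    # Use *average* running tasks, not peak (see the pitfalls below).
    vcpu_hours = avg_tasks * vcpu_per_task * hours
    gb_hours = avg_tasks * mem_gb_per_task * hours
    return vcpu_hours * VCPU_RATE + gb_hours * GB_RATE

# 10 average tasks at 0.5 vCPU / 1 GB each
print(round(fargate_monthly_cost(10, 0.5, 1.0), 2))
```

The `hours=730` default approximates one month of always-on operation; scheduled or bursty workloads should plug in actual running hours instead.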
Step 2: model EKS as nodes + fixed cluster overhead
- Control plane: fixed hourly fee per cluster (matters for many small clusters).
- Nodes: node-hours driven by requests-based sizing, headroom, and imperfect packing.
- Add-ons: ingress, logging, monitoring agents consume per-node resources and raise the baseline.
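The Step 2 model can be sketched the same way. The node shape, node rate, and cluster fee below are assumptions for illustration; headroom and packing efficiency should come from your own cluster data.

```python
import math

# Sketch of the Step 2 EKS model. Node shape, node rate, and cluster fee
# are illustrative assumptions; replace with your instance type and region.
CLUSTER_FEE_PER_HOUR = 0.10   # assumed fixed control-plane fee per cluster
NODE_VCPU, NODE_GB = 4, 16    # assumed node shape
NODE_RATE = 0.154             # $ per node-hour (assumed)

def eks_monthly_cost(total_vcpu_req, total_gb_req,
                     headroom=1.25, packing_efficiency=0.8, hours=730):
    # Inflate requests by headroom, deflate node capacity by packing losses.
    nodes_for_cpu = (total_vcpu_req * headroom) / (NODE_VCPU * packing_efficiency)
    nodes_for_mem = (total_gb_req * headroom) / (NODE_GB * packing_efficiency)
    nodes = math.ceil(max(nodes_for_cpu, nodes_for_mem))
    return nodes * NODE_RATE * hours + CLUSTER_FEE_PER_HOUR * hours
```

Note the `ceil`: node count is quantized, so small increases in requests can add a whole node, which is one way fragmentation shows up in the bill.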
What usually decides the winner
- Workload consolidation: if many workloads share nodes efficiently, EKS often wins.
- Burstiness and schedules: if workloads are spiky or scheduled, Fargate often wins by avoiding idle.
- Cluster sprawl: many EKS clusters amplify fixed fees and duplicate add-ons; Fargate avoids per-cluster overhead.
- Headroom and fragmentation: strict topology rules, pod caps, and anti-affinity can force extra nodes on EKS.
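The cluster-sprawl point is easy to quantify: the fixed per-cluster fee is amortized across however many services share the cluster. The fee below is an assumed figure for the sketch.

```python
# Illustrative: how a fixed cluster fee amortizes with consolidation.
# The monthly fee is an assumption (e.g. $0.10/hr * 730 hrs), not a quote.
CLUSTER_FEE_MONTHLY = 73.0

def fixed_fee_per_service(services_per_cluster):
    return CLUSTER_FEE_MONTHLY / services_per_cluster

print(fixed_fee_per_service(1))   # one small cluster per service
print(fixed_fee_per_service(20))  # twenty services consolidated
```

At one service per cluster the fee lands entirely on that service; at twenty services it becomes noise, which is why consolidation swings the comparison toward EKS.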
Line items that dominate both (don’t skip)
- Load balancers: count always-on LBs and capacity units.
- Logs/metrics: ingestion GB/day, retention, and series growth (cardinality).
- Networking: NAT processed GB, cross-AZ transfer, and internet egress.
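These shared lines can be added to either model with the same helper. All rates below are illustrative assumptions; check your region's load balancer, log-ingestion, and NAT pricing.

```python
# Sketch: non-compute lines shared by both platforms. All rates are
# illustrative assumptions, not AWS prices.
def non_compute_monthly(lb_count, log_gb_per_day, nat_gb_per_month,
                        lb_rate_hr=0.0225, log_ingest_rate=0.50,
                        nat_process_rate=0.045, hours=730, days=30):
    lbs = lb_count * lb_rate_hr * hours          # always-on load balancers
    logs = log_gb_per_day * days * log_ingest_rate  # ingestion only; add retention
    nat = nat_gb_per_month * nat_process_rate    # NAT processed GB
    return lbs + logs + nat
```

Because these lines are roughly platform-independent, leaving them out of both models skews the comparison less than leaving them out of one, but they still decide whether compute savings matter at all.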
Common pitfalls
- Comparing only compute and ignoring observability and networking lines.
- Assuming perfect EKS packing (real clusters need headroom and have fragmentation).
- Running many small EKS clusters for non-prod and paying fixed fees everywhere.
- Using peak task count for Fargate budgeting instead of average.
- Ignoring “one LB per service” as a baseline cost driver.
How to validate after the first month
- Compare billed vCPU/GB hours (Fargate) or node-hours (EKS) to the model.
- Check whether average capacity matches assumptions (not just peak).
- Confirm the top 3 cost drivers and decide which lever to pull first.
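The month-1 check above can be mechanized as a simple variance report. The line-item keys and the 15% drift threshold are assumptions for the sketch.

```python
# Sketch of the month-1 validation: compare billed usage to modeled usage
# and flag line items that drifted. Keys and threshold are assumptions.
def variance_report(modeled, billed, threshold=0.15):
    report = {}
    for item, model_value in modeled.items():
        actual = billed.get(item, 0.0)
        drift = (actual - model_value) / model_value if model_value else float("inf")
        if abs(drift) > threshold:
            report[item] = round(drift, 2)
    return report

modeled = {"vcpu_hours": 3650, "gb_hours": 7300, "nat_gb": 1000}
billed  = {"vcpu_hours": 4300, "gb_hours": 7400, "nat_gb": 1600}
print(variance_report(modeled, billed))  # flags vcpu_hours and nat_gb
```

Whatever survives the threshold is your shortlist of top cost drivers, and the largest positive drift is usually the lever to pull first.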
Related reading
Once the platform choice is clear, use the optimization guide for production actions rather than turning this comparison page into a savings checklist.