Lambda vs Fargate cost: a practical comparison (unit economics)

Lambda vs Fargate cost comparisons work best when you convert both into the same mental model: cost per unit of work. Lambda bills for “requests + GB-seconds”; Fargate bills for “average running tasks × vCPU/GB-hours”. This guide shows how to build that comparison and what usually decides the winner.

Step 0: define the unit of work

  • API workloads: cost per 1M requests at typical duration and payload size.
  • Jobs/queues: cost per 1M messages or cost per job run.
  • Streaming: cost per GB processed or per batch window.
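
The normalization step above can be sketched as a tiny helper. `cost_per_million` is a hypothetical name for illustration, and the example figures are made up:

```python
def cost_per_million(total_monthly_cost: float, monthly_units: float) -> float:
    """Normalize any monthly cost to cost per 1M units of work."""
    return total_monthly_cost / (monthly_units / 1_000_000)

# Example (made-up numbers): $420/month serving 30M requests
print(round(cost_per_million(420.0, 30_000_000), 2))  # 14.0
```

Once both platforms are expressed this way (dollars per 1M requests, per 1M messages, per GB), the comparison stops being apples-to-oranges.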

Step 1: model Lambda with unit economics

  • Requests: invocations per month (often priced per 1M).
  • Compute: GB-seconds = invocations × duration (seconds) × configured memory (GB).
  • Peak scenario: include incident/retry windows and long-tail duration.
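
A minimal sketch of the Lambda model, assuming illustrative rates only (real per-request and per-GB-second prices vary by region and architecture; `lambda_monthly_cost` is a hypothetical helper, not an AWS API):

```python
# Illustrative rates -- assumptions, not current AWS prices.
REQUEST_RATE = 0.20 / 1_000_000   # $ per request (priced per 1M requests)
GB_SECOND_RATE = 0.0000166667     # $ per GB-second (example x86 rate)

def lambda_monthly_cost(invocations: float, avg_duration_s: float,
                        memory_gb: float) -> float:
    request_cost = invocations * REQUEST_RATE
    compute_cost = invocations * avg_duration_s * memory_gb * GB_SECOND_RATE
    return request_cost + compute_cost

# 10M invocations/month, 200 ms average duration, 512 MB configured memory
print(round(lambda_monthly_cost(10_000_000, 0.2, 0.5), 2))  # 18.67
```

For the peak scenario, rerun the same function with long-tail duration and retry-inflated invocation counts rather than averages.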

Step 2: model Fargate from average running tasks

  • vCPU-hours = avg tasks × vCPU per task × hours/month
  • GB-hours = avg tasks × memory GB per task × hours/month
  • Schedules: non-prod and batch often run far less than 730 hours/month
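
The same sketch for Fargate, again with assumed on-demand rates (region-specific in practice) and a hypothetical helper name:

```python
# Illustrative on-demand rates -- assumptions, not current AWS prices.
VCPU_HOUR_RATE = 0.04048   # $ per vCPU-hour
GB_HOUR_RATE = 0.004445    # $ per GB-hour

def fargate_monthly_cost(avg_tasks: float, vcpu_per_task: float,
                         mem_gb_per_task: float,
                         hours_per_month: float = 730) -> float:
    vcpu_hours = avg_tasks * vcpu_per_task * hours_per_month
    gb_hours = avg_tasks * mem_gb_per_task * hours_per_month
    return vcpu_hours * VCPU_HOUR_RATE + gb_hours * GB_HOUR_RATE

# 3 average running tasks, 0.5 vCPU / 1 GB each, always on
print(round(fargate_monthly_cost(3, 0.5, 1.0), 2))  # 54.06
```

For scheduled non-prod or batch workloads, lower `hours_per_month` instead of task count; the savings scale linearly.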

What usually decides the winner

  • Burstiness: if the workload is idle most of the time, Lambda often wins by not paying a baseline.
  • Always-on behavior: if you need constant capacity (steady API traffic), Fargate’s predictable baseline can win.
  • Cold starts and tail latency: cold-start mitigation (like provisioned concurrency) can change Lambda economics.
  • Operational model: long-lived services with many connections often fit Fargate better; event jobs often fit Lambda better.
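
The burstiness trade-off above can be made concrete with a break-even sketch: at what monthly invocation volume does Lambda's pay-per-use cost cross one always-on Fargate task? All rates and sizes are assumed for illustration; plug in your own:

```python
# All rates assumed for illustration -- not current AWS prices.
LAMBDA_REQ_RATE = 0.20 / 1_000_000
LAMBDA_GBS_RATE = 0.0000166667
FARGATE_VCPU_HOUR = 0.04048
FARGATE_GB_HOUR = 0.004445

# One always-on 0.5 vCPU / 1 GB Fargate task over a 730-hour month
baseline = 730 * (0.5 * FARGATE_VCPU_HOUR + 1.0 * FARGATE_GB_HOUR)

# Lambda cost per invocation at 200 ms duration / 512 MB memory
per_invocation = LAMBDA_REQ_RATE + 0.2 * 0.5 * LAMBDA_GBS_RATE

breakeven = baseline / per_invocation
print(f"Break-even: {breakeven / 1e6:.1f}M invocations/month")
```

Below that volume, Lambda is cheaper under these assumptions; above it, the always-on baseline wins. Burstiness pushes the answer further toward Lambda, because the Fargate baseline bills even when idle.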

Include the non-compute bills (they matter in both)

  • Logs: ingestion GB/day + retention; verbose logs can dominate.
  • Load balancers: always-on baseline for many service architectures.
  • Networking: NAT processed GB, cross-AZ transfer, and internet egress.
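
As a rough illustration of how logs can dominate, here is a sketch with assumed CloudWatch-style rates (check current pricing; `log_monthly_cost` and its retention approximation are hypothetical):

```python
# Assumed CloudWatch-style rates for illustration -- check current pricing.
INGEST_RATE = 0.50    # $ per GB ingested
STORAGE_RATE = 0.03   # $ per GB-month stored

def log_monthly_cost(gb_per_day: float, retention_days: int) -> float:
    ingest = gb_per_day * 30 * INGEST_RATE
    # Rough approximation: retained volume capped by the retention window
    retained_gb = gb_per_day * min(retention_days, 30)
    return ingest + retained_gb * STORAGE_RATE

# 5 GB/day of verbose logs, 30-day retention
print(round(log_monthly_cost(5, 30), 2))  # 79.5
```

At these assumed rates, 5 GB/day of logging costs more per month than the small compute examples above, which is why verbose logging so often turns up as a top cost driver.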

Common pitfalls

  • Comparing Lambda with a single duration number and missing long-tail/incident windows.
  • Forgetting provisioned concurrency cost when you enable it for latency.
  • Budgeting Fargate from peak tasks instead of average tasks (you pay for average running capacity, idle included).
  • Ignoring logs and transfer until they become top drivers.
  • Not validating with billing after the first month (assumptions drift quickly).

How to validate after you choose

  • For Lambda: compare billed GB-seconds and request count to the modeled unit economics.
  • For Fargate: compare billed vCPU/GB hours to the modeled average running tasks.
  • For both: verify logs, LB, and transfer lines match your assumptions.

Related guides

AWS Fargate pricing (cost model + pricing calculator)
A practical Fargate pricing guide and calculator companion: what drives compute cost (vCPU-hours + GB-hours), how to estimate average running tasks, and the non-compute line items that usually matter (logs, load balancers, data transfer).
Fargate vs EC2 cost: how to compare compute, overhead, and hidden line items
A practical Fargate vs EC2 cost comparison: normalize workload assumptions, compare unit economics (vCPU/memory-hours vs instance-hours), and include the line items that change the answer (idle capacity, load balancers, logs, transfer).
EC2 cost estimation: a practical model (compute + the hidden line items)
A practical EC2 cost estimation guide: model instance-hours with uptime and blended rates, then add the hidden line items that often dominate (EBS, snapshots, load balancers, NAT/egress, logs).
ECS vs EKS cost: a practical checklist (compute, overhead, and add-ons)
Compare ECS vs EKS cost with a consistent checklist: compute model, platform overhead, scaling behavior, and the line items that often dominate (load balancers, logs, data transfer).
Fargate vs EKS cost: what usually decides the winner
A practical Fargate vs EKS cost comparison: normalize workload assumptions, compare task-hours vs node-hours, include EKS fixed overhead (cluster fee + add-ons), and account for the line items that dominate both (LBs, logs, transfer).
API Gateway vs ALB vs CloudFront cost: what to compare (requests, transfer, add-ons)
A practical cost comparison of API Gateway, Application Load Balancer (ALB), and CloudFront. Compare request pricing, data transfer, caching impact, WAF, logs, and the hidden line items that change the answer.

FAQ

When does Lambda tend to be cheaper than Fargate?
When workloads are spiky or low-volume, because you pay per invocation and only for execution time. Lambda can be especially cost-effective for event-driven and intermittent tasks.
When does Fargate tend to be cheaper than Lambda?
When workloads are steady and always-on, because you run a predictable baseline of tasks and avoid per-invocation overhead. Long-running Fargate tasks also stay warm, so services avoid Lambda-style cold starts.
What should I compare besides compute cost?
Latency behavior (cold starts), scaling model, and the adjacent bills: logs, load balancers, and networking transfer. Those often change the decision more than compute rates.

Last updated: 2026-01-27