AWS Lambda vs Fargate Cost Calculator
Compare Lambda (requests + GB-seconds) vs Fargate (vCPU-hours + GB-hours) using your own pricing inputs. See baseline vs peak differences in a compute-only comparison.
Maintained by CloudCostKit Editorial Team. Last updated: 2026-01-28. Editorial policy and methodology.
Best next steps
Use this calculator for a first estimate, then validate the result against the closest related guide or companion tool.
Lambda inputs
Fargate inputs
Comparison
Breakdown (sanity checks)
Scenario presets
Lambda vs Fargate is a burst-shape question first
This comparison works best when you frame the workload by shape before you frame it by price. Lambda is strongest when execution is bursty and idle time is real. Fargate becomes stronger as work turns into steady, always-on capacity that keeps tasks busy for much of the month.
- Lambda lens: request count, duration, and memory per execution.
- Fargate lens: average running tasks, task size, and uptime window.
- Boundary line: background work and persistent concurrency often move the decision toward Fargate.
What usually distorts this comparison
- Cold-start or peak duration is used as the average duration for every Lambda invocation.
- Provisioned concurrency or steady background traffic is ignored, making Lambda look cheaper than real operation.
- Fargate is modeled as always-on even when the service only runs for specific windows.
- Retries, logs, and networking are forgotten even though they can move the real platform decision.
When a hybrid model is more honest than either extreme
- Use Lambda for burst and event fan-out when requests are spiky and idle gaps are meaningful.
- Use Fargate for steady baselines, warm state, or long-running workers that would keep Lambda active continuously.
- Model the baseline and the burst separately if the system genuinely has both shapes.
- Do not force a binary answer when the workload already behaves like two different systems.
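Splitting the model this way is just two small cost terms added together. The sketch below is illustrative only: the task sizes, request volumes, and every unit price are assumptions, not AWS quotes; substitute your own inputs and verified pricing.

```python
# Hybrid sketch: price the steady baseline on Fargate and the spiky
# burst on Lambda, then sum. All prices and volumes are illustrative.
HOURS = 30 * 24  # 720 modeled hours per month

baseline_tasks = 2           # steady Fargate workers, 0.5 vCPU / 1 GB each
burst_requests = 10_000_000  # spiky Lambda invocations per month

fargate_baseline = (baseline_tasks * 0.5 * HOURS * 0.0405    # vCPU-hours
                    + baseline_tasks * 1.0 * HOURS * 0.0045)  # GB-hours

lambda_burst = (burst_requests / 1_000_000 * 0.20             # request cost
                + burst_requests * 0.1 * 0.5 * 0.0000167)     # GB-seconds
                # 0.1 s avg duration x 0.5 GB memory per invocation

hybrid_total = fargate_baseline + lambda_burst
print(round(hybrid_total, 2))  # ~45.99 with these assumed inputs
```

The point is structural, not numeric: each shape gets the pricing model that matches it, and the totals add.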
Baseline vs spike-driven runtime scenarios
| Scenario | Lambda requests | Fargate tasks | Notes |
|---|---|---|---|
| Baseline | Expected | Average | Normal traffic |
| Peak | High | High | Launch or incident |
How to review the first real month
- Check Lambda requests and GB-seconds against billing and verify that the modeled average duration matches observed execution data.
- Check Fargate average task count and runtime against ECS history instead of against deployment peaks.
Next steps
Example scenario
- 50M requests/month at 120 ms and 512 MB vs 3 tasks running 24/7 at 0.5 vCPU and 1 GB.
- For steady, high-throughput workloads Fargate often wins; for spiky, bursty traffic Lambda often wins.
- A 180% peak scenario helps validate incident months.
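The example scenario can be worked through by hand. The unit prices below are illustrative assumptions (roughly resembling a common US region); verify current AWS pricing before relying on the totals.

```python
# Worked version of the example scenario with illustrative prices:
# $0.20 per 1M requests, $0.0000166667 per GB-second,
# $0.04048 per vCPU-hour, $0.004445 per GB-hour. These are
# assumptions, not quotes; check current pricing for your region.
HOURS = 30 * 24  # 720 hours in the modeled month

# Lambda: 50M requests/month at 120 ms average duration and 512 MB
gb_seconds = 50_000_000 * 0.120 * (512 / 1024)        # 3,000,000 GB-s
lambda_cost = 50 * 0.20 + gb_seconds * 0.0000166667   # ~$60.00

# Fargate: 3 tasks at 0.5 vCPU and 1 GB each, running 24/7
fargate_cost = (3 * 0.5 * HOURS * 0.04048             # vCPU-hours
                + 3 * 1.0 * HOURS * 0.004445)         # GB-hours, ~$53.32
```

Under these assumptions the two options land within about 15% of each other, which is exactly when workload shape, not compute price, should decide.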
Included
- Lambda request cost and compute cost (GB-seconds) with optional free tier.
- Fargate vCPU-hour and memory GB-hour compute costs from average running tasks.
- Days/month input to align with billing cycles for Fargate.
- Side-by-side monthly totals and differences.
- Baseline vs peak scenario table for workload spikes.
Not included
- Logs/metrics costs (often meaningful for services with high volume).
- Networking costs (NAT, egress, cross-AZ/cross-region transfer).
- Load balancers, retries/timeouts, and ancillary services.
How we calculate
- Lambda: request cost + requests x average duration (seconds) x memory (GB) x $/GB-second, minus the free tier if enabled.
- Fargate: tasks x vCPU x hours x $/vCPU-hour + tasks x memory (GB) x hours x $/GB-hour.
- Hours = days per month x hours per day.
- Compare the compute-only monthly totals, then add other line items in your overall model.
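The formulas above can be sketched as two small functions. Function names and the free-tier parameters are ours for illustration; the unit prices are whatever you enter in the calculator.

```python
def lambda_monthly_cost(requests, duration_ms, memory_mb,
                        price_per_million_requests, price_per_gb_second,
                        free_requests=0, free_gb_seconds=0):
    """Compute-only Lambda estimate: request cost + GB-second cost."""
    gb_seconds = requests * (duration_ms / 1000) * (memory_mb / 1024)
    billable_requests = max(requests - free_requests, 0)
    billable_gb_s = max(gb_seconds - free_gb_seconds, 0)
    return (billable_requests / 1_000_000 * price_per_million_requests
            + billable_gb_s * price_per_gb_second)


def fargate_monthly_cost(tasks, vcpu_per_task, memory_gb_per_task,
                         days_per_month, hours_per_day,
                         price_per_vcpu_hour, price_per_gb_hour):
    """Compute-only Fargate estimate: vCPU-hours + memory GB-hours."""
    hours = days_per_month * hours_per_day
    return (tasks * vcpu_per_task * hours * price_per_vcpu_hour
            + tasks * memory_gb_per_task * hours * price_per_gb_hour)
```

Both functions return compute-only totals; logs, networking, load balancers, and retries still need their own line items in your overall model.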
FAQ
Is this an official AWS quote?
Why does the winner change by workload shape?
What should I model next after compute?
Related tools
Related guides
Disclaimer
Educational use only. Not legal, financial, or professional advice. Results are estimates based on the inputs and assumptions shown on this page. Verify pricing and limits with your providers and documentation.
Last updated: 2026-01-28. Reviewed against CloudCostKit methodology and current provider documentation. See the Editorial Policy.