S3 request costs: when GET/PUT/LIST become meaningful

Many teams ignore S3 request costs because storage and egress usually dominate. But requests become meaningful when you have millions of small objects, frequent LIST/HEAD calls, or chatty pipelines.

Request cost drivers to count

  • GET/PUT/LIST: priced as separate request classes; count each high-volume type individually.
  • Small objects: many small files amplify request charges.
  • Lifecycle actions: transitions and restores add requests.

What to model (requests are not one bucket)

  • Read-like: GET/HEAD (often tied to user traffic and cache hit rate)
  • Write-like: PUT/COPY/POST (often tied to ingestion pipelines and churn)
  • Listing/metadata: LIST/Inventory-style scans (can be surprisingly expensive for large namespaces)
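The three classes above can be tallied directly from a stream of operation names. A minimal sketch (the verb-to-class mapping is illustrative; check how your log source actually names operations, e.g. in S3 server access logs a bucket listing shows up as REST.GET.BUCKET rather than a "LIST" verb):

```python
from collections import Counter

# Illustrative mapping of operation verbs to the request classes above.
CLASS_OF = {
    "GET": "read", "HEAD": "read",
    "PUT": "write", "COPY": "write", "POST": "write",
    "LIST": "list",
}

def tally(ops):
    """Count operations per request class; unknown verbs land in 'other'."""
    return Counter(CLASS_OF.get(op, "other") for op in ops)
```

Keeping the classes separate from the start makes it easy to apply per-class pricing later instead of blending everything into one request count.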

How to estimate request fees (practical workflow)

  • Collect request counts from billing exports or access logs (preferred).
  • When you only have RPS, convert with the monthly request calculator.
  • Apply request-class pricing (do not blend classes into one number).
  • Add a peak scenario: deploys, backfills, and incidents often multiply requests.
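The RPS-to-monthly conversion and per-class pricing steps can be sketched as follows. The unit prices here are placeholders, not current AWS pricing; look up the rates for your region and storage class before using the numbers:

```python
# AWS billing commonly assumes 730 hours per month (365.25 days / 12).
SECONDS_PER_MONTH = 730 * 3600  # 2,628,000

def monthly_requests(rps: float) -> float:
    """Convert an average requests-per-second rate to a monthly count."""
    return rps * SECONDS_PER_MONTH

# Placeholder USD prices per 1,000 requests -- verify against the
# pricing page for your region and storage class.
PRICE_PER_1000 = {"read": 0.0004, "write": 0.005, "list": 0.005}

def monthly_cost(rps_by_class: dict) -> float:
    """Apply per-class pricing; never blend classes into one rate."""
    return sum(
        monthly_requests(rps) / 1000 * PRICE_PER_1000[cls]
        for cls, rps in rps_by_class.items()
    )
```

Run the same function twice, once with baseline rates and once with peak rates, to bracket the estimate.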

Tool: S3 request cost calculator.

Worked estimate template (copy/paste)

  • GET/month = average GET RPS * seconds/month (baseline + peak)
  • PUT/month = objects written/month (or PUT RPS converted)
  • LIST/month = scans/month * (requests per scan) for your largest prefixes
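The template above translates directly into code. One concrete input for the LIST line: ListObjectsV2 returns at most 1,000 keys per request, so a full scan of a prefix needs roughly ceil(objects / 1,000) LIST requests:

```python
import math

SECONDS_PER_MONTH = 2_628_000  # 730 hours/month convention

def get_per_month(avg_get_rps: float) -> float:
    """GET/month = average GET RPS * seconds/month."""
    return avg_get_rps * SECONDS_PER_MONTH

def list_requests_per_scan(objects_in_prefix: int, page_size: int = 1_000) -> int:
    """A full LIST scan pages through at most 1,000 keys per request."""
    return math.ceil(objects_in_prefix / page_size)

def list_per_month(scans_per_month: int, objects_in_prefix: int) -> int:
    """LIST/month = scans/month * requests per scan."""
    return scans_per_month * list_requests_per_scan(objects_in_prefix)
```

For example, a daily full scan of a prefix holding 2.5 million objects costs about 2,500 LIST requests per scan, 75,000 per month.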

Common pitfalls (why request bills spike)

  • Retry loops and timeouts that multiply GET/HEAD during incidents.
  • LIST over huge prefixes (full scans) instead of using a manifest or index.
  • Many small objects: high requests per GB stored and per GB transferred.
  • Chatty SDK usage: repeated HEAD calls for metadata that could be cached.
  • Backfills and migrations that look like "one-time work" but run for weeks.

How to validate the estimate

  • Confirm request unit pricing (per 1,000 / 10,000 / 1,000,000) for each request class.
  • Separate request fees from transfer: high GET volume often also implies high egress.
  • Look for a small number of noisy prefixes/endpoints driving most LIST/HEAD.
  • After the first month, reconcile billing usage types against your modeled request classes.

Safe ways to reduce request costs

  • Cache metadata (ETags, sizes) to reduce repeated HEAD calls.
  • Batch work: avoid per-object workflows when your access pattern allows it.
  • Replace full LIST scans with an index, manifest, or inventory output.
  • Fix retry storms (timeouts and missing jitter) before they become a request bill.

FAQ

When do S3 request costs matter?
When you have many small objects, high request rates (GET/PUT), frequent LIST/HEAD calls, or metadata-heavy pipelines. For large objects, storage and egress usually dominate first.
What's the fastest way to estimate requests/month?
Use billing exports or access logs. If you only have RPS, convert with the RPS to Monthly Requests calculator, then apply request-class pricing.
What creates unexpected request spikes?
Retry loops, inventory scans, LIST operations over large prefixes, chatty SDK usage, and many small object workflows (high requests per GB).
How do I reduce request costs safely?
Reduce LIST/HEAD calls, batch operations, cache metadata, and avoid repeated full-prefix scans. Always validate behavior with logs after changes.

Last updated: 2026-02-07