S3 Glacier retrieval time: how long restores take by tier
Retrieval time is a workflow constraint, not just a pricing detail. Plan restore tiers and job timing so your analytics, audits, or reprocessing pipelines are not blocked by long restore windows.
What "retrieval time" means
- Restore request submitted: you ask for objects to be made readable.
- Restore completes: objects become readable for a temporary window.
- Restore window: how long the restored copy is kept available.
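The three checkpoints above can be sketched as a tiny timeline model. This is an illustration, not an AWS API; it assumes (per AWS's documented behavior) that the restore window is counted in whole days from when the restore completes.

```python
from datetime import datetime, timedelta

def restore_expiry(completed_at: datetime, restore_days: int) -> datetime:
    """Return when a restored copy stops being readable.

    Assumes the requested number of days is counted from restore
    completion, which is how S3 applies the Days parameter.
    """
    return completed_at + timedelta(days=restore_days)

# A restore that completes Jan 30 at 08:00 with Days=7 stays readable
# until Feb 6 at 08:00:
print(restore_expiry(datetime(2026, 1, 30, 8, 0), 7))
```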
Tier-driven latency (plan with ranges)
- Expedited tiers: fastest, but more expensive and sometimes capacity-limited.
- Standard tiers: balanced latency and cost for typical restores.
- Bulk tiers: cheapest, but slowest; best for large backfills.
Restores from Deep Archive are slower than from Glacier Flexible Retrieval at the same tier name, and Deep Archive does not offer an Expedited tier at all. Always plan with a buffer, not a single exact duration.
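One way to make "plan with ranges" concrete is to budget the documented upper bound times a safety factor. The numbers below are illustrative upper bounds drawn from AWS's published typical retrieval times at the time of writing; verify them against current AWS documentation before relying on them.

```python
# Illustrative worst-case retrieval times in hours (verify against
# current AWS docs; note Deep Archive has no Expedited tier).
TYPICAL_MAX_HOURS = {
    ("glacier_flexible", "expedited"): 5 / 60,  # minutes-scale
    ("glacier_flexible", "standard"): 5,
    ("glacier_flexible", "bulk"): 12,
    ("deep_archive", "standard"): 12,
    ("deep_archive", "bulk"): 48,
}

def planning_hours(storage_class: str, tier: str, buffer_factor: float = 1.5) -> float:
    """Plan with the high end of the range plus a buffer, never the midpoint."""
    return TYPICAL_MAX_HOURS[(storage_class, tier)] * buffer_factor

# Deep Archive bulk with a 1.5x buffer: budget a full 3 days.
print(planning_hours("deep_archive", "bulk"))
```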
Operational factors that slow restores
- Concurrency limits: submitting too many restore jobs at once leaves the excess queued behind service limits.
- Object count: many small objects create more requests and coordination overhead.
- Job scheduling: restore windows that overlap with peak operations can delay downstream processing.
Use-case mapping (pick a tier by workflow)
- Forensics or incident response: pick faster tiers for time-sensitive investigations.
- Monthly audit backfills: use standard or bulk tiers and schedule restores ahead of deadlines.
- Disaster recovery drills: model the slowest acceptable tier and verify end-to-end timing.
Latency planning checklist
- Define the deadline for the downstream job, not just the restore.
- Account for staging time (copy to hot storage, indexing, or ETL).
- Split large restores into batches so failures do not block the whole job.
- Track restore completion metrics and alert if a restore window is missed.
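The checklist above can be turned into two small helpers: one that works backward from the downstream deadline to the latest safe submission time, and one that splits a large restore into retryable batches. Both are dependency-free sketches; the parameter names are illustrative.

```python
from datetime import datetime, timedelta

def latest_submit_time(deadline: datetime, restore_hours: float,
                       staging_hours: float, buffer_hours: float = 2.0) -> datetime:
    """Work backward from the downstream job's deadline, not just the restore.

    Total lead time = worst-case restore + staging (copy, indexing, ETL)
    + an explicit buffer for delays and retries.
    """
    lead = timedelta(hours=restore_hours + staging_hours + buffer_hours)
    return deadline - lead

def batches(keys: list, size: int) -> list:
    """Split a large restore so one failed batch does not block the whole job."""
    return [keys[i:i + size] for i in range(0, len(keys), size)]

# A 12h worst-case restore plus 4h of staging and a 2h buffer means
# submitting no later than 18 hours before the deadline:
print(latest_submit_time(datetime(2026, 2, 1, 12, 0), 12, 4))
```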
Example timeline (how delays accumulate)
A bulk restore might take hours to begin, then additional time to stage and rehydrate. If your pipeline also performs checksum validation or transfers the restored data into another system, the total elapsed time can be significantly longer than the raw restore time. Budget the total pipeline duration, not just the initial restore.
Restore window planning (avoid surprises)
- Define a restore window that covers processing time plus a buffer.
- If you need repeated access, consider keeping a temporary copy in a warmer tier.
- Document who triggers restores and how long data stays available.
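Since the restore duration is requested in whole days, a small helper can convert "processing time plus buffer" into the Days value, always rounding up. This is a sketch; the 24-hour default buffer is an assumption, not a recommendation from AWS.

```python
import math

def restore_window_days(processing_hours: float, buffer_hours: float = 24.0) -> int:
    """Convert needed availability into the whole-day restore duration.

    S3 restore requests specify the window in days, so round up and
    never request less than one day.
    """
    return max(1, math.ceil((processing_hours + buffer_hours) / 24))

# 30 hours of processing plus a 24h buffer needs a 3-day window:
print(restore_window_days(30))
```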
Automation tips
- Queue restore jobs and limit concurrency to avoid throttling.
- Tag restores with purpose (audit, backfill, incident) for reporting.
- Alert on stalled restores so downstream jobs do not fail silently.
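The queueing and tagging tips can be sketched with a bounded thread pool. To keep the example dependency-free, `submit_one` is any callable that issues a single restore request; in practice it would wrap a real client call such as boto3's `restore_object`. The metadata fields are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def submit_restores(keys, submit_one, max_workers: int = 4) -> dict:
    """Submit restore requests with bounded concurrency to avoid throttling.

    `submit_one` is a callable taking one key and issuing the restore
    request (e.g. a wrapper around boto3's s3.restore_object).
    """
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(submit_one, key): key for key in keys}
        for future, key in futures.items():
            results[key] = future.result()
    return results

def job_metadata(key: str, purpose: str) -> dict:
    """Tag each restore with its purpose (audit, backfill, incident) for reporting."""
    return {"key": key, "purpose": purpose, "status": "submitted"}

# Example with a stand-in submitter:
print(submit_restores(["a/1", "a/2"], lambda k: f"requested:{k}", max_workers=2))
```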
Signals to monitor during restores
- Restore job status and completion timestamps.
- Bytes restored per hour to detect slowdowns.
- Downstream queue backlog (ETL, validation, or export tasks).
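The bytes-per-hour signal above is easy to compute from periodic samples of cumulative bytes restored. This is a minimal sketch; the 0.5 stall threshold is an assumption you should tune to your workload.

```python
def restore_rate(samples) -> float:
    """Bytes restored per hour, from (hours_elapsed, cumulative_bytes) samples."""
    (t0, b0), (t1, b1) = samples[0], samples[-1]
    return (b1 - b0) / (t1 - t0)

def is_stalled(samples, expected_per_hour: float, threshold: float = 0.5) -> bool:
    """Flag the restore if throughput drops below a fraction of expectations."""
    return restore_rate(samples) < expected_per_hour * threshold

# 200 bytes over 2 hours = 100 bytes/hour; stalled if we expected 400/hour.
print(is_stalled([(0, 0), (2, 200)], expected_per_hour=400))
```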
Restore workflow playbook
- Submit restore requests in batches and record job metadata.
- Wait for completion, then copy or process data in a warm tier.
- Expire temporary copies after validation to avoid long-term storage creep.
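The "wait for completion" step usually means polling HeadObject and inspecting the `Restore` value, which looks like `ongoing-request="false", expiry-date="Fri, 21 Dec 2012 00:00:00 GMT"`. A sketch of a parser for that format, under the assumption that the header shape matches AWS's documented examples:

```python
import re

def parse_restore_header(restore):
    """Interpret the Restore value from an S3 HeadObject response.

    Returns (state, expiry) where state is 'not_requested',
    'in_progress', or 'completed', and expiry is the raw expiry-date
    string when present.
    """
    if restore is None:
        return ("not_requested", None)
    if 'ongoing-request="true"' in restore:
        return ("in_progress", None)
    match = re.search(r'expiry-date="([^"]+)"', restore)
    return ("completed", match.group(1) if match else None)

print(parse_restore_header(
    'ongoing-request="false", expiry-date="Fri, 21 Dec 2012 00:00:00 GMT"'))
```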
How to choose a tier (quick decision flow)
- If you need same-day access, plan for faster tiers and budget the peak month.
- If restores are rare and not urgent, use bulk tiers to reduce cost.
- If you are unsure, model two scenarios and compare cost vs latency impact.
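The decision flow above amounts to "cheapest tier whose worst case still beats the deadline." A sketch, reusing illustrative upper bounds (verify against current AWS docs):

```python
def choose_tier(hours_until_needed: float, storage_class: str = "glacier_flexible") -> str:
    """Pick the cheapest tier whose worst-case time still meets the deadline.

    Upper bounds are illustrative; Deep Archive has no Expedited tier,
    so urgent Deep Archive restores simply need more lead time.
    """
    options = {  # cheapest first: (tier, illustrative worst-case hours)
        "glacier_flexible": [("bulk", 12), ("standard", 5), ("expedited", 0.1)],
        "deep_archive": [("bulk", 48), ("standard", 12)],
    }
    for tier, max_hours in options[storage_class]:
        if hours_until_needed >= max_hours:
            return tier
    # Deadline is tighter than every tier: return the fastest available,
    # knowing it may still miss.
    return options[storage_class][-1][0]

print(choose_tier(24))  # plenty of lead time: bulk is cheapest and fits
```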
Plan for a restore window
- Restores are temporary; decide up front how long the data must stay readable.
- Confirm downstream jobs finish within the restore window, with margin for retries.
- For large backfills, stage restores in waves: prefetch the next batch while processing the current one.
Related guides
Estimate Glacier/Deep Archive retrieval volume (GB and requests)
How to estimate archival retrieval costs: model GB restored per month and the number of objects retrieved (requests), plus common drivers like restores, rehydration, and analytics.
Glacier/Deep Archive cost optimization (reduce restores and requests)
A practical playbook to reduce archival storage costs: reduce restores, reduce small-object request volume, and avoid minimum duration penalties. Includes validation steps and related tools.
S3 Glacier Pricing & Cost Guide (storage, retrieval, Deep Archive)
Practical S3 Glacier cost model: storage GB-month, retrieval volume and requests, and minimum duration fees.
S3 Glacier retrieval pricing per GB and per request
A practical breakdown of Glacier retrieval pricing: cost per GB retrieved plus request fees, with guidance for small-object amplification and tier selection.
Aurora pricing (what to include): compute, storage, I/O, and backups
A practical checklist for estimating Aurora costs: instance hours (or ACUs), storage growth, I/O-heavy workloads, backups/retention, and the line items that commonly surprise budgets.
Aurora Serverless v2 pricing: how to estimate ACUs and avoid surprise bills
A practical way to estimate Aurora Serverless v2 costs: ACU-hours, storage GB-month, backups/retention, and how to model peaks so your estimate survives real traffic.
FAQ
What drives Glacier retrieval time the most?
The retrieval tier (expedited/standard/bulk) and the archive class (Glacier vs Deep Archive) are the primary drivers. Job volume and concurrency limits can also slow restores.
Should I plan workflows around exact time windows?
Use ranges, not exact minutes. Retrieval time varies by tier, class, and load. Build workflows that tolerate delays.
How do I pick a retrieval tier?
Pick the cheapest tier that still meets your workflow needs. If retrieval is time-sensitive, model it as a peak month and budget for faster tiers.
Last updated: 2026-01-30