In the world of cloud storage, not all data is accessed equally. Some files are needed daily, while others—like backups, archives, and compliance records—may sit untouched for months or even years. Storing such data in standard, high-availability storage classes like Amazon S3 Standard would be unnecessarily expensive. That’s where Amazon S3 Glacier comes in—a suite of low-cost storage classes designed for long-term retention and infrequent access.
Choose Glacier Flexible Retrieval if you might need data within hours or want retrieval flexibility.
Choose Deep Archive for data you’ll almost never access (e.g., 7+ year compliance logs).
Avoid both for active data (use S3 Standard/Intelligent-Tiering instead).
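In practice, objects can be written straight into a Glacier class at upload time; there is no need to land them in Standard first. Here is a minimal boto3 sketch (the bucket and key names are placeholders, not real resources):

```python
import boto3

s3 = boto3.client("s3")

# Upload directly into an archival class by setting StorageClass on the PUT.
# "my-archive-bucket" and the key are hypothetical placeholders.
with open("audit-log.tar.gz", "rb") as body:
    s3.put_object(
        Bucket="my-archive-bucket",
        Key="compliance/2017/audit-log.tar.gz",
        Body=body,
        StorageClass="DEEP_ARCHIVE",  # or "GLACIER" for Flexible Retrieval
    )
```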
Hot Data (Frequent Access)
S3 Standard
📌 Use case: Active data (daily reads/writes).
💰 Cost: Highest storage price; no retrieval fees.
⚡ Speed: Instant access.
Cool Data (Infrequent Access)
S3 Standard-IA
📌 Use case: Backup, disaster recovery (accessed ~1x/month).
💰 Cost: Lower storage price than Standard, but per-GB retrieval fees apply.
⚡ Speed: Instant access.
S3 One Zone-IA
📌 Use case: Non-critical, easily reproducible data (~20% cheaper than Standard-IA).
⚠ Risk: Stored in a single Availability Zone; data is lost if that AZ is destroyed.
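Choosing an IA class is likewise just the StorageClass argument on upload. A sketch, again with placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Monthly backup: needs multi-AZ durability, so Standard-IA fits.
with open("2024-05.tar.gz", "rb") as body:
    s3.put_object(
        Bucket="example-backups",
        Key="monthly/2024-05.tar.gz",
        Body=body,
        StorageClass="STANDARD_IA",
    )

# Reproducible derived data: One Zone-IA trades AZ redundancy for ~20% savings.
with open("batch-01.zip", "rb") as body:
    s3.put_object(
        Bucket="example-derived-data",
        Key="thumbnails/batch-01.zip",
        Body=body,
        StorageClass="ONEZONE_IA",
    )
```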
Cold Data (Rare Access)
S3 Glacier Instant Retrieval
📌 Use case: Archives needing millisecond access (e.g., medical records).
💰 Cost: Cheaper storage than Standard-IA, but higher per-GB retrieval fees.
S3 Glacier Flexible Retrieval
📌 Use case: Long-term backups (retrieval in mins–hours).
💰 Cost: Very low storage price; retrievals are slower but cost less.
S3 Glacier Deep Archive
📌 Use case: Compliance, "write once, read never" data.
💰 Cost: Cheapest storage; standard retrievals complete within 12 hours.
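One operational detail worth showing: objects in Flexible Retrieval and Deep Archive cannot be read with a plain GET; you first issue a restore request and wait for a temporary copy. (Glacier Instant Retrieval skips this step.) A sketch with placeholder names:

```python
import boto3

s3 = boto3.client("s3")

# Request a temporary copy of an archived object for 7 days.
# Tier may be "Expedited", "Standard", or "Bulk" for Flexible Retrieval;
# Deep Archive supports only "Standard" and "Bulk".
s3.restore_object(
    Bucket="my-archive-bucket",
    Key="backups/2020/db.dump",
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)

# Poll until the Restore header reports ongoing-request="false";
# after that, a normal GET works until the temporary copy expires.
head = s3.head_object(Bucket="my-archive-bucket", Key="backups/2020/db.dump")
print(head.get("Restore"))
```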
Auto-Optimizing Tier
S3 Intelligent-Tiering
🤖 Automatically moves objects between access tiers based on observed access patterns.
💡 Best for: Unknown/unpredictable access patterns.
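Intelligent-Tiering is set at upload like any other class; its deeper archive tiers are opt-in per bucket. A sketch (bucket name and configuration ID are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Store the object in Intelligent-Tiering; S3 shuffles it between the
# frequent- and infrequent-access tiers based on actual usage.
with open("clickstream.parquet", "rb") as body:
    s3.put_object(
        Bucket="example-data-lake",
        Key="events/2024/05/clickstream.parquet",
        Body=body,
        StorageClass="INTELLIGENT_TIERING",
    )

# Opt the bucket into the archive tiers: objects untouched for 90/180 days
# move to the Archive / Deep Archive access tiers.
s3.put_bucket_intelligent_tiering_configuration(
    Bucket="example-data-lake",
    Id="archive-cold-data",
    IntelligentTieringConfiguration={
        "Id": "archive-cold-data",
        "Status": "Enabled",
        "Tierings": [
            {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
)
```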
When to Use What?
Daily active data -> S3 Standard
Infrequent, instant access -> Standard-IA / Instant Retrieval
Long-term backups -> Flexible Retrieval
Lowest cost, no rush -> Deep Archive
Auto-cost optimization -> Intelligent-Tiering
Rule of thumb: The less you access data, the cheaper (but slower) storage you should use. 🚀
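The usual way to encode that rule of thumb is an S3 lifecycle policy that tiers objects down as they age. A sketch; the bucket name, prefix, and day counts are illustrative, not recommendations:

```python
import boto3

s3 = boto3.client("s3")

# Tier backups down over time, then expire them after ~7 years.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backups",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-and-expire",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},        # Flexible Retrieval
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
                "Expiration": {"Days": 2555},  # roughly 7 years
            }
        ]
    },
)
```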
5 TB is the maximum size of a single object in S3.
Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 GB; anything larger must be uploaded via multipart upload.
The total volume of data and number of objects you can store in Amazon S3 are unlimited.
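In practice you rarely call the multipart APIs by hand: boto3's high-level upload_file switches to multipart transparently once a file crosses the configured threshold. A sketch with placeholder names:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# upload_file switches to multipart automatically above multipart_threshold,
# so objects beyond the 5 GB single-PUT limit (up to 5 TB) just work.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # start multipart at 100 MB
    multipart_chunksize=100 * 1024 * 1024,  # 100 MB parts
)

s3.upload_file(
    Filename="big-backup.tar",
    Bucket="example-backups",
    Key="backups/big-backup.tar",
    ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},
    Config=config,
)
```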
By enabling block public access settings at the account level, the developer can ensure that the settings apply to all current and future S3 buckets in the account.
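The account-wide setting lives in the S3 Control API rather than the regular S3 client. A minimal sketch, assuming credentials with permission to change account settings:

```python
import boto3

# Look up the current account ID, then block public access account-wide.
account_id = boto3.client("sts").get_caller_identity()["Account"]

s3control = boto3.client("s3control")
s3control.put_public_access_block(
    AccountId=account_id,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```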