Basic information about the Amazon S3 service:
AWS Free Tier availability:
- 5 GB of storage,
- 20,000 GET requests,
- 2,000 PUT requests
Developer Resources:
- Getting Started Guide
- AWS Management Console
- WSDL
- Documentation
- Release Notes
- Sample Code & Libraries
- Developer Tools
- Articles & Tutorials
- Community Forum
Functionality:
- write, read, and delete objects containing from 1 byte to 5 terabytes of data each. The number of objects you can store is unlimited.
- each object is stored in a bucket and retrieved via a unique, developer-assigned key (see the sketch after this list).
- a bucket can be stored in one of several Regions. You can choose a Region to optimize for latency, minimize costs, or address regulatory requirements. Amazon S3 is currently available in the following regions:
- US Standard (automatically routes requests to facilities in Northern Virginia or the Pacific Northwest using network maps)
- US West (Oregon),
- US West (Northern California),
- EU (Ireland),
- Asia Pacific (Singapore),
- Asia Pacific (Tokyo),
- Asia Pacific (Sydney),
- South America (Sao Paulo),
- GovCloud (US)
- objects stored in a Region never leave the Region unless you transfer them out. For example, objects stored in the EU (Ireland) Region never leave the EU.
- authentication mechanisms are provided to ensure that data is kept secure from unauthorized access. Objects can be made private or public, and rights can be granted to specific users.
- options for secure data upload/download and encryption of data at rest are provided for additional data protection.
- uses standards-based REST and SOAP interfaces designed to work with any Internet-development toolkit.
- built to be flexible so that protocol or functional layers can easily be added. The default download protocol is HTTP.
- a BitTorrent™ protocol interface is provided to lower costs for high-scale distribution.
- provides functionality to simplify management of data through its lifetime, including options for segregating data by bucket, monitoring and controlling spend, and automatically archiving data to even lower-cost storage options.
- reliability is backed by the Amazon S3 Service Level Agreement (SLA).
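A minimal sketch of the write/read/delete cycle described above, assuming the boto3 Python SDK; the bucket name, key, and region below are hypothetical placeholders, and the ServerSideEncryption argument illustrates the encryption-at-rest option:

    import boto3

    # Hypothetical bucket and developer-assigned key; the bucket must already exist.
    s3 = boto3.client("s3", region_name="eu-west-1")  # e.g. the EU (Ireland) Region
    bucket = "example-bucket"
    key = "reports/2013/q1.csv"

    # Write an object (1 byte to 5 TB), optionally encrypted at rest.
    s3.put_object(Bucket=bucket, Key=key, Body=b"some,data\n1,2\n",
                  ServerSideEncryption="AES256")

    # Read it back via its unique key, then delete it.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    s3.delete_object(Bucket=bucket, Key=key)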
Three different S3 storage options are available:
- Standard storage
- Reduced Redundancy Storage (RRS)
- Amazon Glacier
Standard S3 storage:
- designed for mission-critical and primary data storage
- redundant storage of data in multiple facilities and on multiple devices within each facility.
- synchronously stores data across multiple facilities before returning SUCCESS
- calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data
- performs regular, systematic data integrity checks and is built to be automatically self-healing.
- further protection via Versioning to preserve, retrieve, and restore every version of every object stored in an Amazon S3 bucket (see the versioning sketch after this list)
- by default, requests will retrieve the most recently written version. Older versions of an object can be retrieved by specifying a version in the request. Storage rates apply for every version stored.
- backed by the Amazon S3 SLA
- designed for 99.999999999% durability and 99.99% availability of objects over a given year.
- designed to sustain the concurrent loss of data in two facilities.
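A short sketch of the versioning behaviour described above, again assuming boto3; the bucket and key are hypothetical placeholders:

    import boto3

    s3 = boto3.client("s3")
    bucket, key = "example-bucket", "notes.txt"  # hypothetical names

    # Enable versioning on the bucket, then write two versions of the same key.
    s3.put_bucket_versioning(Bucket=bucket,
                             VersioningConfiguration={"Status": "Enabled"})
    s3.put_object(Bucket=bucket, Key=key, Body=b"v1")
    s3.put_object(Bucket=bucket, Key=key, Body=b"v2")

    # A plain GET returns the most recently written version ("v2").
    latest = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    # Older versions are retrieved by specifying a version ID explicitly.
    versions = s3.list_object_versions(Bucket=bucket, Prefix=key)["Versions"]
    oldest_id = versions[-1]["VersionId"]  # versions are listed newest first
    old = s3.get_object(Bucket=bucket, Key=key, VersionId=oldest_id)["Body"].read()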
Reduced Redundancy Storage (RRS):
- a storage option within S3 to reduce costs by storing non-critical, reproducible data at lower levels of redundancy than Amazon S3’s standard storage (see the storage-class sketch after this list).
- cost-effective, highly available solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails, transcoded media, or other processed data that can be easily reproduced
- stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive
- does not replicate objects as many times as standard Amazon S3 storage
- backed by the Amazon S3 SLA
- designed to provide 99.99% durability and 99.99% availability of objects over a given year (durability level corresponds to an average annual expected loss of 0.01% of objects).
- designed to sustain the loss of data in a single facility.
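Choosing RRS is a per-object decision made at write time. A sketch, assuming boto3 and a hypothetical bucket and thumbnail file:

    import boto3

    s3 = boto3.client("s3")

    # Store easily reproducible data (here, a thumbnail) with reduced redundancy
    # instead of the default STANDARD class; the rest of the request is unchanged.
    with open("thumb.jpg", "rb") as f:
        s3.put_object(Bucket="example-bucket", Key="thumbnails/thumb.jpg",
                      Body=f, StorageClass="REDUCED_REDUNDANCY")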
Amazon Glacier:
- an extremely low-cost storage service, available as an Amazon S3 storage option for data archival.
- stores data for as little as $0.01 per gigabyte per month, and is optimized for data that is infrequently accessed and for which retrieval times of several hours are suitable. Examples include digital media archives, financial and healthcare records, raw genomic sequence data, long-term database backups, and data that must be retained for regulatory compliance.
- as with the other S3 storage options (Standard or RRS), objects stored in Amazon Glacier through Amazon S3’s APIs or Management Console have an associated user-defined name.
- a real-time list of all of your Amazon S3 object names, including those stored using the Amazon Glacier option, is available through the Amazon S3 LIST API.
- objects stored directly in Amazon Glacier using Amazon Glacier’s own APIs cannot be listed in real time and have a system-generated identifier rather than a user-defined name. Amazon S3 maintains the mapping between your user-defined object name and the Amazon Glacier system-defined identifier.
- to restore Amazon S3 data that was stored in Amazon Glacier via the Amazon S3 APIs or Management Console, you first have to initiate a restore job (see the restore sketch after this list).
- Restore jobs typically complete in 3 to 5 hours. Once the job is complete, you can access your data through an Amazon S3 GET request.
- backed by the Amazon S3 SLA
- designed for 99.999999999% durability and 99.99% availability of objects over a given year.
- designed to sustain the concurrent loss of data in two facilities.
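A sketch of archiving to Glacier through a lifecycle rule and restoring an archived object, assuming boto3; the bucket, prefix, and timings are hypothetical examples:

    import boto3

    s3 = boto3.client("s3")
    bucket = "example-bucket"  # hypothetical

    # Automatically transition objects under "archive/" to the Glacier storage
    # class 30 days after creation.
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={"Rules": [{
            "ID": "archive-old-objects",
            "Filter": {"Prefix": "archive/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]},
    )

    # Archived objects must be restored before they can be read: initiate a
    # restore job, and once it completes a normal GET works again for the
    # requested number of days.
    s3.restore_object(Bucket=bucket, Key="archive/2012-backup.tar.gz",
                      RestoreRequest={"Days": 7})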
Common Use Cases:
- Content Storage and Distribution
- Storage for Data Analysis
- Backup, Archiving and Disaster Recovery
- Static Website Hosting (see the sketch after this list)
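For the static website hosting use case, a bucket can be configured to serve its (publicly readable) objects from a website endpoint. A sketch assuming boto3, with hypothetical bucket and document names:

    import boto3

    s3 = boto3.client("s3")

    # Serve index.html as the default document and error.html for errors.
    s3.put_bucket_website(
        Bucket="example-bucket",
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )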
Resources:
- Amazon S3 SLA (http://aws.amazon.com/s3-sla)
- Product page (http://aws.amazon.com/s3/)