Amazon AWS – CloudWatch – monitoring

Basic information about Amazon CloudWatch Service:

 

AWS Free Tier availability:

  • 10 Metrics,
  • 10 Alarms,
  • 1,000,000 API requests

 


Functionality:

  • Monitoring AWS resources automatically (without installing additional software):
    • Basic Monitoring for Amazon EC2 instances: ten pre-selected metrics at five-minute frequency, free of charge.
    • Detailed Monitoring for Amazon EC2 instances: seven pre-selected metrics at one-minute frequency, for an additional charge.
    • Amazon EBS volumes: eight pre-selected metrics at five-minute frequency, free of charge.
    • Elastic Load Balancers: ten pre-selected metrics at one-minute frequency, free of charge.
    • Amazon RDS DB instances: thirteen pre-selected metrics at one-minute frequency, free of charge.
    • Amazon SQS queues: eight pre-selected metrics at five-minute frequency, free of charge.
    • Amazon SNS topics: four pre-selected metrics at five-minute frequency, free of charge.
    • Amazon ElastiCache nodes: twenty-nine pre-selected metrics at one-minute frequency, free of charge.
    • Amazon DynamoDB tables: seven pre-selected metrics at five-minute frequency, free of charge.
    • AWS Storage Gateways: eleven pre-selected gateway metrics and five pre-selected storage volume metrics at five-minute frequency, free of charge.
    • Amazon Elastic MapReduce job flows: twenty-three pre-selected metrics at five-minute frequency, free of charge.
    • Auto Scaling groups: seven pre-selected metrics at one-minute frequency, optional and charged at standard pricing.
    • Estimated charges on your AWS bill: you can also choose to enable metrics to monitor your AWS charges.
  • Submitting Custom Metrics generated by your own applications (or by AWS resources not mentioned above) and having them monitored by Amazon CloudWatch. You can submit these metrics to Amazon CloudWatch via a simple Put API request (see the sketch after this list).
  • Setting alarms on any of your metrics to receive notifications or take other automated actions when your metric crosses your specified threshold. You can also use alarms to detect and shut down EC2 instances that are unused or underutilized.
  • Viewing graphs and statistics for any of your metrics, and getting a quick overview of all your alarms and monitored AWS resources in one location on the Amazon CloudWatch dashboard.
  • Using Auto Scaling to add or remove Amazon EC2 instances dynamically based on your Amazon CloudWatch metrics.
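
A minimal sketch of the custom-metric and alarm ideas using boto3, the AWS SDK for Python. The namespace, metric name, instance ID, and thresholds below are made-up placeholders, not values from this article:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Submit a custom application metric via the Put API (PutMetricData).
cloudwatch.put_metric_data(
    Namespace="MyApp",                       # hypothetical namespace
    MetricData=[{
        "MetricName": "QueueDepth",          # hypothetical custom metric
        "Value": 42.0,
        "Unit": "Count",
    }],
)

# Alarm on low CPU and stop an underutilized EC2 instance when it fires.
cloudwatch.put_metric_alarm(
    AlarmName="low-cpu-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=12,                    # one hour of five-minute periods
    Threshold=5.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:stop"],  # EC2 stop action
)
```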

 

 

 


Amazon AWS – ElastiCache – caching

Basic information about Amazon ElastiCache Service:

 

AWS Free Tier availability:

  • 750 hours of Micro Cache Node usage

 


Cache engines supported:

  • Memcached – widely adopted memory object caching system.
  • Redis – popular open-source in-memory key-value store that supports data structures such as sorted sets and lists.
    • ElastiCache supports Redis master/slave replication, which can be used to achieve cross-AZ redundancy (see the sketch after this list).
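
Because ElastiCache speaks the native engine protocols, any standard client works. A minimal sketch with the redis-py library; the endpoint below is a made-up node address:

```python
import redis

cache = redis.StrictRedis(
    host="mycluster.abc123.0001.use1.cache.amazonaws.com",  # hypothetical endpoint
    port=6379,
)

cache.set("greeting", "hello", ex=300)  # cache the value for five minutes
print(cache.get("greeting"))            # b'hello'
```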

 

Features:

  • Ease of management via the AWS Management Console.
  • Compatibility with the specific engine protocol.
  • Detailed monitoring statistics for the engine nodes at no extra cost via Amazon CloudWatch.

 

Available Node Types:

  • Micro
    • Micro Cache Node (cache.t1.micro):
      • 213 MB memory,
      • Up to 2 ECU (for short periodic bursts),
      • 64-bit platform,
      • Low I/O Capacity
  • Standard
    • Small Cache Node (cache.m1.small):
      • 1.3 GB memory,
      • 1 ECU (1 virtual core with 1 ECU),
      • 64-bit platform,
      • Moderate I/O Capacity
    • Medium Cache Node (cache.m1.medium):
      • 3.35 GB memory,
      • 2 ECU (1 virtual core with 2 ECUs),
      • 64-bit platform,
      • Moderate I/O Capacity
    • Large Cache Node (cache.m1.large):
      • 7.1 GB memory,
      • 4 ECUs (2 virtual cores with 2 ECUs each),
      • 64-bit platform,
      • High I/O Capacity
    • Extra Large Cache Node (cache.m1.xlarge):
      • 14.6 GB of memory,
      • 8 ECUs (4 virtual cores with 2 ECUs each),
      • 64-bit platform,
      • High I/O Capacity
  • Enhanced
    • Extra Large Cache Node (cache.m3.xlarge):
      • 14.6 GB memory,
      • 13 ECUs (4 virtual cores with 3.25 ECUs each),
      • 64-bit platform,
      • Moderate I/O Capacity
    • Double Extra Large Cache Node (cache.m3.2xlarge):
      • 29.6 GB memory,
      • 26 ECUs (8 virtual cores with 3.25 ECUs each),
      • 64-bit platform,
      • High I/O Capacity
  • High-Memory
    • High-Memory Extra Large Cache Node (cache.m2.xlarge):
      • 16.7 GB memory,
      • 6.5 ECU (2 virtual cores with 3.25 ECUs each),
      • 64-bit platform,
      • High I/O Capacity
    • High-Memory Double Extra Large Cache Node (cache.m2.2xlarge):
      • 33.8 GB memory,
      • 13 ECUs (4 virtual cores with 3.25 ECUs each),
      • 64-bit platform,
      • High I/O Capacity
    • High-Memory Quadruple Extra Large Cache Node (cache.m2.4xlarge):
      • 68 GB memory,
      • 26 ECUs (8 virtual cores with 3.25 ECUs each),
      • 64-bit platform,
      • High I/O Capacity
  • High-CPU
    • High-CPU Extra Large Cache Node (cache.c1.xlarge):
      • 6.6 GB memory,
      • 20 ECUs (8 virtual cores with 2.5 EC2 Compute Units each),
      • 64-bit platform,
      • High I/O Capacity

 

Note: Each Cache Node Type above lists the memory available to Memcached or Redis after taking System Software overhead into account. One ECU provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. Enhanced cache nodes are only available with Memcached.

 

 

 


Amazon AWS – ELB – Elastic Load Balancing

Basic information about Amazon ELB Service:

 

AWS Free Tier availability:

  • 750 hours of Elastic Load Balancing per month for one year
  • 15 GB of data processing

 


Features:

  • distribution of incoming traffic across EC2 instances in a single Availability Zone or multiple Availability Zones.
  • automatic request handling capacity scaling in response to incoming application traffic.
  • when used in a Virtual Private Cloud (VPC), you can create and manage security groups
  • when used in a VPC, you can create a load balancer without public IP addresses to serve as an internal (non-internet-facing) load balancer.
  • can detect the health of EC2 instances. When it detects unhealthy load-balanced EC2 instances, it no longer routes traffic to those instances and spreads the load across the remaining healthy ones.
  • Amazon Route 53 can be configured to perform DNS failover for your load balancer endpoints. If the load balancer or the application instances registered with ELB become unavailable, Route 53 will direct traffic to another ELB or destination.
  • supports the ability to stick user sessions to specific EC2 instances (see the sketch after this list).
  • supports SSL termination at the Load Balancer, including offloading SSL decryption from application instances, centralized management of SSL certificates, and encryption to back-end instances with optional public key authentication.
  • flexible cipher support allows you to control the ciphers and protocols that ELB accepts during SSL negotiation for client connections.
  • supports both Internet Protocol versions 4 and 6 (IPv4 and IPv6).
  • ELB metrics such as request count and request latency are reported by Amazon CloudWatch.
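
A minimal sketch of creating a load balancer, configuring a health check, registering an instance, and enabling sticky sessions, using boto3's Classic ELB client. Names, zones, the health-check path, and the instance ID are placeholders:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Distribute incoming traffic across two Availability Zones.
elb.create_load_balancer(
    LoadBalancerName="my-web-elb",
    Listeners=[{
        "Protocol": "HTTP",
        "LoadBalancerPort": 80,
        "InstanceProtocol": "HTTP",
        "InstancePort": 80,
    }],
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# Health check: unhealthy instances stop receiving traffic.
elb.configure_health_check(
    LoadBalancerName="my-web-elb",
    HealthCheck={
        "Target": "HTTP:80/health",   # hypothetical health-check path
        "Interval": 30,
        "Timeout": 5,
        "UnhealthyThreshold": 2,
        "HealthyThreshold": 2,
    },
)

# Register a back-end instance.
elb.register_instances_with_load_balancer(
    LoadBalancerName="my-web-elb",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],
)

# Stick sessions to an instance via a load-balancer-generated cookie,
# then apply the policy to the port-80 listener.
elb.create_lb_cookie_stickiness_policy(
    LoadBalancerName="my-web-elb",
    PolicyName="sticky-1h",
    CookieExpirationPeriod=3600,      # one-hour session stickiness
)
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="my-web-elb",
    LoadBalancerPort=80,
    PolicyNames=["sticky-1h"],
)
```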

 

 

 


Amazon AWS – SimpleDB – simple NoSQL

Basic information about Amazon SimpleDB Service:

 

AWS Free Tier availability:

  • 25 SimpleDB Machine Hours
  • 1 GB of Storage

 


Functionality:

  • data sets organized into domains (vs. tables in relational DBs)
  • Domains are collections of items that are described by attribute-value pairs
  • automatically creates an index for every field in a domain
  • no need to pre-define a schema
  • scale-out by creating new domains on different instances
  • stores multiple geographically distributed copies of each domain to enable high availability and data durability.
  • a successful write (using PutAttributes, BatchPutAttributes, DeleteAttributes, BatchDeleteAttributes, CreateDomain or DeleteDomain) means that all copies of the domain will durably persist
  • by default, GetAttributes and Select perform an eventually consistent read (details below).
    • a consistent read can incur higher latency and lower read throughput; therefore, it is best used only when an application scenario mandates that a read operation absolutely must see all writes that received a successful response prior to that read. For all other scenarios, the default eventually consistent read yields the best performance.
  • allows specifying consistency settings for each individual read request, so the same application could have disparate parts following different consistency settings.
  • currently enables domains to grow up to 10 GB each
  • initial allocation of domains is limited to 250

 

API Summary:

  • CreateDomain — Create a domain that contains your dataset.
  • DeleteDomain — Delete a domain.
  • ListDomains — List all domains.
  • DomainMetadata — Retrieve information about creation time for the domain, storage information both as counts of item names and attributes, as well as total size in bytes.
  • PutAttributes — Add or update an item and its attributes, or add attribute-value pairs to items that exist already. Items are automatically indexed as they are received.
  • BatchPutAttributes — For greater overall throughput of bulk writes, perform up to 25 PutAttribute operations in a single call.
  • DeleteAttributes — Delete an item, an attribute, or an attribute value.
  • BatchDeleteAttributes — For greater overall throughput of bulk deletes, perform up to 25 DeleteAttributes operations in a single call.
  • GetAttributes — Retrieve an item and all or a subset of its attributes and values.
  • Select — Query the data set in the familiar, “select target from domain_name where query_expression” syntax. Supported value tests are: =, !=, <, >, <=, >=, like, not like, between, is null, is not null, and every(). Example: select * from mydomain where every(keyword) = ‘Book’. Order results using the SORT operator, and count items that meet the condition(s) specified by the predicate(s) in a query using the Count operator.
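
A minimal sketch of a few of these calls using the classic boto (v2) library, which includes a SimpleDB binding; the domain, item, and attribute names are placeholders:

```python
import boto

sdb = boto.connect_sdb()  # reads AWS credentials from the environment

# CreateDomain, then PutAttributes; items are indexed as they arrive.
domain = sdb.create_domain("books")
domain.put_attributes("item1", {"title": "Dune", "keyword": "Book"})

# Select with the familiar SQL-like syntax.
for item in domain.select("select * from `books` where every(keyword) = 'Book'"):
    print(item.name, dict(item))
```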

 

Consistency Options:

  • Eventually Consistent Reads (Default) — the eventual consistency option maximizes read performance (in terms of low latency and high throughput). However, an eventually consistent read (using Select or GetAttributes) might not reflect the results of a recently completed write (using PutAttributes, BatchPutAttributes, DeleteAttributes, BatchDeleteAttributes). Consistency across all copies of data is usually reached within a second; repeating a read after a short time should return the updated data.
  • Consistent Reads — in addition to eventual consistency, SimpleDB also gives flexibility to request a consistent read if your application, or an element of your application, requires it. A consistent read (using Select or GetAttributes with ConsistentRead=true) returns a result that reflects all writes that received a successful response prior to the read.
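
Continuing the hypothetical "books" domain from the sketch above, consistency is chosen per request in boto (v2) via the consistent_read flag:

```python
import boto

sdb = boto.connect_sdb()
domain = sdb.get_domain("books")  # placeholder domain from the earlier sketch

# Default: eventually consistent read (lowest latency, highest throughput).
item = domain.get_attributes("item1")

# Opt in when the read must reflect all prior successful writes.
item = domain.get_attributes("item1", consistent_read=True)
rows = domain.select("select * from `books`", consistent_read=True)
```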

 

Transactions:

  • Conditional Puts/Deletes — enable you to insert, replace, or delete values for one or more attributes of an item if the existing value of an attribute matches the value you specify. If the value does not match or is not present, the update is rejected. Conditional Puts/Deletes are useful for preventing lost updates when different sources write concurrently to the same item.
    • Conditional puts and deletes are exposed via the PutAttributes and DeleteAttributes APIs by specifying an optional condition with an expected value.
    • For example, if the application is reserving seats or selling tickets to an event, you might allow a purchase (i.e., write update) only if the specified seat was still available (the optional condition). These semantics can also be used to implement functionality such as counters, inserting an item only if it does not already exist, and optimistic concurrency control (OCC). An application can implement OCC by maintaining a version number (or a timestamp) attribute as part of an item and by performing a conditional put/delete based on the value of this version number (see the sketch below).
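
A sketch of optimistic concurrency control via a conditional put in boto (v2); the "tickets" domain, "seat-12A" item, and "version" attribute are hypothetical:

```python
import boto
from boto.exception import SDBResponseError

domain = boto.connect_sdb().get_domain("tickets")  # placeholder domain

# Read the current version, consistently. This sketch assumes the item
# already exists with a "version" attribute; to require that an attribute
# is absent instead, boto accepts expected_value=["version", False].
item = domain.get_attributes("seat-12A", consistent_read=True)
current = int(item.get("version", "0"))

try:
    # The write succeeds only if "version" still holds the value we read.
    domain.put_attributes(
        "seat-12A",
        {"status": "sold", "version": str(current + 1)},
        replace=True,
        expected_value=["version", str(current)],
    )
except SDBResponseError:
    # Another writer got there first; re-read and retry.
    pass
```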

 

Limits:

  • Domain size: 10 GB per domain, 1 billion attributes per domain
  • Domain name: 3-255 characters (a-z, A-Z, 0-9, ‘_’, ‘-‘, and ‘.’)
  • Domains per account: 250
  • Attribute name-value pairs per item: 256
  • Attribute name length: 1024 bytes
  • Attribute value length: 1024 bytes
  • Item name length: 1024 bytes
  • Attribute name, attribute value, and item name allowed characters: All UTF-8 characters that are valid in XML documents. Control characters and any sequences that are not valid in XML are returned Base64-encoded. For more information, see Working with XML-Restricted Characters.
  • Attributes per PutAttributes operation: 256
  • Attributes requested per Select operation: 256
  • Items per BatchDeleteAttributes operation: 25
  • Items per BatchPutAttributes operation: 25
  • Maximum items in Select response: 2500
  • Maximum query execution time: 5 seconds
  • Maximum number of unique attributes per Select expression: 20
  • Maximum number of comparisons per Select expression: 20
  • Maximum response size for Select: 1MB

 

 

 


Amazon AWS – RDS – Relational Database Service

Basic information about Amazon RDS Service:

 

AWS Free Tier availability:

  • 750 hours of Micro DB Instance usage each month for one year,
  • 20 GB of Storage,
  • 20 GB for Backups

 


DB Engines Supported:

  • MySQL
  • Oracle
  • Microsoft SQL Server

Features:

  • Pre-configured Parameters – RDS DB Instances are pre-configured with a set of parameters and settings appropriate for the DB Instance class selected. Additional control is available via DB Parameter Groups.
  • Monitoring and Metrics – RDS provides CloudWatch metrics (e.g., I/O activity, compute/memory/storage capacity utilization, and connections) for your DB Instance deployments at no additional charge.
  • Automatic Software Patching – you can exert optional control over when and if your DB Instance is patched via DB Engine Version Management.
  • Automated Backups – turned on by default, enabling point-in-time recovery for a DB Instance. Backups capture the database and transaction logs up to the last five minutes. The automatic backup retention period can be configured for up to thirty-five days.
  • DB Snapshots – user-initiated backups of your DB Instance, stored by RDS until you explicitly delete them. You can create a new DB Instance from a DB Snapshot.
  • DB Event Notifications – RDS provides SNS notifications via email or SMS for your DB Instance deployments. You can use the RDS APIs to subscribe to over 40 different DB events associated with your RDS deployments.
  • Multi-Availability Zone (Multi-AZ) Deployments – they provide enhanced availability and durability for DB Instances. When you provision a Multi-AZ DB Instance, RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure (for example, instance crash, storage failure, or network disruption), RDS performs an automatic failover to the standby so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
  • Provisioned IOPS – the following applies to the MySQL and Oracle database engines:
    • You can provision up to 3 TB of storage and 30,000 IOPS per database instance. For a workload with 50% writes and 50% reads running on an m2.4xlarge instance, you can realize up to 25,000 IOPS for Oracle. For a similar workload running on cr1.8xlarge, you can realize up to 20,000 IOPS for MySQL. However, by provisioning up to 30,000 IOPS, you may be able to achieve lower latency and higher throughput. Your actual realized IOPS may vary from the amount you provisioned based on your database workload, instance type, and database engine choice.
    • You can convert from standard storage to Provisioned IOPS storage and get consistent throughput and low I/O latencies, at the cost of a short availability impact during the conversion. You can independently scale IOPS (in increments of 1,000) and storage on the fly with zero downtime. The ratio of IOPS provisioned to storage requested (in GB) should be between 3 and 10. For example, a database instance with 1,000 GB of storage can be provisioned with 3,000 to 10,000 IOPS. You can scale the IOPS up or down depending on factors such as seasonal variability of traffic to your applications.
    • If you are using SQL Server, the maximum storage you can provision is 1 TB and the maximum IOPS is 10,000. The ratio of IOPS to storage (in GB) should be 10, and scaling storage or IOPS of a running DB Instance is not currently supported.
  • Push-Button Scaling – using the RDS APIs or the Management Console, you can scale the compute and memory resources powering your deployment up or down. Scale compute operations typically complete within a handful of minutes. For the MySQL and Oracle database engines, as your storage requirements grow, you can also provision additional storage on the fly with zero downtime. If you are using RDS Provisioned IOPS with the MySQL and Oracle database engines, you can also scale the throughput of your DB Instance by specifying the IOPS rate from 1,000 IOPS to 30,000 IOPS in 1,000 IOPS increments, and storage from 100 GB to 3 TB.
  • Automatic Host Replacement – RDS will automatically replace the compute instance powering your deployment in the event of a hardware failure.
  • Replication – RDS provides two distinct but complementary replication features: Multi-AZ deployments and Read Replicas that can be used in conjunction to gain enhanced database availability, protect your latest database updates against unplanned outages, and scale beyond the capacity constraints of a single DB Instance for read-heavy database workloads. Multi-AZ deployments are available for the MySQL and Oracle database engines. Read Replicas are currently supported for the MySQL database engine.
  • Isolation and Security – using VPC, you can isolate your DB Instances in your own virtual network and connect to your existing IT infrastructure using an industry-standard encrypted IPsec VPN. VPC functionality is supported by all RDS DB Engines. In addition, using RDS, you can configure firewall settings and control network access to your DB Instances.
  • Resource-Level Permissions – RDS provides the ability to control the actions that your AWS IAM users and groups can take on specific Amazon RDS resources (e.g. DB Instances, DB Snapshots, DB Parameter Groups, DB Event Subscriptions, DB Options Groups). In addition, you can tag your RDS resources and control the actions that your IAM users and groups can take on groups of resources that share the same tag (and tag value). For example, developers can modify “Development” DB Instances, but only Database Administrators can modify and delete “Production” DB Instances. (A provisioning sketch follows this list.)
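
A minimal provisioning sketch with boto3. The identifier, credentials, and instance class are placeholders; the storage and IOPS values are chosen so the IOPS-to-storage ratio falls in the allowed 3-10 range:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="mydb",
    Engine="mysql",
    DBInstanceClass="db.m1.large",   # placeholder instance class
    MasterUsername="admin",
    MasterUserPassword="change-me",  # placeholder credential
    AllocatedStorage=100,            # GB
    Iops=1000,                       # Provisioned IOPS (10:1 ratio to storage)
    MultiAZ=True,                    # synchronous standby in another AZ
    BackupRetentionPeriod=7,         # automated backups retained for 7 days
)
```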

 

DB Instance Classes:

  • Micro DB Instance:
    • 630 MB memory,
    • Up to 2 ECU (for short periodic bursts),
    • 64-bit platform,
    • Low I/O Capacity,
    • Provisioned IOPS Optimized: No
  • Small DB Instance:
    • 1.7 GB memory,
    • 1 ECU (1 virtual core with 1 ECU),
    • 64-bit platform,
    • Moderate I/O Capacity,
    • Provisioned IOPS Optimized: No
  • Medium DB Instance:
    • 3.75 GB memory,
    • 2 ECU (1 virtual core with 2 ECU),
    • 64-bit platform,
    • Moderate I/O Capacity,
    • Provisioned IOPS Optimized: No
  • Large DB Instance:
    • 7.5 GB memory,
    • 4 ECUs (2 virtual cores with 2 ECUs each),
    • 64-bit platform,
    • High I/O Capacity,
    • Provisioned IOPS Optimized: 500Mbps
  • Extra Large DB Instance:
    • 15 GB of memory,
    • 8 ECUs (4 virtual cores with 2 ECUs each),
    • 64-bit platform,
    • High I/O Capacity,
    • Provisioned IOPS Optimized: 1000Mbps
  • High-Memory Extra Large DB Instance
    • 17.1 GB memory,
    • 6.5 ECU (2 virtual cores with 3.25 ECUs each),
    • 64-bit platform,
    • High I/O Capacity,
    • Provisioned IOPS Optimized: No
  • High-Memory Double Extra Large DB Instance:
    • 34 GB of memory,
    • 13 ECUs (4 virtual cores with 3.25 ECUs each),
    • 64-bit platform,
    • High I/O Capacity,
    • Provisioned IOPS Optimized: 500Mbps
  • High-Memory Quadruple Extra Large DB Instance:
    • 68 GB of memory,
    • 26 ECUs (8 virtual cores with 3.25 ECUs each),
    • 64-bit platform,
    • High I/O Capacity,
    • Provisioned IOPS Optimized: 1000Mbps
  • High-Memory Cluster Eight Extra Large DB Instance (currently supported for MySQL 5.6 only):
    • 244 GB of memory,
    • 88 ECUs,
    • 64-bit platform,
    • High I/O Capacity

 

One ECU provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.

 

 

 


Amazon AWS – SNS – Simple Notification Service

Basic information about Amazon SNS Service:

 

AWS Free Tier availability:

  • 1 million Mobile Push Notifications
  • 100 SMS
  • 1,000 email/email-JSON
  • 100,000 HTTP/HTTPS deliveries
  • unlimited deliveries to SQS Queues

 


Features:

  • SNS lets you push messages to mobile devices or distributed services, via API or an easy-to-use management console.
  • you can publish a message once, and deliver it one or more times.
  • you can choose to direct unique messages to individual Apple, Google or Amazon devices, or broadcast deliveries to many mobile devices with a single publish request.
  • SNS allows you to group multiple recipients using topics. A topic is an “access point” that lets recipients dynamically subscribe for identical copies of the same notification. One topic can support deliveries to multiple endpoint types — for example, you can group together iOS, Android and SMS recipients. When you publish once to a topic, SNS delivers appropriately formatted copies of your message to each subscriber (see the sketch after this list).
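
A minimal publish/subscribe sketch with boto3; the topic name and email address are placeholders:

```python
import boto3

sns = boto3.client("sns", region_name="us-east-1")

# A topic is the access point recipients subscribe to.
topic_arn = sns.create_topic(Name="order-events")["TopicArn"]

# One topic can mix endpoint types; here, an email subscriber.
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

# Publish once; SNS fans the message out to every subscriber.
sns.publish(TopicArn=topic_arn, Subject="Order shipped", Message="Order 42 shipped.")
```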

 

 

 


Amazon AWS – EBS – Elastic Block Store

Basic information about Amazon EBS Service:

 

AWS Free Tier availability:

  • 30 GB of Storage,
  • 2 million I/Os,
  • 1 GB of snapshot storage

 

Features:

  • storage volumes from 1 GB to 1 TB that can be mounted as devices by Amazon EC2 instances.
  • multiple volumes can be mounted to the same instance.
  • you can provision a specific level of I/O performance if desired, by choosing a Provisioned IOPS volume
  • storage volumes behave like raw, unformatted block devices, with user-supplied device names and a block device interface.
  • you can create a file system on top of Amazon EBS volumes, or use them in any other way you would use a block device (like a hard drive).
  • they are placed in a specific Availability Zone, and can then be attached to instances also in that same Availability Zone.
  • each storage volume is automatically replicated within the same Availability Zone.
  • provides the ability to create point-in-time snapshots of volumes, which are persisted to Amazon S3. These snapshots can be used as the starting point for new Amazon EBS volumes, and protect data for long-term durability. The same snapshot can be used to instantiate as many volumes as you wish. These snapshots can be copied across AWS regions, making it easier to leverage multiple AWS regions for geographical expansion, data center migration and disaster recovery.
  • AWS also enables you to create new volumes from AWS hosted public data sets.
  • CloudWatch exposes performance metrics for EBS volumes, giving insight into bandwidth, throughput, latency, and queue depth
  • metrics can also be accessed programmatically through the CloudWatch API (see the sketch after this list).
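
A minimal sketch with boto3 covering a Provisioned IOPS volume, attachment, and a point-in-time snapshot. The zone, instance ID, and device name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provisioned IOPS volume: the IOPS rate is fixed at creation time.
vol = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,            # GB
    VolumeType="io1",
    Iops=1000,
)

# Volumes attach only to instances in the same Availability Zone.
ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)

# Point-in-time snapshot, persisted to Amazon S3.
ec2.create_snapshot(VolumeId=vol["VolumeId"], Description="nightly backup")
```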

 

Volume Performance:

  • EBS provides two volume types: Standard volumes and Provisioned IOPS volumes. They differ in performance characteristics and price, allowing you to tailor storage performance and cost to the needs of your applications. You can attach and stripe across multiple volumes of either type to increase the I/O performance available to your Amazon EC2 applications.
  • Standard volumes offer storage for applications with moderate or bursty I/O requirements. Standard volumes deliver approximately 100 IOPS on average with a best effort ability to burst to hundreds of IOPS. Standard volumes are also well suited for use as boot volumes, where the burst capability provides fast instance start-up times.
  • Provisioned IOPS volumes are designed to deliver predictable, high performance for I/O intensive workloads such as databases. With Provisioned IOPS, you specify an IOPS rate when creating a volume, and then Amazon EBS provisions that rate for the lifetime of the volume. Amazon EBS currently supports up to 4000 IOPS per Provisioned IOPS volume. You can stripe multiple volumes together to deliver thousands of IOPS per Amazon EC2 instance to your application.
  • To enable EC2 instances to fully utilize the IOPS provisioned on an EBS volume, you can launch selected instance types as “EBS-Optimized” instances. EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 500 Mbps and 1000 Mbps depending on the instance type used. When attached to EBS-Optimized instances, Provisioned IOPS volumes are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time.

 

Volume Durability:

  • EBS volume data is replicated across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component.
  • The durability of your volume depends both on the size of your volume and on the percentage of the data that has changed since your last snapshot. As an example, volumes that operate with 20 GB or less of modified data since their most recent Amazon EBS snapshot can expect an annual failure rate (AFR) of between 0.1% and 0.5%, where failure refers to a complete loss of the volume. This compares with commodity hard disks, which typically fail with an AFR of around 4%, making EBS volumes about 10 times more reliable than typical commodity disk drives.
  • Because Amazon EBS servers are replicated within a single Availability Zone, mirroring data across multiple Amazon EBS volumes in the same Availability Zone will not significantly improve volume durability. However, for those interested in even more durability, EBS provides the ability to create point-in-time consistent snapshots of your volumes that are then stored in Amazon S3, and automatically replicated across multiple Availability Zones. So, taking frequent snapshots of your volume is a convenient and cost effective way to increase the long term durability of your data. In the unlikely event that your Amazon EBS volume does fail, all snapshots of that volume will remain intact, and will allow you to recreate your volume from the last snapshot point

 

Snapshots:

  • EBS provides the ability to back up point-in-time snapshots of your data to Amazon S3 for durable recovery.
  • snapshots are incremental backups, meaning that only the blocks on the device that have changed since your last snapshot will be saved.
  • If you have a device with 100 GB of data, but only 5 GB of data has changed since your last snapshot, only the 5 additional GB of snapshot data will be stored back to Amazon S3.
  • Even though the snapshots are saved incrementally, when you delete a snapshot, only the data not needed for any other snapshot is removed.
  • regardless of which prior snapshots have been deleted, all active snapshots will contain all the information needed to restore the volume.
  • the time to restore the volume is the same for all snapshots, offering the restore time of full backups with the space savings of incremental backups.
  • Snapshots can also be used to instantiate multiple new volumes, expand the size of a volume or move volumes across Availability Zones.
  • When a new volume is created, there is the option to create it based on an existing Amazon S3 snapshot. In that scenario, the new volume begins as an exact replica of the original volume. By optionally specifying a different volume size or a different Availability Zone, this functionality can be used as a way to increase the size of an existing volume or to create duplicate volumes in new Availability Zones. If you choose to use snapshots to resize your volume, you need to be sure your file system or application supports resizing a device.
  • New volumes created from existing Amazon S3 snapshots load lazily in the background. This means that once a volume is created from a snapshot, there is no need to wait for all of the data to transfer from Amazon S3 to your Amazon EBS volume before your attached instance can start accessing the volume and all of its data. If your instance accesses a piece of data which hasn’t yet been loaded, the volume will immediately download the requested data from Amazon S3, and then will continue loading the rest of the volume’s data in the background.
  • shared snapshots allow you to share your snapshots with other users. Users that you have authorized can quickly use your shared snapshots as the basis for creating their own Amazon EBS volumes.
  • If you choose, you can also make your data available publicly to all AWS users. Users to whom you have granted access can create their own EBS volumes based on your snapshot; your original snapshot will remain intact.
  • Amazon EBS also provides the ability to copy snapshots across AWS regions, making it easier to leverage multiple AWS regions for geographical expansion, data center migration and disaster recovery.
  • Customers can copy any accessible Snapshots that are in the “available” status. This includes Snapshots that they created, Snapshots that were shared with them, and also Snapshots from the AWS Marketplace, VM Import/Export, and Storage Gateway.
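
A sketch of two of these operations with boto3: creating a larger volume from a snapshot, and copying a snapshot to another region. The snapshot ID is a placeholder, and copy_snapshot is called against the destination region:

```python
import boto3

# Grow a volume by recreating it from a snapshot with a larger size
# (the file system must still be expanded to use the extra space).
ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.create_volume(
    AvailabilityZone="us-east-1b",            # may differ from the original AZ
    SnapshotId="snap-0123456789abcdef0",      # placeholder snapshot
    Size=200,                                 # larger than the original volume
)

# Copy the snapshot to another region for DR or geographic expansion.
ec2_west = boto3.client("ec2", region_name="us-west-2")
ec2_west.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0123456789abcdef0",
    Description="DR copy of production volume",
)
```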

 

 

 
