
Amazon AWS – ELB – Elastic Load Balancing

Basic information about Amazon ELB Service:


AWS Free Tier availability:

  • 750 hours of Elastic Load Balancing per month for one year
  • 15GB data processing


Main Features:

  • distribution of incoming traffic across EC2 instances in a single Availability Zone or multiple Availability Zones.
  • automatic request handling capacity scaling in response to incoming application traffic.
  • when used in a Virtual Private Cloud (VPC), you can create and manage security groups
  • when used in a VPC, you can create a load balancer without public IP addresses to serve as an internal (non-internet-facing) load balancer.
  • can detect the health of EC2 instances. When it detects unhealthy load-balanced EC2 instances, it stops routing traffic to them and spreads the load across the remaining healthy instances.
  • Amazon Route 53 can be configured to perform DNS failover for your load balancer endpoints. If the load balancer or the application instances registered with ELB become unavailable, Route 53 will direct traffic to another ELB or destination.
  • supports the ability to stick user sessions to specific EC2 instances.
  • supports SSL termination at the Load Balancer, including offloading SSL decryption from application instances, centralized management of SSL certificates, and encryption to back-end instances with optional public key authentication.
  • flexible cipher support lets you control the ciphers and protocols that ELB accepts in the SSL negotiation for client connections.
  • supports both Internet Protocol versions 4 and 6 (IPv4 and IPv6).
  • ELB metrics such as request count and request latency are reported by Amazon CloudWatch.
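The health-check routing behavior above can be sketched as a toy round-robin router. This is purely illustrative — the class name and instance IDs below are made up, and the real service is configured through the AWS API rather than implemented in application code:

```python
class ElbSketch:
    """Toy model of ELB health-check routing: requests are spread
    round-robin across registered instances, skipping any instance
    that a health check has marked unhealthy."""

    def __init__(self, instance_ids):
        self.healthy = {i: True for i in instance_ids}
        self._cursor = 0

    def set_health(self, instance_id, is_healthy):
        self.healthy[instance_id] = is_healthy

    def route_request(self):
        targets = sorted(i for i, ok in self.healthy.items() if ok)
        if not targets:
            raise RuntimeError("no healthy instances registered")
        target = targets[self._cursor % len(targets)]
        self._cursor += 1
        return target

lb = ElbSketch(["i-aaa", "i-bbb", "i-ccc"])
lb.set_health("i-bbb", False)              # health check failed
hits = [lb.route_request() for _ in range(4)]
print(hits)  # ['i-aaa', 'i-ccc', 'i-aaa', 'i-ccc']
```

Once `i-bbb` is marked healthy again, it rejoins the rotation automatically — mirroring how ELB resumes routing to recovered instances.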






Amazon AWS – SimpleDB – simple NoSQL

Basic information about Amazon SimpleDB Service:


AWS Free Tier availability:

  • 25 SimpleDB Machine Hours
  • 1GB of Storage


Main Features:

  • datasets are organized into domains (vs. tables in relational DBs)
  • Domains are collections of items that are described by attribute-value pairs
  • automatically creates an index for every field in a domain
  • no need to pre-define a schema
  • scale out by partitioning your dataset across additional domains
  • stores multiple geographically distributed copies of each domain to enable high availability and data durability.
  • a successful write (using PutAttributes, BatchPutAttributes, DeleteAttributes, BatchDeleteAttributes, CreateDomain or DeleteDomain) means that all copies of the domain will durably persist
  • by default, GetAttributes and Select perform an eventually consistent read (details below).
    • a consistent read can incur higher latency and lower read throughput; it is therefore best used only when an application scenario mandates that a read operation absolutely must see all writes that received a successful response prior to that read. For all other scenarios, the default eventually consistent read yields the best performance.
  • allows specifying consistency settings for each individual read request, so the same application could have disparate parts following different consistency settings.
  • currently enables domains to grow up to 10 GB each
  • initial allocation of domains is limited to 250
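Because each domain is capped at 10 GB, a common pattern is to shard a dataset across several domains. A minimal sketch — the domain names `mydomain_0` through `mydomain_3` and the domain count are made up for illustration:

```python
import hashlib

N_DOMAINS = 4  # hypothetical number of domains for this sketch

def domain_for(item_name):
    """Pick the domain an item lives in by hashing its name, so the
    dataset can grow past the 10 GB per-domain limit. MD5 is used only
    as a stable, well-spread hash, not for security."""
    digest = hashlib.md5(item_name.encode("utf-8")).hexdigest()
    return "mydomain_%d" % (int(digest, 16) % N_DOMAINS)

print(domain_for("user-1001"))
print(domain_for("user-1002"))  # different items may land in different domains
```

The hash keeps placement deterministic, so reads for a given item always go to the same domain; queries spanning the whole dataset must be issued against every domain and merged client-side, since Select operates on one domain at a time.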


API Summary:

  • CreateDomain — Create a domain that contains your dataset.
  • DeleteDomain — Delete a domain.
  • ListDomains — List all domains.
  • DomainMetadata — Retrieve information about a domain: its creation time, the number of item names and attributes, and the total size in bytes.
  • PutAttributes — Add or update an item and its attributes, or add attribute-value pairs to items that exist already. Items are automatically indexed as they are received.
  • BatchPutAttributes — For greater overall throughput of bulk writes, perform up to 25 PutAttributes operations in a single call.
  • DeleteAttributes — Delete an item, an attribute, or an attribute value.
  • BatchDeleteAttributes — For greater overall throughput of bulk deletes, perform up to 25 DeleteAttributes operations in a single call.
  • GetAttributes — Retrieve an item and all or a subset of its attributes and values.
  • Select — Query the data set using the familiar "select target from domain_name where query_expression" syntax. Supported value tests are: =, !=, <, >, <=, >=, like, not like, between, is null, is not null, and every(). Example: select * from mydomain where every(keyword) = 'Book'. Order results using the SORT operator, and count items that meet the condition(s) specified by the predicate(s) in a query using the Count operator.
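When building Select expressions from user input, it helps to quote names and values consistently: SimpleDB string constants are single-quoted with embedded single quotes doubled, and identifiers can be backquoted. A small helper sketch — the function names here are my own, not part of any SDK:

```python
def quote_value(value):
    # SimpleDB string constants are single-quoted; an embedded
    # single quote is escaped by doubling it
    return "'" + value.replace("'", "''") + "'"

def quote_name(name):
    # identifiers (domain and attribute names) can be backquoted
    return "`" + name.replace("`", "``") + "`"

def select_every(domain, attribute, value):
    """Build a Select expression using the every() operator, which
    requires the condition to hold for every value of a
    multi-valued attribute."""
    return "select * from %s where every(%s) = %s" % (
        quote_name(domain), quote_name(attribute), quote_value(value))

expr = select_every("mydomain", "keyword", "Book")
print(expr)
# select * from `mydomain` where every(`keyword`) = 'Book'
```

Escaping values this way avoids both malformed expressions and injection-style surprises when attribute values contain quotes (e.g. "O'Reilly").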


Consistency Options:

  • Eventually Consistent Reads (Default) — the eventual consistency option maximizes read performance (in terms of low latency and high throughput). However, an eventually consistent read (using Select or GetAttributes) might not reflect the results of a recently completed write (using PutAttributes, BatchPutAttributes, DeleteAttributes, BatchDeleteAttributes). Consistency across all copies of data is usually reached within a second; repeating a read after a short time should return the updated data.
  • Consistent Reads — in addition to eventual consistency, SimpleDB also gives flexibility to request a consistent read if your application, or an element of your application, requires it. A consistent read (using Select or GetAttributes with ConsistentRead=true) returns a result that reflects all writes that received a successful response prior to the read.
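The difference between the two read modes can be illustrated with a toy replicated store. This is emphatically not how SimpleDB is implemented — it only models the read semantics, with a primary copy that acknowledges writes and replicas that catch up afterwards:

```python
class ReplicatedDomainSketch:
    """Toy model of the two read modes: a write is acknowledged once it
    reaches the primary copy; replicas catch up asynchronously."""

    def __init__(self, n_copies=3):
        self.copies = [dict() for _ in range(n_copies)]
        self.backlog = []  # (copy_index, item, attrs) not yet replicated

    def put_attributes(self, item, attrs):
        self.copies[0][item] = dict(attrs)
        self.backlog = [(i, item, dict(attrs))
                        for i in range(1, len(self.copies))]

    def propagate(self):
        """Apply pending replication (reaches all copies 'within a second')."""
        for i, item, attrs in self.backlog:
            self.copies[i][item] = attrs
        self.backlog = []

    def get_attributes(self, item, consistent_read=False, copy=0):
        if consistent_read:
            return self.copies[0].get(item)  # reflects all acked writes
        # an eventually consistent read may hit any copy; for the demo
        # the caller picks one instead of choosing at random
        return self.copies[copy].get(item)

d = ReplicatedDomainSketch()
d.put_attributes("item1", {"color": "red"})
print(d.get_attributes("item1", consistent_read=True))  # {'color': 'red'}
print(d.get_attributes("item1", copy=2))                # None: replica is stale
d.propagate()
print(d.get_attributes("item1", copy=2))                # {'color': 'red'}
```

The stale `None` in the middle is exactly the window the documentation describes: a read immediately after a successful write may not see it yet, but repeating the read shortly afterwards does.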



  • Conditional Puts/Deletes — let you insert, replace, or delete values for one or more attributes of an item if the existing value of an attribute matches the value specified. If the value does not match or is not present, the update is rejected. Conditional Puts/Deletes are useful for preventing lost updates when different sources write concurrently to the same item.
    • Conditional puts and deletes are exposed via the PutAttributes and DeleteAttributes APIs by specifying an optional condition with an expected value.
    • For example, if the application is reserving seats or selling tickets to an event, you might allow a purchase (i.e., write update) only if the specified seat was still available (the optional condition). These semantics can also be used to implement functionality such as counters, inserting an item only if it does not already exist, and optimistic concurrency control (OCC). An application can implement OCC by maintaining a version number (or a timestamp) attribute as part of an item and by performing a conditional put/delete based on the value of this version number.
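The seat-reservation example can be sketched with a toy in-memory store. `ItemStoreSketch` and `ConditionalWriteError` are made-up names for illustration; the real service expresses the same condition via the Expected parameter of PutAttributes:

```python
class ConditionalWriteError(Exception):
    """Raised when the expected-value condition does not hold."""

class ItemStoreSketch:
    """Toy store with SimpleDB-style conditional puts."""
    def __init__(self):
        self.items = {}

    def put_attributes(self, item, attrs, expected=None):
        current = self.items.get(item, {})
        if expected is not None:
            name, value = expected
            if current.get(name) != value:
                raise ConditionalWriteError("condition failed on %r" % name)
        current.update(attrs)
        self.items[item] = current

# optimistic concurrency: bump a version attribute on every write
store = ItemStoreSketch()
store.put_attributes("seat-12A", {"status": "available", "version": "1"})
# writer A reserves the seat, conditioned on the version it read
store.put_attributes("seat-12A", {"status": "reserved", "version": "2"},
                     expected=("version", "1"))
# writer B also read version 1; its conditional put now fails
try:
    store.put_attributes("seat-12A", {"status": "reserved", "version": "2"},
                         expected=("version", "1"))
except ConditionalWriteError:
    print("lost the race; re-read and retry")
```

Writer B's failed put is the desired outcome: instead of silently overwriting A's reservation, B learns the item changed underneath it and can re-read before retrying.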



  • Domain size: 10 GB per domain, 1 billion attributes per domain
  • Domain name: 3-255 characters (a-z, A-Z, 0-9, '_', '-', and '.')
  • Domains per account: 250
  • Attribute name-value pairs per item: 256
  • Attribute name length: 1024 bytes
  • Attribute value length: 1024 bytes
  • Item name length: 1024 bytes
  • Attribute name, attribute value, and item name allowed characters: All UTF-8 characters that are valid in XML documents. Control characters and any sequences that are not valid in XML are returned Base64-encoded. For more information, see Working with XML-Restricted Characters.
  • Attributes per PutAttributes operation: 256
  • Attributes requested per Select operation: 256
  • Items per BatchDeleteAttributes operation: 25
  • Items per BatchPutAttributes operation: 25
  • Maximum items in Select response: 2500
  • Maximum query execution time: 5 seconds
  • Maximum number of unique attributes per Select expression: 20
  • Maximum number of comparisons per Select expression: 20
  • Maximum response size for Select: 1MB
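When batching writes from application code, it is handy to check items against these limits before calling the API rather than handling a rejected request. A small sketch — the helper is hypothetical, not part of any SDK, and byte lengths are measured on the UTF-8 encoding:

```python
MAX_PAIRS_PER_ITEM = 256   # attribute name-value pairs per item
MAX_NAME_BYTES = 1024      # item names and attribute names
MAX_VALUE_BYTES = 1024     # attribute values

def validate_item(item_name, attrs):
    """Return a list of limit violations for one item (empty list = OK)."""
    errors = []
    if len(item_name.encode("utf-8")) > MAX_NAME_BYTES:
        errors.append("item name exceeds %d bytes" % MAX_NAME_BYTES)
    if len(attrs) > MAX_PAIRS_PER_ITEM:
        errors.append("more than %d attribute pairs" % MAX_PAIRS_PER_ITEM)
    for name, value in attrs.items():
        if len(name.encode("utf-8")) > MAX_NAME_BYTES:
            errors.append("attribute name %r exceeds %d bytes"
                          % (name, MAX_NAME_BYTES))
        if len(value.encode("utf-8")) > MAX_VALUE_BYTES:
            errors.append("value of %r exceeds %d bytes"
                          % (name, MAX_VALUE_BYTES))
    return errors

print(validate_item("item1", {"color": "red"}))      # []
print(validate_item("item1", {"blob": "x" * 2000}))  # one violation
```

Values longer than 1024 bytes are a common pain point; the usual workaround is to split a large value across several numbered attributes, or store the blob in S3 and keep only a pointer in SimpleDB.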





Amazon AWS – RDS – Relational Database Service

Basic information about Amazon RDS Service:


AWS Free Tier availability:

  • 750 hours of Micro DB Instance usage each month for one year,
  • 20GB of Storage,
  • 20GB for Backups


DB Engines Supported:

  • MySQL
  • Oracle
  • SQL Server


Main Features:

  • Pre-configured Parameters – RDS DB Instances are pre-configured with a set of parameters and settings appropriate for the DB Instance class selected. Additional control is available via DB Parameter Groups.
  • Monitoring and Metrics – RDS provides CloudWatch metrics (e.g. I/O activity, compute/memory/storage capacity utilization, connections, etc.) for your DB Instance deployments at no additional charge.
  • Automatic Software Patching – you can control if and when your DB Instance is patched via DB Engine Version Management.
  • Automated Backups – turned on by default, enabling point-in-time recovery for a DB Instance. RDS backs up your database and transaction logs, letting you restore to any point within the retention period, up to the last five minutes. The backup retention period can be configured up to thirty-five days.
  • DB Snapshots – user-initiated backups of your DB Instance, stored by RDS until you explicitly delete them. You can create a new DB Instance from a DB Snapshot.
  • DB Event Notifications – RDS provides SNS notifications via email or SMS for your DB Instance deployments. You can use the RDS APIs to subscribe to over 40 different DB events associated with your RDS deployments.
  • Multi-Availability Zone (Multi-AZ) Deployments – they provide enhanced availability and durability for DB Instances. When you provision a Multi-AZ DB Instance, RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure (for example, instance crash, storage failure, or network disruption), RDS performs an automatic failover to the standby so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
  • Provisioned IOPS – following applies to MySQL and Oracle database engines:
    • You can provision up to 3TB storage and 30,000 IOPS per database instance. For a workload with 50% writes and 50% reads running on an m2.4xlarge instance, you can realize up to 25,000 IOPS for Oracle. For a similar workload running on cr1.8xlarge you can realize up to 20,000 IOPS for MySQL. However, by provisioning up to 30,000 IOPS, you may be able to achieve lower latency and higher throughput. Your actual realized IOPS may vary from the amount you provisioned based on your database workload, instance type, and database engine choice.
    • You can convert from standard storage to Provisioned IOPS storage and get consistent throughput and low I/O latencies. You will encounter a short availability impact when doing so. You can independently scale IOPS (in increments of 1000) and storage on-the-fly with zero downtime. The ratio of IOPS provisioned to the storage requested (in GB) should be between 3 and 10. For example, for a database instance with 1000 GB of storage, you can provision from 3,000 to 10,000 IOPS. You can scale the IOPS up or down depending on factors such as seasonal variability of traffic to your applications.
    • If you are using SQL Server, the maximum storage you can provision is 1TB and maximum IOPS you can provision is 10,000 IOPS. The ratio of IOPS to storage (in GB) should be 10 and scaling storage or IOPS of a running DB Instance is not currently supported.
  • Push-Button Scaling – using RDS APIs or through the Management Console, you can scale the compute and memory resources powering your deployment up or down. Scale compute operations typically complete within a handful of minutes. For MySQL and Oracle database engines, as your storage requirements grow, you can also provision additional storage on-the-fly with zero downtime. If you are using RDS Provisioned IOPS with the MySQL and Oracle database engines, you can also scale the throughput of your DB Instance by specifying the IOPS rate from 1,000 IOPS to 30,000 IOPS in 1,000 IOPS increments and storage from 100GB to 3TB.
  • Automatic Host Replacement – RDS will automatically replace the compute instance powering your deployment in the event of a hardware failure.
  • Replication – RDS provides two distinct but complementary replication features: Multi-AZ deployments and Read Replicas that can be used in conjunction to gain enhanced database availability, protect your latest database updates against unplanned outages, and scale beyond the capacity constraints of a single DB Instance for read-heavy database workloads. Multi-AZ deployments are available for the MySQL and Oracle database engines. Read Replicas are currently supported for the MySQL database engine.
  • Isolation and Security – Using VPC you can isolate your DB Instances in your own virtual network, and connect to your existing IT infrastructure using industry-standard encrypted IPsec VPN. The VPC functionality is supported by all RDS DB Engines. In addition, using RDS, you can configure firewall settings and control network access to your DB Instances.
  • Resource-Level Permissions – RDS provides the ability to control the actions that your AWS IAM users and groups can take on specific Amazon RDS resources (e.g. DB Instances, DB Snapshots, DB Parameter Groups, DB Event Subscriptions, DB Options Groups). In addition, you can tag your RDS resources, and control the actions that your IAM users and groups can take on groups of resources that have the same tag (and tag value). For example, developers can modify “Development” DB Instances, but only Database Administrators can modify and delete “Production” DB Instances.
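The Provisioned IOPS constraints above are easy to get wrong, so a pre-flight check before calling the RDS API can help. A sketch under the assumption that 1 TB means 1024 GB here (adjust if decimal units are meant); `valid_piops_request` is a made-up helper, not an AWS API:

```python
def valid_piops_request(storage_gb, iops, engine="mysql"):
    """Check a Provisioned IOPS request against the limits quoted above:
    MySQL/Oracle: 100 GB-3 TB, 1,000-30,000 IOPS in 1,000-IOPS steps,
    IOPS:GB ratio between 3 and 10; SQL Server: up to 1 TB / 10,000 IOPS
    with a ratio of exactly 10."""
    if engine in ("mysql", "oracle"):
        if not 100 <= storage_gb <= 3072:            # 100 GB to 3 TB
            return False
        if not 1000 <= iops <= 30000 or iops % 1000 != 0:
            return False
        ratio = iops / float(storage_gb)
        return 3 <= ratio <= 10
    if engine == "sqlserver":
        return (storage_gb <= 1024 and iops <= 10000
                and iops == storage_gb * 10)
    raise ValueError("unknown engine: %r" % engine)

print(valid_piops_request(1000, 3000))              # True  (ratio 3)
print(valid_piops_request(1000, 11000))             # False (ratio 11)
print(valid_piops_request(100, 1000, "sqlserver"))  # True
```

Validating client-side keeps you from submitting a modify request that the API would reject, which matters because scaling operations on a running instance can take a while to settle.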


DB Instance Classes:

  • Micro DB Instance:
    • 630 MB memory,
    • Up to 2 ECU (for short periodic bursts),
    • 64-bit platform,
    • Low I/O Capacity,
    • Provisioned IOPS Optimized: No
  • Small DB Instance:
    • 1.7 GB memory,
    • 1 ECU (1 virtual core with 1 ECU),
    • 64-bit platform,
    • Moderate I/O Capacity,
    • Provisioned IOPS Optimized: No
  • Medium DB Instance:
    • 3.75 GB memory,
    • 2 ECU (1 virtual core with 2 ECU),
    • 64-bit platform,
    • Moderate I/O Capacity,
    • Provisioned IOPS Optimized: No
  • Large DB Instance:
    • 7.5 GB memory,
    • 4 ECUs (2 virtual cores with 2 ECUs each),
    • 64-bit platform,
    • High I/O Capacity,
    • Provisioned IOPS Optimized: 500Mbps
  • Extra Large DB Instance:
    • 15 GB of memory,
    • 8 ECUs (4 virtual cores with 2 ECUs each),
    • 64-bit platform,
    • High I/O Capacity,
    • Provisioned IOPS Optimized: 1000Mbps
  • High-Memory Extra Large DB Instance
    • 17.1 GB memory,
    • 6.5 ECU (2 virtual cores with 3.25 ECUs each),
    • 64-bit platform,
    • High I/O Capacity,
    • Provisioned IOPS Optimized: No
  • High-Memory Double Extra Large DB Instance:
    • 34 GB of memory,
    • 13 ECUs (4 virtual cores with 3.25 ECUs each),
    • 64-bit platform,
    • High I/O Capacity,
    • Provisioned IOPS Optimized: 500Mbps
  • High-Memory Quadruple Extra Large DB Instance:
    • 68 GB of memory,
    • 26 ECUs (8 virtual cores with 3.25 ECUs each),
    • 64-bit platform,
    • High I/O Capacity,
    • Provisioned IOPS Optimized: 1000Mbps
  • High-Memory Cluster Eight Extra Large DB Instance (currently supported for MySQL 5.6 only):
    • 244 GB of memory,
    • 88 ECUs,
    • 64-bit platform,
    • High I/O Capacity


One ECU provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.