A Guide to Performance Challenges With AWS EC2: Part 2
Using Amazon Web Services? Learn how to get your Elastic Compute Cloud instances to perform better than your competitors.
Last week, we kicked off our series on your guide to the top five performance challenges you might come across managing an AWS EC2 environment, and how to best address them. We started off with the ins and outs of running your virtual machines in Amazon’s cloud, and how to navigate your way through a multi-tenancy environment.
Poor Disk I/O Performance
AWS supports several different types of storage options, the core of which include the following:
- EC2 Instance Store
- Elastic Block Store (EBS)
- Simple Storage Service (S3)
EC2 instances can access the physical disks attached to the machine hosting the instance and use them for temporary storage. The important thing to note about this type of storage is that it is ephemeral: it persists only for the lifetime of the EC2 instance and is destroyed when the instance stops. Data that needs to outlive the instance, therefore, should not be kept on an EC2 instance store.
For more common storage needs we’ll opt for either EBS or S3. From the perspective of how they are accessed, the main difference between the two is that EBS can be accessed through disk operations whereas S3 provides a RESTful API to store and retrieve objects. With respect to use cases, S3 is designed to store web-scale amounts of data whereas EBS is more akin to a hard drive. Therefore, when you need to access a block device from an application running on an EC2 instance and you need that data to persist between EC2 restarts, such as storage to support a database, you’ll typically leverage an EBS volume.
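The access-model difference can be made concrete with a short sketch. The bucket name, mount path, and payload below are hypothetical, and the S3 call requires boto3 and valid AWS credentials at call time:

```python
def save_via_s3(bucket: str, key: str, body: bytes) -> None:
    """S3: objects are stored and retrieved through API calls (a RESTful
    service), never through disk I/O. Requires boto3 and AWS credentials."""
    import boto3  # deferred so the file-I/O example below runs without AWS
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=body)

def save_via_ebs(path: str, body: bytes) -> None:
    """EBS: once the volume is attached and mounted, it is an ordinary
    block device; the application uses normal file operations against it."""
    with open(path, "wb") as f:
        f.write(body)
```

From the application's point of view, the EBS path is indistinguishable from a local disk, which is exactly why it suits databases and other block-oriented workloads.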
EBS volumes come in three flavors:
- Magnetic Volumes: magnetic volumes can range in size from 1GB to 1TB and support 40-90 MB/sec throughput. They are good for workloads where data is accessed infrequently.
- General Purpose SSD: general purpose SSDs can range in size from 1GB to 16TB and support 160 MB/sec throughput. They are good for use cases such as system boot volumes, virtual desktops, small to medium sized databases, and development and test environments.
- Provisioned IOPS SSD: provisioned IOPS (Input/Output Operations Per Second) SSDs can range from 4GB to 16TB in size and support 320 MB/sec throughput. They are good for critical business applications that require sustained IOPS performance, or more than 10,000 IOPS (160 MB/sec), such as MongoDB, SQL Server, MySQL, PostgreSQL, and Oracle.
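The three flavors above can be condensed into a rough selection helper. This is a sketch using the article's own figures, not an official AWS sizing rule; the return values standard, gp2, and io1 are the EBS API's type codes for magnetic, general purpose SSD, and provisioned IOPS SSD volumes:

```python
def recommend_ebs_type(size_gb: int, sustained_iops: int,
                       infrequent_access: bool) -> str:
    """Map workload needs to one of the three EBS volume types, using
    the thresholds quoted above (a rule of thumb, not an AWS guarantee)."""
    if sustained_iops > 10_000:                 # needs > 160 MB/sec sustained
        return "io1"        # Provisioned IOPS SSD, 4GB-16TB
    if infrequent_access and size_gb <= 1_024:  # magnetic tops out at 1TB
        return "standard"   # Magnetic, 40-90 MB/sec
    return "gp2"            # General Purpose SSD, 1GB-16TB
```

For example, a 500GB MongoDB volume sustaining 20,000 IOPS maps to io1, while a rarely-read 100GB archive maps to standard.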
Choosing the correct EBS volume type is important, but it is also important to understand what these metrics mean and how they impact your EC2 instance and disk I/O operations.
- First, IOPS are measured in terms of a 16K I/O block size, so if your application writes in 64K blocks then each write consumes 4 IOPS.
- Next, in order to realize the full IOPS capacity, you need to send enough requests to the EBS volume to match its queue length, which is the number of pending operations the volume supports.
- You must use an EBS-optimized EC2 instance type; the supported instance types are listed in the AWS documentation.
- The first time that you access a block from EBS, there is approximately a 50 percent IOPS overhead; the IOPS measurements assume that you have already accessed the block at least once.
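The first constraint, the 16K measurement block, reduces to a one-line calculation, sketched here:

```python
import math

EBS_IO_BLOCK_KB = 16  # EBS measures IOPS against a 16K I/O size

def iops_consumed(io_size_kb: float) -> int:
    """How many IOPS a single I/O of the given size counts as."""
    return math.ceil(io_size_kb / EBS_IO_BLOCK_KB)
```

A 64K write consumes 4 IOPS, while anything at or below 16K consumes a single IOP, so applications issuing many small writes burn through provisioned IOPS far faster than their raw MB/sec throughput would suggest.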
With these constraints in mind you can better understand the CloudWatch EBS metrics, such as VolumeReadOps and VolumeWriteOps, and how IOPS are computed. Review these metrics in light of the EBS volume type that you are using to see whether you are approaching its limit. If you are, you will want to opt for a volume type that supports a higher IOPS rate.
Figure 1 shows the VolumeReadOps and VolumeWriteOps for an EBS volume. From this example we can see that this particular EBS volume is experiencing about 2400 IOPS.
Figure 1. Measuring EBS IOPS
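The same measurement can be scripted. CloudWatch reports VolumeReadOps and VolumeWriteOps as operation counts per sampling period, so dividing the summed counts by the total period length yields IOPS. The volume ID below is hypothetical, and the fetch function requires boto3 and AWS credentials; the arithmetic helper is pure Python:

```python
from datetime import datetime, timedelta

def average_iops(read_counts, write_counts, period_seconds=300):
    """Convert per-period VolumeReadOps/VolumeWriteOps Sums into IOPS."""
    total_ops = sum(read_counts) + sum(write_counts)
    periods = max(len(read_counts), len(write_counts), 1)
    return total_ops / (periods * period_seconds)

def fetch_volume_ops(volume_id, metric_name, hours=1, period=300):
    """Pull Sum datapoints for one EBS volume metric ("VolumeReadOps"
    or "VolumeWriteOps"). Requires boto3 and AWS credentials."""
    import boto3  # deferred so average_iops() stays usable offline
    cw = boto3.client("cloudwatch")
    end = datetime.utcnow()
    resp = cw.get_metric_statistics(
        Namespace="AWS/EBS",
        MetricName=metric_name,
        Dimensions=[{"Name": "VolumeId", "Value": volume_id}],
        StartTime=end - timedelta(hours=hours),
        EndTime=end,
        Period=period,
        Statistics=["Sum"],
    )
    return [dp["Sum"] for dp in sorted(resp["Datapoints"],
                                       key=lambda d: d["Timestamp"])]
```

For example, Sum datapoints of 300,000 reads and 420,000 writes over a single 300-second period work out to 2,400 IOPS, the rate shown in Figure 1.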
Published at DZone with permission of Saba Anees, DZone MVB. See the original article here.