Virtualizing MongoDB on Amazon EC2 and GCE: Part 2
In part one of this series, we introduced David Mytton, CTO of Server Density, and discussed the pros and cons of virtualizing infrastructure in public clouds. In part two, we finish with specific practices for virtualizing MongoDB in two popular public clouds, Amazon EC2 and Google Compute Engine.
To perform well, databases have system requirements that were initially a challenge for cloud providers. Databases generally don't need much CPU; what they want is as much RAM as you can give them, and low IO latency for many small reads and writes. While RAM in the cloud is performant and readily available, low-latency IO is a challenge, due to limitations of the underlying hardware typically used for VMs.
Many teams trying out database systems such as MySQL or MongoDB ran afoul of these issues at first. Thankfully, these challenges have since been overcome as providers improved their offerings to meet them, and today it is no problem for Server Density to handle their massive MongoDB installation completely in the cloud.
General Best Practices
Every MongoDB operator should be aware of the guidance found in the MongoDB Production Notes. These practices are not particular to virtualized instances of MongoDB; however, configuration particulars, such as disabling atime and setting good readahead values, are more important to get right from the get-go, as any sub-optimal setup will be exacerbated by a virtual environment. Additionally, there are considerations that are particular to virtualized environments. For example, when using Linux and virtual block devices, such as EC2's Elastic Block Store and GCE's Persistent Disk, the noop IO scheduler should be used, to allow the underlying hypervisor to handle the scheduling. For more, see the Production Notes section on Virtual Environments and Performance Best Practices for MongoDB.
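To make these settings concrete, here is a minimal sketch of the atime, readahead, and IO scheduler tuning described above. The device name (`/dev/xvdb`), mount point (`/data`), and readahead value are assumptions; check the Production Notes for the value appropriate to your storage engine and workload.

```shell
# Assumed device and mount point; adjust for your deployment.
DEVICE=/dev/xvdb
MOUNTPOINT=/data

# Disable atime updates on the data volume. To make this persistent,
# add noatime to the volume's options in /etc/fstab, e.g.:
#   /dev/xvdb  /data  ext4  defaults,noatime  0 0
mount -o remount,noatime "$MOUNTPOINT"

# Lower readahead to a modest value (units are 512-byte sectors, so
# 32 = 16 KB). The right value is workload-dependent.
blockdev --setra 32 "$DEVICE"

# On virtual block devices (EBS, Persistent Disk), hand IO scheduling
# to the hypervisor by selecting the noop scheduler.
echo noop > /sys/block/$(basename "$DEVICE")/queue/scheduler
```

These commands require root and take effect immediately, but only the fstab entry survives a reboot; the readahead and scheduler settings are typically made persistent via a udev rule or init script.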
EC2 Specific Optimizations
For many enterprises, using MongoDB Management Service (MMS) is a great way to get started with MongoDB on EC2. With it, you can launch instances at the push of a button that are already set up with all the best practices, so you can be assured you're not overlooking any of them. Teams that want a configuration more tailored to their needs can build their own, taking heed of these tips:
- Only use the newer instance types.
- Use instance types optimized for memory (r3) or storage (i2), as MongoDB will generally not be CPU-bound. Only use a CPU optimized instance if your prototyping shows your app is one of those cases where CPU bottlenecking is a concern. (Note that when MongoDB 2.8 is released, deployments using the new WiredTiger storage engine will also see greater CPU utilization.)
- Use EBS optimized instances, and use provisioned IOPS. This is the single most important detail to get right on EC2. Without using provisioned IOPS, you will experience unpredictable latency on IO, a deal-breaker for almost any database.
- In some rare cases, you may be able to tolerate some amount of latency spikes on a small database. In those cases, the General Purpose SSD option is cheaper than the Provisioned IOPS option, but this will require validation.
- Split out the log file, journal file, and data directories onto separate volumes. Each of these volumes can have its size and IOPS provisioned individually to suit its needs.
- To achieve the highest possible throughput, set up volumes as RAID 10. AWS recommends this as a redundancy measure as well, but David points out that the time to rebuild a RAID, and the performance impact in production during that rebuild, make it more advantageous to just spin up a new instance and add it as a new replica set member.
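The split-volume, provisioned-IOPS setup above can be sketched with the AWS CLI. Every value here (availability zone, sizes, IOPS figures, AMI ID, instance type) is a placeholder assumption; size each volume and its IOPS to your own workload.

```shell
# Placeholder availability zone; use your own.
AZ=us-east-1a

# Provisioned-IOPS (io1) volumes, one per directory, each sized and
# provisioned independently: data files, journal, and log.
aws ec2 create-volume --availability-zone "$AZ" \
    --volume-type io1 --size 500 --iops 4000   # data files
aws ec2 create-volume --availability-zone "$AZ" \
    --volume-type io1 --size 25 --iops 250     # journal
aws ec2 create-volume --availability-zone "$AZ" \
    --volume-type io1 --size 10 --iops 100     # log

# A memory-optimized, EBS-optimized instance to run mongod.
# ami-xxxxxxxx is a placeholder for your chosen AMI.
aws ec2 run-instances --image-id ami-xxxxxxxx \
    --instance-type r3.xlarge --ebs-optimized
```

The volumes would then be attached with `aws ec2 attach-volume` and mounted at the MongoDB data, journal, and log paths respectively.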
The MongoDB documentation guide to Deploying MongoDB to EC2 contains more detail, as well as a walkthrough of setting up a MongoDB instance on EC2.
GCE Specific Optimizations
Google Compute Engine differs significantly from EC2, most importantly here in its IO subsystem. With GCE, both RAIDing and separating log, journal, and data files onto their own volumes are unnecessary, and in fact negatively impact performance. This is because GCE Persistent Disks are already implemented as stripes across many physical disks, making RAIDing redundant. This is also why volume size translates directly into available IOPS.
To get the most out of MongoDB on GCE, follow these guidelines:
- Use the high-memory machine types, and choose among them based on the memory footprint you need.
- Volume size is directly correlated to the IOPS available, making it very easy to understand the performance you can expect. Always keep in mind that the size you allocate must be the greater of the size you need to store your data and the size you need to guarantee the IOPS performance you need. Refer to the documentation for Compute Engine Disks for an extensive discussion of the performance characteristics of Persistent Disks, as well as several useful examples.
- Virtual machines have IOPS limits of their own, detailed in the GCE Disks documentation.
- As explained above, don’t use RAIDs!
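A minimal `gcloud` sketch of the guidelines above follows. The disk name, instance name, zone, machine type, and size are all placeholder assumptions; the key points are a single disk sized for the IOPS you need, on a high-memory machine type, with no RAID.

```shell
# Size the disk for the IOPS you need, not just the data: on GCE,
# available IOPS scales directly with Persistent Disk size.
gcloud compute disks create mongodb-data \
    --zone us-central1-a --size 500GB

# One high-memory instance with that single disk attached; no RAID,
# and no separate journal or log volumes.
gcloud compute instances create mongodb-1 \
    --zone us-central1-a --machine-type n1-highmem-4 \
    --disk name=mongodb-data
```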
The MongoDB documentation guide to Deploying MongoDB to GCE contains more detail, as well as a walkthrough of setting up a MongoDB instance on GCE. Also, download our operations whitepaper for best practices on deploying and managing a MongoDB cluster:
Published at DZone with permission of Francesca Krihely , DZone MVB. See the original article here.