MongoDB Atlas: Cloud and Data Protection Requirements
MongoDB introduces Atlas, its database-as-a-service platform. Come check out what it has to offer!
Atlas, holding up your databases in the cloud. Not sure if that’s what they were going for, but I personally like the analogy.
Last week at MongoDB World '16, MongoDB announced Atlas, a database-as-a-service offering of the company's popular open-source NoSQL database. Rather than dealing with the complexities of setting up, provisioning, and configuring a MongoDB cluster yourself, Atlas lets companies leave all those details to the masters of MongoDB, freeing up application developers to focus on more important things (like application development). Atlas is currently available on Amazon AWS, with plans to expand to other public cloud platforms such as Microsoft Azure and Google Cloud Platform.
Depending on the needs of your application, Atlas allows you to scale your cluster up or down, both vertically (with more powerful servers) and horizontally (with more shards in your cluster). You can also provision more disk space as your data grows. Simply tell Atlas your desired specifications and it will automatically update your cluster in the cloud to match. The documentation is unclear on whether Atlas leverages AWS autoscaling features, but it appears that scaling must be initiated manually.
Here at Datos IO, I work on RecoverX for MongoDB, our backup and recovery software platform for scale-out databases, so I was excited to get access to Atlas (along with the company credit card!) and test it out. Here's my experience with Atlas and initial thoughts about it.
Bringing up a MongoDB Cluster
When you first log into Atlas, you're met with a pretty simple UI. The very first thing you can do is to start up a new cluster in the cloud.
Before deploying a cluster, you need to know the AWS region to place the cluster, AWS instance size, amount of disk space needed, whether to use encrypted storage, whether to enable backups, the speed of the underlying storage (i.e. number of IOPS), and finally the number of shards and replica set members.
You first specify the type of AWS instance you want to run your cluster on (ranging from M10 to M100) and how much disk space you want to provision, anywhere from 10GB to 16TB. All instances can be configured with encrypted storage at no additional cost. I started by selecting the default settings for the M10 instance, which also happens to be the cheapest.
After specifying your instance, you can configure the number of nodes in your cluster. Replica sets can have three, five, or seven members. You can also specify whether the cluster should be sharded and how many shards it contains. Note that sharded clusters require an AWS instance of M50 or higher, one of the more expensive tiers, so the jump from a basic unsharded cluster to a sharded one is fairly expensive!
The cheapest sharded cluster is 56 times more expensive than the cheapest unsharded cluster, so you probably don't want to use Atlas for sharded test clusters if you can help it.
After you're satisfied with your cluster configuration, you can just click "Confirm & Deploy." My cluster was spun up within a couple of minutes, but your mileage may vary. After adding the IP addresses from which you'll access the Atlas instances to a security whitelist, you can begin using your new MongoDB cluster. Something small that I really appreciated about Atlas is that it gives you the exact command and hostnames you should use to connect to your cluster (both through the mongo shell and through MongoDB's many drivers). It makes connecting to your cluster incredibly simple. Just copy and paste.
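To illustrate what that copy-and-paste step hands you, here is a sketch of how such a replica-set connection URI is structured. All hostnames, the user, the password, and the replica set name below are hypothetical placeholders, not real Atlas values:

```python
# Build a MongoDB replica-set connection URI of the kind Atlas displays.
# Every hostname and credential here is a made-up placeholder.
def build_uri(user, password, hosts, replica_set, use_ssl=True):
    host_part = ",".join("{}:27017".format(h) for h in hosts)
    options = "replicaSet={}".format(replica_set)
    if use_ssl:
        options += "&ssl=true"
    return "mongodb://{}:{}@{}/admin?{}".format(user, password, host_part, options)

uri = build_uri(
    "appUser", "s3cret",
    ["cluster0-shard-00-00.example.net",
     "cluster0-shard-00-01.example.net",
     "cluster0-shard-00-02.example.net"],
    "Cluster0-shard-0",
)
print(uri)
```

The same string can be passed directly to the mongo shell or to a driver's client constructor.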
Atlas gives you high-level stats of your cluster, including IOPS, read and write performance, disk usage, etc. You can also get more detailed metrics for each node.
Connecting to your cluster made easy! Just download an updated mongo binary, then copy and paste.
After creating your cluster, it is fairly simple to scale horizontally by adding more shards. For me, the whole process of upgrading my 1×3 cluster to a 2×3 cluster took around 10 minutes (again, your mileage may vary). Pretty slick! In this process, Atlas automatically provisions more servers and configures your cluster with new config servers, mongos routers, and shards. Notably, once you shard a cluster, you cannot go back down to an unsharded instance, so be absolutely sure before you decide to scale up to a sharded cluster; you won't be able to go back!
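As a back-of-the-envelope sketch of why the 1×3 to 2×3 jump adds more than just three data nodes, here is the node-count arithmetic. The config server and mongos counts below are assumptions for illustration, not published Atlas internals:

```python
# Rough node-count arithmetic for "shards x members" cluster topologies.
# A sharded cluster also needs config servers and at least one mongos
# router; the default counts here are assumed, not confirmed Atlas values.
def total_nodes(shards, members_per_shard, config_servers=3, mongos=1):
    if shards <= 1:
        return members_per_shard  # unsharded: just the replica set
    return shards * members_per_shard + config_servers + mongos

print(total_nodes(1, 3))  # unsharded 1x3
print(total_nodes(2, 3))  # sharded 2x3, plus assumed overhead nodes
```

Under these assumptions, a 2×3 cluster runs 10 nodes where the 1×3 cluster ran 3, which helps explain the price jump noted earlier.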
Overall, outside of a few unsupported cases, the process of setting up a cluster and manually scaling it was quite smooth. Reducing cluster configuration to a few tunable options is definitely a step in the right direction, but Atlas can do much more. Compare it with existing cloud database services such as Amazon's DynamoDB or Microsoft's DocumentDB, where all you need to specify is the IOPS requirement of your application (application IOPS, not storage IOPS as in Atlas). In those competing cloud databases, concerns such as replication, number of shards, and high availability are hidden from the user. Granted, DocumentDB has been out for a year and DynamoDB for many years before that, so it may just be a matter of time before Atlas catches up. Regardless, there is a lot more Atlas can do to be truly serverless and even simpler to use. It is a solid first step, but much more needs to be built to compete with other cloud databases and win enterprise customers.
Under the Covers: Backup and Recovery
Now, because I work on something similar here at Datos IO, I was quite interested in Atlas' backup and recovery options. At first glance, it looks similar to the backup and recovery services provided by MongoDB's earlier products, such as Cloud Manager and Ops Manager. After playing around with the feature, I'm fairly convinced that the backup technology Atlas uses is essentially the same as in those earlier offerings, only applied to Atlas instances.
Backups are free for the first 1GB and cost $2.50 per GB per month after that. Backups can be scheduled at set intervals, with a minimum interval of six hours. I was initially put off by that minimum (after all, a LOT can happen in six hours!), but after some investigation I found that Atlas also allows you to schedule "checkpoints" between snapshots. Checkpoints can be used as restore points, but they take longer to restore because all deltas since the last snapshot must be applied. It's important to note that this feature is only available for sharded clusters. If you want to protect data on an unsharded cluster, you're stuck with the minimum snapshot interval of six hours.
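The pricing above translates into a quick monthly cost estimate. A minimal sketch, assuming the stated rate of $2.50 per GB per month with the first 1GB free:

```python
# Estimated monthly Atlas backup cost from the pricing described above:
# the first 1 GB is free, then $2.50 per GB per month.
def backup_cost_per_month(gb_stored, free_gb=1.0, rate_per_gb=2.50):
    billable = max(0.0, gb_stored - free_gb)
    return billable * rate_per_gb

print(backup_cost_per_month(0.5))  # inside the free tier
print(backup_cost_per_month(101))  # 100 billable GB
```

At 101GB of backup data, that works out to $250 per month, which is why the cluster-level backup granularity discussed below matters for your bill.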
Backups can be enabled by toggling a flag during cluster initialization. At restore time, you can select a specific snapshot to restore. If you need finer granularity, Atlas also provides point-in-time recovery, which is pretty neat: you can pick a time between two snapshots to restore to, at minute-level granularity. If you need finer granularity still, you can specify the exact oplog timestamp and increment you want to restore to.
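To make those granularity options concrete, here is a small sketch of the two kinds of restore points: a minute-granularity point in time, and an exact oplog position expressed as a (seconds, increment) pair in the style of a BSON Timestamp. The helper names and values are hypothetical, not Atlas' API:

```python
from datetime import datetime, timezone

def minute_restore_point(year, month, day, hour, minute):
    """Truncate a restore request to minute granularity (illustrative)."""
    dt = datetime(year, month, day, hour, minute, tzinfo=timezone.utc)
    return int(dt.timestamp())

def oplog_position(unix_seconds, increment):
    """Finer granularity: an explicit oplog (timestamp, ordinal) pair,
    as in MongoDB's BSON Timestamp type."""
    return (unix_seconds, increment)

point = minute_restore_point(2016, 7, 1, 14, 30)
print(point, oplog_position(point, 5))
```

The minute-level form is what the point-in-time recovery UI asks for; the oplog pair pins the restore to one specific operation within that second.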
As I did a couple of backups and restores, I developed a few reservations about the feature. Most of my qualms arise from the fact that Atlas treats the entire cluster as its "unit" of backup: every backup is a snapshot of the whole cluster, with no finer granularity, and this is problematic in a few ways.
First of all, since Atlas takes snapshots of the entire cluster, you can't control which data gets backed up. Say, for example, only a small portion of your database contains critical, sensitive information that needs to be backed up. With Atlas, in order to back up your critical data, you have to back up all your noncritical data as well! Since the cost of backup grows with the size of your data, you end up paying unnecessarily for backups of data you don't really care about.
Moreover, your recovery options are constrained by the fact that backups are taken at the cluster level. Cluster-level snapshots must be restored as entire clusters; you can't restore a specific database or collection. This is problematic when, say, only one collection in your production cluster becomes corrupted. In that case, Atlas takes the heavy-handed approach of bringing down the entire cluster and resetting every collection back to the point of the snapshot, rather than restoring only the problematic collection.
Lastly, Atlas only allows you to restore snapshots to clusters with the same number of shards as the snapshot. For example, if you have a cluster with three shards, you can only restore to other clusters with three shards. One reason you might want to restore data to a cluster with a different topology is testing. Perhaps you want to measure application performance as you increase the number of shards in your cluster, or perhaps you just need a small, cheap cluster for basic tests. If you want to restore production data to a test cluster in Atlas, you'll either have to spin up a cluster with the same topology as the production cluster (good for MongoDB, sad for your wallet), or manually download the backup from Atlas and then manually restore that data into a cluster of a different topology. Both approaches work, but neither is ideal.
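The shard-count restriction boils down to a simple equality check on topologies. This sketch is illustrative of the rule as described above, not Atlas' actual API:

```python
# Illustrative model of the restore restriction: a snapshot can only be
# restored to a target cluster with the same shard count.
def can_restore(snapshot_shards, target_shards):
    return snapshot_shards == target_shards

print(can_restore(3, 3))  # same topology: allowed
print(can_restore(3, 1))  # prod snapshot -> small test cluster: rejected
```

A more flexible service would instead re-partition the data across whatever target topology you choose, which is exactly the manual download-and-restore workaround described above.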
Atlas is a pretty good move by MongoDB: it makes building applications on top of its database even easier by delegating the mundane details of cluster configuration to the people who know MongoDB best. For me, Atlas was a simple and fast way to create and scale MongoDB clusters. While a good step in the right direction, Atlas could do more to become a truly serverless offering like other cloud databases on the market. Here at Datos IO, our mission is to help make MongoDB a de facto standard for enterprises, and we are working hard to complement it with next-generation data protection capabilities.
Published at DZone with permission of , DZone MVB. See the original article here.