Researching NoSQL Is No Different Than The Early RDBMS Market
How quickly we forget what the dark ages of RDBMS were like. Researching NoSQL solutions today is not much different from researching RDBMS software in the mid-90s. Each solution had its problems, some of them severe, and most of them were fixed within a few short release cycles. The current problem, then, is how to research appropriate solutions in a market as immature as NoSQL.
First, you need to figure out whether NoSQL fits your problem:
- Do your data needs include massive scalability in a system that supports a highly distributed set of nodes resilient to network outages and inconsistency? Then you might consider NoSQL alternatives.
- Do you need a DBMS that doesn’t try to force you into static data schemas? Do you need the utmost in data model flexibility? Then you might consider NoSQL alternatives.
- Is your database going to be filled with a truly massive number of objects with very simple query requirements? Is throughput an absolute priority down to the millisecond? Then you might consider NoSQL alternatives.
- Is your database going to be hit with a truly fantastic amount of write activity? Then you might consider NoSQL alternatives.
- Would your solution benefit greatly from the utmost in maintenance and administration simplicity? Then you might consider NoSQL alternatives.
If you talk to any traditional RDBMS developer, they will say that none of these points really requires a NoSQL solution, or that some of the issues are design problems rather than data storage issues. Those talking points aside, there is a reason these issues keep coming up. Traditional RDBMS systems being used at scale require significant administration and an administrator who is very good at their job. One reason people are using NoSQL solutions in these scenarios is that you do not need to be an experienced Oracle cluster DBA; you can just be a well-read software engineer.
One difference between RDBMS and NoSQL solutions is that with NoSQL you really need to understand how you are going to use the database. For an RDBMS this was not a concern, because relational databases are meant as general-purpose data storage. The various NoSQL solutions have different purposes: document stores are different from object stores, which are different from the other storage models. Choosing the appropriate type of solution is worthy of its own blog post, so I won’t cover it here. The important thing to note is that each storage model has advantages and disadvantages, so choosing the right one can make your implementation much smoother.
However, NoSQL is an immature market, so there are other issues you need to be aware of when researching solutions. First, there is the anonymous rant about MongoDB that has been heavily talked about this week. The following is a condensed version of the issues in the rant:
- MongoDB issues writes in unsafe ways by default in order to win benchmarks: If you don’t issue getLastError(), MongoDB doesn’t wait for any confirmation from the database that the command was processed.
- MongoDB can lose data in many startling ways: Recovery on a corrupt database was not successful before the transaction log was introduced. Replication between master and slave had gaps in the oplogs, causing slaves to be missing records the master had. Yes, there is no checksum, and yes, the replication status showed the slaves as current.
- MongoDB requires a global write lock to issue any write
- MongoDB’s sharding doesn’t work that well under load: Adding a shard under heavy load is a nightmare. Mongo either moves chunks between shards so quickly it DOSes the production traffic, or refuses to move chunks altogether.
- mongos is unreliable: The mongod/config server/mongos architecture is actually pretty reasonable and clever. Unfortunately, mongos is complete garbage. Under load, it crashed anywhere from every few hours to every few days.
- MongoDB actually once deleted the entire dataset: MongoDB 1.6, in replica set configuration, would sometimes determine the wrong node (often an empty node) was the freshest copy of the data available. It would then DELETE ALL THE DATA ON THE REPLICA (which may have been the 700GB of good data) AND REPLICATE THE EMPTY SET… They fixed this in 1.8.
- Things were shipped that should never have been shipped: Things with known, embarrassing bugs that could cause data problems were in “stable” releases—and often we weren’t told about these issues until after they bit us… The response was to send us a hot patch that they were calling an RC internally, and have us run that on our data.
- Replication was lackluster on busy servers: Replication would often, again, either DOS the master, or replicate so slowly that it would take far too long and the oplog would be exhausted (even with a 50G oplog).
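To make the first bullet concrete, here is a rough sketch of what the unsafe-by-default behavior looked like in the mongo shell of that era. The collection name and document are hypothetical; modern drivers have since replaced explicit getLastError() calls with configurable write concerns, so treat this as an illustration of the old semantics, not current usage:

```javascript
// Fire-and-forget: this returns immediately, with no acknowledgement
// from the server that the write was actually applied (or even parsed).
db.users.insert({ name: "alice" });

// To learn whether the write succeeded, the client had to ask
// explicitly. Here we also request replication to 2 nodes (w: 2),
// timing out after 5 seconds if replication has not caught up.
db.runCommand({ getLastError: 1, w: 2, wtimeout: 5000 });
```

The benchmark advantage the rant alludes to comes from the first call: a client that never checks getLastError() can issue writes as fast as the network allows, at the cost of never knowing whether they landed.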
Obviously, this is a long list of problems, but you will notice that the version numbers vary. MongoDB v2.0.1 has already been released, and v1.8 was originally released in February 2011, which suggests that some of this information could simply be old. More than anything, the post reads like a list of issues accumulated over several versions. And that is part of the problem when reading posts like this. If you try to remember what Oracle was like prior to v7, you probably can’t, but ask an experienced DBA and they can tell you horror stories. Things were not all that rosy when RDBMS systems were trying to mature.
This is also the nature of software: it has defects, and typically severe defects, when it is first released. NoSQL is still leading-edge technology, so you have to assume you are taking some risks when you try to implement your storage solution with one of these products. To put this into more obvious terms, one of the most successful pieces of software sucked until it got past v3.1 and that was