Addressing the NoSQL Criticism
There were quite a few NoSQL critics at OSCON this year. I imagine this was true of past years as well, but I don't know that firsthand. I think there are several reasons behind the general disdain for NoSQL databases.
First, NoSQL is a horrible name. It implies that there's something wrong with SQL and that it needs to be replaced with a newer and better technology. If you have structured data that needs to be queried, you should probably use a database that enforces a schema and implements Structured Query Language. I've heard people start redefining NoSQL as "not only SQL." This is a much better definition and doesn't antagonize those who use existing SQL databases. An SQL database isn't always the right tool for the job, and NoSQL databases give us some other options.
Second, there are way too many different types of databases categorized as NoSQL. There are document-oriented databases, key/value stores, graph databases, column-oriented databases, in-memory databases, and other database types. There are also databases that combine two or more of these properties. It's easy to criticize something that is vague and loosely defined. As the NoSQL space matures, we'll start to get some more specific definitions, which will be much more helpful.
Third, at least one very popular vendor in the NoSQL space has a history of making irresponsible claims about their database's capabilities. Antony Falco of Basho (makers of Riak) has a great blog post on the topic: see "It's Time to Drop the 'F' Bomb – or 'Lies, Damn Lies, and NoSQL.'" If you care about your data, please read Tony's blog post. It's unfortunate that the specious claims of a few end up making everyone in the NoSQL space look bad.
I also want to address some of the specific criticisms I've heard of NoSQL as they apply (or don't apply) to CouchDB (I'm not familiar enough with other NoSQL databases to speak to those).
SQL Databases Are More Mature
This is absolutely true. If you pick a NoSQL database, you should do your homework and make sure that your database of choice truly respects the fact that writing a reliable database is a very difficult task. Most NoSQL databases take the problem very seriously and try to learn from those that have come before them. But why create a new type of database in the first place? Because an SQL database is not the right solution to every problem. When all you have is a schema, everything looks like a join. The data model in CouchDB (JSON documents) is a great fit for many web applications.
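As a sketch of why JSON documents suit many web applications, consider a blog post with its tags and comments embedded in a single document rather than spread across joined tables. The field names and IDs here are hypothetical, not part of any CouchDB schema:

```python
import json

# A hypothetical blog post stored as one CouchDB-style JSON document.
# In a relational schema this data would typically span posts, tags,
# and comments tables joined at query time; here it is embedded.
post = {
    "_id": "post-001",
    "type": "post",
    "title": "Addressing the NoSQL Criticism",
    "author": "jane-doe",
    "tags": ["nosql", "couchdb"],
    "comments": [
        {"author": "alice", "text": "Great overview."},
        {"author": "bob", "text": "What about compaction?"},
    ],
}

# The whole aggregate serializes and loads as a single unit.
doc = json.dumps(post)
```

One read fetches the entire aggregate, which matches the "show a post with its comments" access pattern common in web applications.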
SQL Scales Just Fine
This is also true. If you're picking a NoSQL database because it "scales," you're likely doing it wrong. Scaling is typically more aspiration than reality. There are many other factors to consider and questions to ask when choosing a database technology besides "does it scale?" If you actually do have to scale, your database isn't going to magically do it for you; you can't abstract scaling problems away to your database layer. However, I will say that many NoSQL databases have properties (such as eventual consistency) that make scaling easier and more intuitive. For example, it's dead simple to replicate data between CouchDB databases.
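Replication is triggered by POSTing a small JSON body to CouchDB's `_replicate` endpoint. Here is a minimal sketch of such a request; the hostnames and database names are placeholders, and the request is only constructed, not sent:

```python
import json
import urllib.request

# Hypothetical source and target; "continuous" keeps replication
# running as new changes arrive.
replication = {
    "source": "http://localhost:5984/mydb",
    "target": "http://replica.example.com:5984/mydb",
    "continuous": True,
}

# Build (but do not send) the POST to the _replicate endpoint.
req = urllib.request.Request(
    "http://localhost:5984/_replicate",
    data=json.dumps(replication).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
```

Because replication is just an HTTP call with source and target URLs, the same mechanism works between local databases, across servers, or from a laptop to the cloud.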
Atomicity, Consistency, Isolation, and Durability (ACID)
CouchDB is ACID compliant. Within a CouchDB server, for a single document update, CouchDB has the properties of atomicity, consistency, isolation, and durability (ACID). No, you can't have transactions across document boundaries. No, you can't have transactions across multiple servers (although BigCouch does have quorum reads and writes). Not all NoSQL databases are durable (at least with default settings).
If you want the best possible guarantee of durability, you can change CouchDB's delayed_commits configuration option from true (the default) to false. Basically, this will cause CouchDB to do an explicit fsync after each operation (which is very expensive and slow). Note that operating systems, virtual machines, and hard drives often lie about fsync, so you really need to research how your particular system works if you're concerned about durability. If you think your write speeds are too good to be true, they probably are.
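Concretely, this change can be made in the server's ini configuration (a sketch; the exact file location varies by installation, typically a `local.ini`):

```ini
; local.ini — force an explicit fsync on every update
; (slower writes, strongest durability guarantee)
[couchdb]
delayed_commits = false
```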
If you leave delayed commits on, you also have the option of setting a batch=ok parameter when creating or updating a document. This will queue up batches of documents in memory and write them to disk when a predetermined threshold has been reached (or when triggered by the user). In this case, CouchDB will respond with an HTTP response code of 202 Accepted, rather than the normal 201 Created, so that the client is informed of the reduced integrity guarantee.
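A batched write is just an ordinary document PUT with the extra query parameter. This sketch builds such a URL (database and document IDs are placeholders; nothing is sent):

```python
from urllib.parse import urlencode

# Hypothetical database and document ID.
base = "http://localhost:5984/mydb/doc-001"

# A PUT to this URL queues the write in memory; CouchDB answers
# 202 Accepted instead of 201 Created to flag the weaker guarantee.
url = base + "?" + urlencode({"batch": "ok"})
```

A client using batch=ok should treat a 202 as "received, not yet on disk" and only rely on the write after a later read or explicit flush confirms it.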
At least one NoSQL database requires a consistency check after a crash (guess which one). This can be a very slow process, causing additional downtime. CouchDB's crash-only design and append-only files mean that there is no need for consistency checks. There's no shutdown process in CouchDB: shutting it down is the same as killing the process.
CouchDB's append-only files do come at a cost: disk space and the need for compaction. If you don't compact your database, it will eventually fill up your hard drive. There is no automatic compaction in CouchDB. Compaction is triggered manually (it can easily be automated through a cron job) and should be done when the database's write load is not at full capacity.
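Triggering compaction is a single POST to the database's `_compact` endpoint. A minimal sketch, with a placeholder database name and the request only constructed, not sent:

```python
import urllib.request

# POST /mydb/_compact starts compaction; CouchDB answers 202 Accepted
# and compacts in the background while the database stays available.
req = urllib.request.Request(
    "http://localhost:5984/mydb/_compact",
    headers={"Content-Type": "application/json"},
    method="POST",
)
```

A cron entry invoking an equivalent `curl -X POST` during off-peak hours is a common way to automate this.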
No Ad Hoc Queries
This is a feature, not a bug. CouchDB only lets you query against indexes. This means that queries in CouchDB will be extremely fast, even on huge data sets. Most web applications have predefined usage patterns and don't need ad hoc queries. If you do need ad hoc queries, say for business intelligence reporting, you can replicate your data (using CouchDB's changes feed) to an SQL database.
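Those indexes are defined ahead of time as views in a design document. CouchDB map functions are written in JavaScript and shipped as strings; in this sketch the design document name, document type, and fields are hypothetical:

```python
import json

# A design document defining one view (index) over posts by author.
design_doc = {
    "_id": "_design/posts",
    "views": {
        "by_author": {
            # JavaScript map function, stored as a string.
            "map": "function(doc) {"
                   "  if (doc.type === 'post') { emit(doc.author, null); }"
                   "}"
        }
    },
}

payload = json.dumps(design_doc)
```

Once the view is built, a GET against something like `/mydb/_design/posts/_view/by_author?key="alice"` reads straight from the precomputed index rather than scanning documents.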
Building Indexes Is Slow
If you have a large number of documents in CouchDB, the first build of an index will be very slow. However, each query after that will be very fast. CouchDB's MapReduce is incremental, meaning new or updated documents can be processed without rebuilding the entire index. In most scenarios, this means a small performance hit to process documents that are new or updated since the last time the view was queried. You can optionally include the stale=ok parameter with your query. This instructs CouchDB not to bother processing new or updated documents and to just give you a stale result set (which will be faster than processing new or updated documents). As of CouchDB 1.1, you can include a stale=update_after parameter with your query. This will return a stale result set, but will trigger an update of the index (if necessary) after your query results are returned, bringing the index up to date for future queries by you or other clients.
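To make "incremental" concrete, here is a toy model (purely illustrative, not CouchDB's implementation) in which a query only processes changes the index has not yet seen, and a stale query skips even that:

```python
docs = {}        # doc_id -> document
index = {}       # author -> count (a simple map/reduce result)
changes = []     # (seq, doc_id) change log, like an update sequence
indexed_seq = 0  # how far into the change log the index has processed

def write(doc_id, doc):
    docs[doc_id] = doc
    changes.append((len(changes) + 1, doc_id))

def query(stale_ok=False):
    global indexed_seq
    if not stale_ok:
        # Incremental update: only changes since the last query.
        for seq, doc_id in changes[indexed_seq:]:
            author = docs[doc_id]["author"]
            index[author] = index.get(author, 0) + 1
            indexed_seq = seq
    return dict(index)

write("a", {"author": "alice"})
write("b", {"author": "bob"})
fresh = query()              # first query pays to index both documents
write("c", {"author": "alice"})
stale = query(stale_ok=True) # stale result: new doc not yet processed
updated = query()            # incremental: only the one new doc is processed
```

The first query does the expensive work; the stale query returns immediately with the old result; the final query only touches the single new document, mirroring stale=ok versus a normal view query.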
No Schema
Some say that not having a schema is a problem. Sure, if you have structured data, you probably want to enforce a schema. However, not all applications have highly structured data; many web applications work with unstructured data. If you've encountered any of the following, you may want to consider a schema-free database:
- You've found yourself denormalizing your database to optimize read performance.
- You have rows with lots of NULL values because many columns only apply to a subset of your rows.
- You find yourself using SQL antipatterns such as entity-attribute-value (EAV), but can't find any good alternatives that fit with both your domain and SQL.
- You're experiencing problems related to the object-relational impedance mismatch. This is typically associated with use of an object-relational mapper (ORM), but can happen when using other data access patterns as well.
I'll add that you can enforce schemas in CouchDB through the use of document update validation functions.
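A validation function lives in a design document under the `validate_doc_update` field and, like map functions, is JavaScript shipped as a string. In this sketch the design document name and the required field are hypothetical examples:

```python
import json

# A design document that rejects any document lacking a title.
validation_doc = {
    "_id": "_design/validation",
    # JavaScript run by CouchDB on every create/update; throwing
    # {forbidden: ...} rejects the write.
    "validate_doc_update":
        "function(newDoc, oldDoc, userCtx) {"
        "  if (!newDoc.title) {"
        "    throw({forbidden: 'Every document needs a title.'});"
        "  }"
        "}",
}

payload = json.dumps(validation_doc)
```

Because the function sees the old document and the user context as well as the new document, it can enforce per-field rules, immutability, or ownership checks, giving you as much or as little schema as the application needs.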
Did I miss anything? What other criticisms of NoSQL databases exist? Please comment and I'll do my best to address each.