The NoSQL moniker, coined circa 2009, marked a move away from the "traditional" relational model. There were quite a few non-relational databases around prior to 2009, but in the last few years we've seen an explosion of new offerings (see, for example, the "NoSQL landscape" in a previous post I made). Generally speaking (and everything here is a wild generalization, since not all solutions are created equal and there are many types of solutions), NoSQL mostly means some relaxation of ACID constraints and, as the name implies, the removal of the Structured Query Language (SQL) both as a data definition language and, more importantly, as a data manipulation language, in particular SQL's query capabilities.
ACID and SQL are a lot to lose, and NoSQL solutions offer a few benefits to compensate, mainly:
- Scalability – either relative scalability, meaning scaling more cheaply than a comparable RDBMS at the same scale point, or absolute scalability, meaning scaling beyond what an RDBMS can. Scalability is usually achieved by preferring partition tolerance over consistency in Eric Brewer's CAP theorem and relying on "eventual consistency" (more on this later).
- Simpler models – i.e., the mapping of programming structures to storage structures is straightforward, avoiding the whole "object/relational mapping quagmire" (or, as Ted Neward called it, "the Vietnam of computer science"). I have to say that in my experience this is only a half-truth: it holds only up to a point, and when you need to scale and/or have high-performance requirements, you need to carefully design your schemas, and it isn't always "simple".
- Late-binding schemas – this is a real flexibility boon, as you can store data in a form close to its origin and apply the schema on read, so you can deliver poly-structured data and handle semi-structured data easily.
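The schema-on-read idea in the last bullet can be sketched in a few lines. This is a minimal, hypothetical illustration (the sample records and the `read_as` helper are mine, not from any particular NoSQL product): records are stored as raw JSON with no upfront schema, and each reader projects its own schema when it pulls them out.

```python
import json

# Ingest: store records close to their origin form, with no upfront schema.
# The records are heterogeneous ("poly-structured"): fields vary per record.
raw_store = [
    json.dumps({"user": "ada", "event": "login", "ts": 1}),
    json.dumps({"user": "bob", "event": "click", "ts": 2, "target": "buy-button"}),
    json.dumps({"user": "ada", "ts": 3}),  # semi-structured: 'event' is missing
]

def read_as(schema, store):
    """Apply a schema on read: project each raw record onto the requested
    fields, filling in defaults for fields a record never had."""
    for raw in store:
        record = json.loads(raw)
        yield {field: record.get(field, default) for field, default in schema.items()}

# The reader, not the writer, decides what the data looks like.
audit_schema = {"user": None, "event": "unknown", "ts": 0}
rows = list(read_as(audit_schema, raw_store))
print(rows[2])  # {'user': 'ada', 'event': 'unknown', 'ts': 3}
```

A second reader could apply a completely different schema to the same stored bytes, which is exactly the flexibility that an upfront relational schema trades away.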
Eventual consistency and simple query mechanisms can work for a while and for some use cases, but as adoption of NoSQL solutions has become more widespread, we can see that the market needs more.
Eventual consistency means that if new updates stop flowing in, then after a while all reads will return the last updated value. Since new updates rarely stop, and since "after a while" is not well defined, this is a rather weak guarantee, and we see some efforts to make stronger ones. Peter Bailis and Ali Ghodsi published a good paper called "Eventual Consistency Today: Limitations, Extensions, and Beyond," where they go over some of the options. The NoSQL landscape is too wide to say this is happening everywhere, but some solutions are moving in this direction. For example, in HBase (the NoSQL database I've used most in the past few years) we've seen the addition of multi-version concurrency control, which provides ACID guarantees for single-row operations (and which can be tuned down for performance).
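The definition above can be illustrated with a toy model (the assumptions are mine: two replicas, last-writer-wins by timestamp, and a naive anti-entropy exchange; no real system is this simple). While writes keep arriving, the replicas can disagree; once writes stop and the replicas sync, every read returns the last written value, which is all that eventual consistency promises.

```python
# Toy model of eventual consistency: last-writer-wins replicas that
# converge via anti-entropy once updates stop. Illustrative only.

class Replica:
    def __init__(self):
        self.ts, self.value = 0, None

    def write(self, value, ts):
        # Last-writer-wins: keep the value with the highest timestamp.
        if ts > self.ts:
            self.ts, self.value = ts, value

    def sync_with(self, other):
        # Anti-entropy: both replicas adopt the newer of the two states.
        newest = max((self.ts, self.value), (other.ts, other.value))
        self.ts, self.value = newest
        other.ts, other.value = newest

a, b = Replica(), Replica()
a.write("v1", ts=1)        # one write lands on replica a only
b.write("v2", ts=2)        # a later write lands on replica b only
diverged = (a.value != b.value)  # True: reads disagree mid-stream

a.sync_with(b)             # updates stop; replicas exchange state
converged = (a.value == b.value == "v2")  # True: reads agree on the last write
```

Note what the model does not guarantee: between the writes and the sync, a client can read `"v1"` from one replica and `"v2"` from the other, which is exactly the window the post calls a "rather weak guarantee."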
Nevertheless, providing real guarantees under real conditions can prove to be rather tricky. I highly recommend reading Kyle Kingsbury's series of great posts on Jepsen, where he looks at how Postgres, MongoDB, Redis, and Riak handle writes under network partitioning.
When we look at the NoSQL space, we see that a lot of the technologies are getting better, more advanced query languages: MongoDB's find gained some nice features, and Cassandra's query language is at its third version. But the one technology where introducing queries in general, and SQL specifically, has turned from a trend into a stampede is Hadoop. Hadoop has a multi-vendor, multi-distro ecosystem (not unlike Linux), and it seems each and every one of them wants to introduce its own SQL solution: Cloudera offers Impala; Hortonworks is working on the Stinger initiative to enhance Hive; Pivotal (née EMC Greenplum) has HAWQ; IBM is working on BigSQL; and even Salesforce.com (which does not offer a distro) offers an SQL skin for HBase called Phoenix. The last Hadoop Summit had a panel where some of these players debated the merits of their respective platforms, which is worth listening to.
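To give a feel for what an "SQL skin" over a key-value store buys you, here is a toy sketch (the table, names, and mini-planner are my own invention, not Phoenix code): a simple WHERE clause on the row key is compiled into a bounded range scan over a sorted key-value store, which is roughly the kind of translation a layer like Phoenix performs against HBase instead of scanning the whole table.

```python
from bisect import bisect_left, bisect_right

# A sorted key-value "table", standing in for an HBase table keyed by row key.
keys = ["row001", "row002", "row005", "row009"]
vals = [{"amt": 10}, {"amt": 20}, {"amt": 50}, {"amt": 90}]

def range_scan(lo, hi):
    """Compile `WHERE rowkey BETWEEN lo AND hi` into a bounded scan:
    binary-search the start/stop positions, then read only that slice."""
    start = bisect_left(keys, lo)
    stop = bisect_right(keys, hi)
    return list(zip(keys[start:stop], vals[start:stop]))

# "SELECT * FROM t WHERE rowkey BETWEEN 'row002' AND 'row005'"
result = range_scan("row002", "row005")
print(result)  # [('row002', {'amt': 20}), ('row005', {'amt': 50})]
```

The point of the sketch is the division of labor: the user writes a declarative predicate, and the SQL layer picks an access path, which is precisely the query-planning work that raw key-value APIs push back onto the application.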
The examples I've given above are mainly around Hadoop – naturally, as this is the environment I've been working with, I am more familiar with it. But more importantly, Hadoop seems to have managed to position itself as the main NoSQL, large-scale (a.k.a. big data) solution, and as such this "ReSQL" trend is more apparent there; it will also affect (and already does affect) other NoSQL offerings.
The thing is that NoSQL dropped SQL capabilities for simplicity, and wider adoption draws all those capabilities, and their complexity, back in. I guess the main problem is that the situation is even more complicated when we're also dealing with big data and its implications (e.g., late-binding schemas vs. the schema needs of the *structured* query language; immovable or hard-to-move data vs. joins, etc.).
Published at DZone with permission of Arnon Rotem-gal-oz, DZone MVB. See the original article here.