The Story of Multi-Model Databases
Multi-model databases give you the best of both worlds: you can use the best features of the polyglot persistence concept while minimizing its limitations.
The world of databases has changed significantly in the last eight years or so. Do you remember the time when the word "database" was equivalent to a relational database? Relational databases ruled this niche for more than forty years, and for a good reason: they offer strong consistency, transactions, and expressiveness, they are a good integration tool, and so on.
But forty years is a long period of time, and a number of things have changed during it, especially in the technology world. Today, we can see that relational databases cannot satisfy every need of today's IT world. A fixed database schema, a static representation of data, and the object-relational impedance mismatch are just some of the obstacles that users of relational databases faced. That, in turn, gave space for a completely new branch of databases to develop: NoSQL databases.
NoSQL Databases and Different Data Models
the term "nosql" was pretty and it was first tossed around back in 2009. it seems, however, that community is agreeing nowadays that it actually stands for not only sql. also, this term covers a wide range of databases. why is that? well, as relational database model was not a perfect fit for everyone's problems, people started to create different databases with models that can better handle obstacles they were facing. therefore, these databases are very different from each other.
That is how the data model became the main distinction between NoSQL databases and the way they function. Today, we can separate a few types of NoSQL databases based on their data model:
Key-value stores: store data as an associative array of key-value pairs, where each value is reached through a single unique key.
Column stores: store data in column families, organized by column rather than by row.
Graph stores: use graph structures with nodes, edges, and properties to represent, store, and query data.
Document stores: store data in self-describing structures (documents) that are usually similar to each other but don't have to be the same.
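To make the differences concrete, here is a toy sketch (not tied to any specific database product) of how the same "user" record could look in three of the models above; all identifiers and field names are invented for illustration:

```python
# Key-value: an opaque value reached through a single unique key.
kv_store = {"user:42": '{"name": "Ada", "follows": ["user:7"]}'}

# Document: a self-describing structure; documents need not share a schema.
document = {"_id": "user:42", "name": "Ada", "follows": ["user:7"]}

# Graph: explicit nodes plus edges that carry their own properties.
nodes = [{"id": "user:42", "name": "Ada"}, {"id": "user:7", "name": "Alan"}]
edges = [{"from": "user:42", "to": "user:7", "label": "follows"}]
```

Note how the graph form makes the relationship itself a first-class record, while the key-value form hides all structure inside the value.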
Each of these models tackles a different kind of problem, making them good for some solutions and bad for others. For example, document databases are good at storing unstructured data, but not as good at storing highly connected data as graph databases are. This distinction between models is also the secret that enabled the NoSQL boom to happen: users became more aware of their data and its nature. In this way, the NoSQL world gave us the ability to choose the best data model for our solution.
However, what if you work on a large system that has many different parts with numerous different problems to handle? A great deal of different kinds of data is tossed around in such a system, and more importantly, the nature of that data differs from one section of the system to another. Which database — or, to be more precise, which database model — should be used? That is how polyglot persistence emerged.
What this essentially means is that you use multiple databases within a single backend. Each chunk of the system uses the database that best fits its needs. The part of the system that handles structured data would use the relational model, the part that works with unstructured, object-like data would use the document model, the part that deals with analytical data would use a column model, and so on.
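The division of labor described above can be sketched as a set of repository classes, one per subsystem and store. Everything here is a hypothetical stand-in — the class names and the strings they build do not come from any real driver API:

```python
class OrderRepository:
    """Structured data -> relational model (the SQL string is illustrative)."""
    def save(self, order):
        return f"INSERT INTO orders (id) VALUES ({order['id']})"

class ProfileRepository:
    """Unstructured, object-like data -> document model."""
    def save(self, profile):
        return f"db.profiles.insert({profile})"

class MetricsRepository:
    """Analytical data -> column model."""
    def save(self, metric):
        return f"append to column family 'metrics': {metric}"

# One backend, three databases -- and three sets of drivers, backups,
# and operational concerns to keep consistent.
print(OrderRepository().save({"id": 1}))
print(ProfileRepository().save({"name": "Ada"}))
print(MetricsRepository().save({"page_views": 10}))
```

The cost of this flexibility is exactly what the next paragraph describes: every extra store multiplies the operational surface.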
This, of course, is easier said than done. Ensuring that a project with many databases is fault-tolerant is challenging, to say the least. Apart from the increased code complexity, data consistency and data duplication become frequent issues. Deployment becomes more complicated, too. In addition, synchronization of these databases is an issue that cannot be overlooked: if you want to back up data at a certain moment in time, for example, this becomes a problem because every database needs a different amount of time to back up.
Hence, polyglot persistence was a nice idea that had to evolve. What multi-model databases try to address are exactly the problems we face with the polyglot persistence concept.
What is the idea behind multi-model databases? They try to integrate different database models into a single engine. This engine should use a unified query language and expose a single API that works across all of its data models. Personally, this was a tough pill to swallow at first, so let's briefly explain how multi-model databases are able to map information from one data model to another.
The main concept is to keep all data in a single data model and then represent the other models by mapping the higher-level models to a lower-level representation. For example, let's say that we have three models in a multi-model database: document, key-value, and graph. Graphs can be mapped onto the document model by creating one collection for vertices and a separate collection for edges.
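A minimal sketch of that vertex/edge mapping, with collections modeled as plain lists of documents (the `_from`/`_to` field names are an assumption chosen for illustration, not a specific product's schema):

```python
# Graph stored as two document collections.
vertices = [
    {"_id": "alice"},
    {"_id": "bob"},
]
edges = [
    {"_id": "e1", "_from": "alice", "_to": "bob", "label": "knows"},
]

def neighbors(vertex_id):
    """A graph traversal expressed as an ordinary query over the edge collection."""
    return [e["_to"] for e in edges if e["_from"] == vertex_id]

print(neighbors("alice"))  # ['bob']
```

A real engine would index `_from` and `_to` so that traversals stay fast, but the principle is the same: graph operations become queries over document collections.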
Documents in document databases usually have a unique identifier per document. This way, the document model can be mapped onto a key-value store, where the key is the document's unique identifier and the value is the whole document. One can see how relational databases can be mapped to the key-value model, too. Therefore, the lowest level of representation is the key-value structure, and all other models can be mapped onto it. Once this is established, one can build a query language on top of it.
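That document-to-key-value mapping can be sketched in a few lines; JSON serialization and the `_id` field name are assumptions used purely for illustration:

```python
import json

# The underlying key-value structure.
kv = {}

def put_document(doc):
    """Key = the document's unique identifier, value = the whole document."""
    kv[doc["_id"]] = json.dumps(doc)

def get_document(doc_id):
    return json.loads(kv[doc_id])

put_document({"_id": "user:42", "name": "Ada"})
print(get_document("user:42")["name"])  # Ada
```

Document-level queries then reduce to key lookups plus deserialization, which is why the key-value layer can serve as the lowest level of representation.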
These features, of course, need to sit on top of highly performant multi-key ACID transactions, in a way that retains the NoSQL advantages of scalability and fault tolerance. Did I just describe the perfect database — one that gives you the ability to change the data model without sacrificing performance or scalability? That is the dream, indeed.
A multi-model database allows us to use the best features of the polyglot persistence concept while minimizing its limitations. Now we can create complex systems that use multiple database models with a single engine, which minimizes the complexity of development, operations, and deployment.
Read more from the author on Rubik's Code.