What Makes A Database Mature?
There are a lot of factors that go into making a database mature, including its community of developers, how it handles queries, and other factors.
Many database vendors would like me to take a look at their products and consider adopting them for all sorts of purposes. Often they're pitching something quite new and unproven as a replacement for mature, boring technology I'm using happily.
I would consider a new and unproven technology, and I often have. As I've written previously, though, a real evaluation takes a lot of effort, and that makes most evaluations non-starters.
Perhaps the most important thing I'm considering is whether the product is mature. There are different levels of maturity, naturally, but I want to understand whether it's mature enough for me to take a look at it. And in that spirit, it's worth understanding what makes a database mature.
For my purposes, maturity really means demonstrated capability and quality, with a lot of thought given to all the little things. The database needs to demonstrate the ability to solve specific problems well and with high quality. Sometimes this comes from customers, sometimes from a large user community (who may not be customers).
Here are some things I'll consider when thinking about a database, in no particular order.
- What problem do I have? It's easy to fixate on a technology and start thinking about how awesome it is. Some databases are just easy to fall in love with, to be frank. Riak is in this category. I get really excited about the features and capabilities, the elegance. I start thinking of all the things I could do with Riak. But now I'm putting the cart before the horse. I need to think about my problems first.
- Query sophistication. Does it offer sophisticated execution models to handle the nuances of real-world queries? If not, I'll likely run into queries that run much more slowly than they should, or that have to be pulled into application code. MySQL has lots of examples of this. Queries such as an ORDER BY with a LIMIT clause, which are super-common for web workloads, did way more work than they needed to in older versions of MySQL. (It's better now, but the scars remain in my mind.)
- Query flexibility. The downside of a sophisticated execution engine with smart plans is that they can go very wrong. One of the things people like about NoSQL is the direct, explicit nature of queries, where an optimizer can't be too clever for its own good and cause a catastrophe. A database needs to make up its mind: if it's simple and direct, OK. If it's going to be smart, the bar is very high. A lot of NoSQL databases that offer some kind of "map-reduce" query capability fall into the middle ground here: key-value works great, but the map-reduce capability is far from optimal.
- Data protection. Everything fails, even things you never think about. Does it automatically check for and guard against bit rot, bad memory, partial page writes, and the like? What happens if data gets corrupted? How does it behave?
- Backups. How do you back up your data? Can you do it online, without interrupting the running database? Does it require proprietary tools? If you can do it with standard Unix tools, there's infinitely more flexibility. Can you do partial/selective backups? Differential backups since the last backup?
- Restores. How do you restore data? Can you do it online, without taking the database down? Can you restore data in ways you didn't plan for when taking the backup? For example, if you took a full backup, can you efficiently restore just a specific portion of the data?
- Replication. What is the model: synchronous, async, partial, a blend? Statement-based, change-based, log-based, or something else? How flexible is it? Can you do things like apply intensive jobs (schema changes, big migrations) to a replica and then swap master and replica? Can you filter and delay and fiddle with replication in all different ways? Can you write to replicas? Can you chain replication? Replication flexibility is an absolutely killer feature. Operating a database at scale is very hard with inflexible replication. Can you do multi-source replication? If replication breaks, what happens? How do you recover it? Do you have to rebuild replicas from scratch? Lack of replication flexibility and operability is still one of the major pain points in PostgreSQL today. Of course, MySQL's replication provides a lot of that flexibility, but historically it didn't work reliably and gave users a huge foot-gun. I'm not saying either is best, just that replication is hard but necessary.
- Write stalls. Almost every new database I've seen in my career, and a lot of old ones, has had some kind of write stalls. Databases are very hard to create, and typically it takes 5-10 years to fix these problems if they aren't precluded from the start (which they rarely are). If you don't talk about write stalls in your database in great detail, I'm probably going to assume you are sweeping them under the rug or haven't gone looking for them. If you show me you've gone looking for them and either show that they're contained or that you've solved them, that's better.
- Independent evaluations. If you're a solution in the MySQL space, for example, you're not really serious about selling until you've hired Percona to do evaluations and write up the results. In other database communities, I'd look for some similar kind of objective benchmarking and evaluations.
- Operational documentation. How good is your documentation? How complete? When I was at Percona and we released XtraBackup, it was clearly a game-changer, except that there was no documentation for a long time, and this hurt adoption badly. Only a few people could understand how it worked. There were only a few people inside of Percona who knew how to set it up and operate it, for that matter. This is a serious problem for potential adopters. The docs need to explain important topics like daily operations, what the database is good at, what weak points it has, and how to accomplish a lot of common tasks with it. Riak's documentation is fantastic in this regard. So is MySQL's and PostgreSQL's.
- Conceptual documentation. How does it work, really? One database that I think has been hurt a little bit by not really explaining how it works is NuoDB, which used an analogy of a flock of birds all working together. It's a great analogy, but it needs to be used only to set up a frame of reference for a deep dive, rather than as a pat answer. (Perhaps somewhat unfairly, I'm writing this offline, and not looking to see if NuoDB has solved this issue I remember from years ago.) Another example was TokuDB's fractal tree indexes. For a long time it was difficult to understand exactly what fractal tree indexes really did. I can understand why, and I've been guilty of the same thing, but I wasn't selling a database. People really want to feel sure they understand how it works before they'll entrust it with their data, or even give it a deep look. Engineers, in particular, will need to be convinced that the database is architected to achieve its claimed benefits.
- High availability. Some databases are built for HA, and those need to have a really clear story around how they achieve it. Walk by the booth of most new database vendors at a conference and ask them how their automatic HA solution works, and they'll tell you it's elegantly architected for zero downtime and seamless replacement of failed nodes and so on. But as we know, these are really hard problems. Ask them about their competition, and they'll say "sure, they claim the same stuff, but our code actually works in failure scenarios, and theirs doesn't." They can't all be right.
- Monitoring. What does the database tell me about itself? What can I observe externally? Most new or emerging databases are basically black boxes. This makes them very hard to operate in real production scenarios. Most people building databases don't seem to know what a good set of monitoring capabilities even looks like. MemSQL is a notable exception, as is DataStax Enterprise. As an aside, the astonishing variety of open-source databases that are not monitorable in a useful way is why I founded VividCortex.
- Tooling. It can take a long time for a database's toolbox to become robust and sophisticated enough to really support most of the day-to-day development and operational duties. Good tools for supporting the trickier emergency scenarios often take much longer. (Witness the situation with MySQL HA tools after 20 years, for example.) Similarly, established databases often offer rich suites of tools for integrating with popular IDEs like Visual Studio, spreadsheets and BI tools, migration tools, bulk import and export, and the like.
- Client libraries. Connecting to a database from your language of choice, using idiomatic code in that language, is a big deal. When we adopted Kafka at VividCortex, it was tough for us because the client libraries at the time were basically only mature for Java users. Fortunately, Shopify had open-sourced their Kafka libraries for Go, but unfortunately they weren't mature yet.
- Third-party offerings. Sometimes people seem to think that third-party providers are exclusively the realm of open-source databases, where third parties are on equal footing with the parent company, but I don't think this is true. Both Microsoft and Oracle have enormous surrounding ecosystems of companies providing alternatives for practically everything you could wish, except for making source code changes to the database itself. If I have only one vendor to help me with consulting, support, and other professional services, it's a dubious proposition. Especially if it's a small team that might not have the resources to help me when I need it most.
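The ORDER BY with a LIMIT clause concern above is really about top-N selection: a mature executor keeps a small bounded heap while scanning instead of materializing a full sort. A minimal Python sketch of the difference between the two plans (the data set and sizes are invented for illustration):

```python
import heapq
import random

# Hypothetical "table": (score, row_id) tuples, order deterministic via seed.
random.seed(42)
rows = [(random.random(), i) for i in range(100_000)]

# Naive plan: sort every row, then keep the first 10 -- O(n log n),
# and the whole sorted result is materialized just to be thrown away.
naive_top10 = sorted(rows)[:10]

# Smarter plan: maintain only a 10-element heap during the scan --
# O(n log 10). This is the shape of optimization a mature executor
# applies to a query like "ORDER BY score LIMIT 10".
heap_top10 = heapq.nsmallest(10, rows)

assert naive_top10 == heap_top10  # same answer, far less work
```

Both plans return identical results; the difference only shows up in how much work the engine did to get there, which is exactly the kind of nuance the bullet above is about.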
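The data-protection bullet above can be made concrete with checksummed pages: store a checksum alongside each page and verify it on every read, so bit rot is detected instead of silently served. This is only a toy sketch; the 4 KB page size and CRC32 trailer are illustrative assumptions, not any particular engine's on-disk format:

```python
import zlib

PAGE_SIZE = 4096  # illustrative page size, not any specific engine's

def write_page(payload: bytes) -> bytes:
    """Pad the payload and append a CRC32 trailer as the last 4 bytes."""
    padded = payload.ljust(PAGE_SIZE - 4, b"\x00")
    return padded + zlib.crc32(padded).to_bytes(4, "big")

def read_page(page: bytes) -> bytes:
    """Recompute and verify the checksum before trusting the payload."""
    padded, trailer = page[:-4], page[-4:]
    if zlib.crc32(padded).to_bytes(4, "big") != trailer:
        raise IOError("page checksum mismatch: possible bit rot or partial write")
    return padded

page = write_page(b"hello, durable world")
assert read_page(page).rstrip(b"\x00") == b"hello, durable world"

# Flip a single bit, as a failing disk or bad RAM might.
corrupted = bytearray(page)
corrupted[100] ^= 0x01
try:
    read_page(bytes(corrupted))
except IOError as err:
    print(err)  # corruption is detected rather than silently returned
```

The interesting question in the bullet still stands: detection is the easy half; what the database does next (refuse, repair from a replica, quarantine the page) is where maturity shows.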
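The write-stalls bullet above is easy to see in a toy model: if a buffer flush periodically blocks the write path, average latency looks fine while the tail does not. The flush interval and cost below are made-up numbers chosen only to make the effect visible:

```python
# Toy model: each write costs 1 time unit, but every FLUSH_EVERY writes
# the engine blocks to flush its buffer, and one unlucky write pays for it.
FLUSH_EVERY = 1000   # illustrative constant
FLUSH_COST = 250     # illustrative constant

latencies = []
for i in range(1, 10_001):
    cost = 1
    if i % FLUSH_EVERY == 0:
        cost += FLUSH_COST  # the stall
    latencies.append(cost)

latencies.sort()
p50 = latencies[len(latencies) // 2]
p999 = latencies[int(len(latencies) * 0.999)]
print(p50, p999, max(latencies))  # prints: 1 251 251
```

The median is perfect and the p99.9 is 251x worse, which is why a vendor who only quotes average throughput, and never discusses stalls, has probably not gone looking for them.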
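On the monitoring bullet: the minimum useful capability is a set of internal counters an operator can read externally, in the spirit of MySQL's SHOW GLOBAL STATUS. A hypothetical toy engine (the class and counter names are invented for illustration):

```python
from collections import Counter

class ToyEngine:
    """A toy key-value store that exposes its internal counters,
    rather than behaving like a black box."""

    def __init__(self):
        self.status = Counter()  # externally readable, like SHOW GLOBAL STATUS
        self._data = {}

    def put(self, key, value):
        self._data[key] = value
        self.status["writes"] += 1

    def get(self, key):
        if key in self._data:
            self.status["read_hits"] += 1
            return self._data[key]
        self.status["read_misses"] += 1
        return None

engine = ToyEngine()
engine.put("a", 1)
engine.get("a")
engine.get("b")
print(dict(engine.status))  # prints: {'writes': 1, 'read_hits': 1, 'read_misses': 1}
```

Even counters this crude let an operator compute a hit ratio and a write rate over time; a database that exposes nothing forces its operators to guess.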
The most important thing when considering a database, though, is success stories. The world is different from a few decades ago, when the good databases were all proprietary and nobody knew how they did their magic, so proofs of concept were a key sales tactic. Now, most new databases are open source and the users either understand how they work, or rest easy in the knowledge that they can find out if they want. And most are adopted at a ratio of hundreds of non-paying users for each paying customer. Those non-paying users are a challenge for a company in many ways, but at least they're vouching for the solution.
Success stories and a community of users go together. If I can choose from a magical database that claims to solve all kinds of problems perfectly, versus one that has broad adoption and lots of discussions I can Google, I'm not going to take a hard look at the former. I want to read online about use cases, scaling challenges met and solved, sharp edges, scripts, tweaks, tips and tricks. I want a lot of Stack Exchange discussions and blog posts. I want to see people using the database for workloads that look similar to mine, as well as different workloads, and I want to hear what's good and bad about it. (Honest marketing helps a lot with this, by the way. If the company's own claims match bloggers' claims, a smaller corpus online is more credible as a result.)
These kinds of dynamics help explain why most of the fast-growing emerging databases are open source. Open source has an automatic advantage because of free users vouching for the product. Why would I ever consider a proof of concept to do a sales team a favor, at great cost and effort to myself, when I could use an alternative database that's open source and has an active community discussing the database? In this environment, the proof-of-concept selling model is basically obsolete for the mass market. It may still work for specialized applications where you'll sell a smaller number of very pricey deals, but it doesn't work in the market of which I'm a part.
In fact, I've never responded positively to an invitation to set up a PoC for a vendor (or even to provide data for them to do it). It's automatically above my threshold of effort. I know that no matter what, it's going to involve a huge amount of time and effort from me or my teams.
There's another edge case: databases that are built in-house at a specific company and then are kicked out of the nest, so to speak. This is how Cassandra got started, and Kafka too. But the difference between a database that works internally for a company (no matter how well it works for them) and one that's ready for mass adoption is huge, and you can see that easily in both of those examples. I suspect few people have that experience to point to, but probably a lot of readers have released some nifty code sample as open source and seen how different it is to create an internal-use library, as opposed to one that'll be adopted by thousands or more people.
Remarkably few people at database companies seem to understand the things I've written about above. The ones who do, and I've named some of them, might have great success as a result. The companies who aren't run by people who have actually operated databases in their target markets recently will probably have a much harder time of it.
I don't make much time to coach companies on how they should approach me. It's not my problem, and I feel no guilt saying no without explanation. (One of my favorite phrases is "No is a complete sentence.") But enough companies have asked me, and I have enough friends at these companies, that I thought it would be helpful to write this up. Hopefully this serves its intended purpose and doesn't hurt any feelings. Please use the comments to let me know if I can improve this post.
Published at DZone with permission of Baron Schwartz, DZone MVB. See the original article here.