The Tractor Beam of the Database in an API World
In spite of all the advances in APIs, databases still seem to hold the greatest pull over what is and isn't possible with them. Take a look at why that is.
I’m an old database person. I’ve been working with databases since my first job in 1987: COBOL, FoxPro, SQL Server, MySQL. I have had a production database in my charge accessible via the web since 1998. I understand how databases are the center of gravity when it comes to data, something that hasn’t changed in an API-driven world. This will make microservices in a containerized landscape much harder than some developers want to admit. The tractor beam of the database will not give up control of data so easily, whether because of technical limitations, business constraints, or political gravity.
Databases are all about the storage of and access to data. APIs are about access to data. The storage, and the control that surrounds it, is what creates the tractor beam. Most of the reasons for controlling the storage of data are not intended to do harm: security, privacy, value, quality, availability. There are many reasons stewards of data want to control who can access it and what they can do with it. However, once control over data is established, I find it often morphs and evolves in ways that eventually become harmful to meaningful and beneficial access to data. That access is usually the goal of doing APIs, but it is often seen as a threat to the mission of data stewards, resulting in a tractor beam that API-related projects find themselves caught in and find difficult to ever break free of.
The most obvious representation of this tractor beam is that all data retrieved via an API usually comes from a central database, and all data generated or posted via an API ends up in one as well. The central database always has an appetite for more data, whether scaled horizontally or vertically. It is always difficult to break off subsets of data into separate API-driven projects, or to prevent newly established ones from being pulled in and made part of existing database operations. Whether for technical, business, or political reasons, many projects born outside this tractor beam will eventually be pulled into the orbit of legacy data operations. Keeping projects decoupled will always be difficult when your central databases have so much pull over how data is stored and accessed. This isn’t just a technical decoupling; it is a cultural one, which will be much more difficult to break from.
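To make the pattern concrete, here is a minimal sketch (hypothetical names, using Python's standard-library sqlite3 as a stand-in for the central store) of how both halves of an API, the reads and the writes, typically funnel through the same database:

```python
import sqlite3

# One central database: every API read and write flows through it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, body TEXT)")

def post_record(body):
    """API write path: data posted via the API lands in the central store."""
    cur = conn.execute("INSERT INTO records (body) VALUES (?)", (body,))
    conn.commit()
    return cur.lastrowid

def get_record(record_id):
    """API read path: data retrieved via the API comes from the same store."""
    row = conn.execute(
        "SELECT body FROM records WHERE id = ?", (record_id,)
    ).fetchone()
    return row[0] if row else None

# Both endpoints depend on the single connection above; splitting either
# into its own service means negotiating that shared dependency away.
record_id = post_record("hello")
latest = get_record(record_id)
```

The coupling here is the point: neither function can move to a separate project without first breaking its dependency on the shared `conn`, which is the technical face of the tractor beam described above.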
Honestly, if your database is over two to three years old and enjoys any amount of complexity, budget scope, and dependency across your organization, I doubt you’ll ever be able to decouple it. I see folks creating new data lakes, which act as reservoirs for any and all types of data gathered and generated across operations. These lakes, if they possess an API layer, provide valuable opportunities for API innovators to develop new and interesting ways of putting data to work. However, I still think the massive data warehouse and database will look to consume and integrate anything structured and meaningful that evolves on their shores. Industrial-grade data operations will simply industrialize any smaller utilities that emerge along the fringes of large organizations. Power structures have long developed around central data stores, and no amount of decoupling, decentralizing, or blockchaining will change this anytime soon. You can see this with the cloud, which was meant to disrupt this arrangement but just moved it from your data center to someone else’s and allowed it to grow at a faster rate.
I feel like we API folks have been granted ODBC and JDBC leases for our API plantations, but we will rarely ever decouple ourselves from the mother ship. No matter what the technology whispers in our ears about what is possible, the business value and political control of established databases will always dictate what is and is not possible. I feel this is one reason the big database platforms have waited so long to provide native API features, and why next-generation data streaming solutions rarely have simple, intuitive API layers. I think we will continue to see the tractor beam of database culture remain aggressive (as well as passive-aggressive) toward anything API, trumping the access possibilities APIs bring to the table with outdated beliefs about power and control rooted in how we store our data. These folks rarely understand that they can be just as controlling with APIs; instead, they see only the promises of access that APIs afford, treat them as a threat, and refuse to turn down the volume on the tractor beam enough for anything to flourish.
Published at DZone with permission of Kin Lane, DZone MVB. See the original article here.