DZone Research: The Changing Database Ecosystem


The biggest changes in the database ecosystem have been the cloud and the proliferation of databases to handle very specific business needs.


To gather insights on the current and future state of the database ecosystem, we talked to IT executives from 22 companies about how their clients are using databases today and how they see use and solutions changing in the future.

We asked them, "How have databases changed in the past year or two?" Here's what they told us:


  • Maturation of the non-relational data store, and a war among the big cloud vendors over the scale, performance, and capabilities of their database platforms. As people move to the cloud to take advantage of cloud platforms, the management problem gets worse, and there is a greater need for automated rather than manual processes. Tooling now has to support many different database platforms (the polyglot environment), with more releases across more database types, allowing database professionals to scale across the cloud. PaaS offerings from companies like Pivotal make it easier for developers to spin up their own databases and to get data for testing, in any environment where a database schema needs to be managed. 
  • Changes are being driven by digital transformation, and cloud is part of that. Companies are now looking at containers to realize it, deploying databases within a container platform. Container-native databases are well suited to this need: they are more agile, faster, and scale out, and they ease deployment, use, configuration, and installation. 
  • The realization that the cloud is the endpoint. Downloading a database is going the way of the buggy whip: developers want to use a database, they don't want to run one. 
  • We’ve seen more and more companies migrating to the cloud, and therefore tasked with onboarding their databases there. This is often a long and complex process, which is why we have focused on helping customers migrate faster, with less risk, by packaging data into lightweight data pods, decoupling data from legacy infrastructure, and eliminating the environment constraints that slow migrations. Data can be masked on premises before it’s migrated to a private or public cloud, synchronized to any point in time to test dependencies across applications, and quickly recovered in the event of errors. 
  • There have been two big developments. The first is the need for companies to comply with new data protection legislation, which has made many companies change the way they handle data, as covered in question #3. The second is the rise of the cloud. The recent public preview of Azure SQL Database Managed Instance, for example, marks a significant step-change in Microsoft’s managed database service. It’s important because it elevates the simplified database-scoped programming model used in the first two flavors of Azure SQL Database (Single and Elastic Pool) to the instance level. Managed Instance shifts the balance to the cloud by providing near 100% feature compatibility with on-premises SQL Server, while also offering the benefits of a cloud service like built-in high availability, automated patching, dynamic scalability, and backup management with point-in-time recovery. Companies can migrate existing on-premises SQL Server workloads to the cloud and retain the features they’re accustomed to, while also gaining many of the manageability benefits of a PaaS offering. To remain in step, companies like Redgate need to offer full support for cloud services in the tools that have traditionally been used for on-premises databases. That way, migrating to the cloud is easier because the same processes and working practices can be retained, wherever the data resides. 
  • The most visible changes are continuations of trends that are a decade old, specifically cloud, open source, and NoSQL. The most interesting changes to me, though, are the increase in unified data stores (such as using relational databases instead of Hadoop for data lakes or using multi-model databases instead of multiple specialty data stores), and the movement of processing closer to the data (such as embedded data science or AI facilities).


  • We're seeing more database platforms making it easier to work with JSON, and more of them incorporating Python. Elastic scalability and the separation of compute and storage, particularly in analytics, let developers start a small project and grow it, with object stores appearing as an access layer. 
  • NoSQL is moving back to SQL, to relational databases with the SQL query language. SQL is definitely making a comeback: as data gets more fragmented, a common language is needed, and SQL is the lingua franca for data analysis. This has led to a resurgence of relational databases like PostgreSQL (people are running from Oracle) and to a need for relational databases that handle time-series data. Another trend is the endless march toward more real-time systems: operationalizing data, with the edge as part of it, to make better decisions faster, and a movement from data warehouses and data lakes back to databases. The best data here is time-series data. 
  • We used to talk only about relational data. Now there are documents, bitmaps, NoSQL: non-relational data, with the explosion of social media creating ever more of it.
  • The proliferation of different databases, in particular NoSQL. If you're moving to Azure or AWS, there are databases that go with them, from managed NoSQL services to relational engines like MariaDB.
  • Growth in the introduction of graph products and in users wanting to use them. As what used to be a specialty vendor, we see a lot of new customers we wouldn't have expected in the past. People see needs in internal operations or commerce; everyone has a need for recommendations, and people want to understand their business better.
  • Graph databases continue to be the fastest-growing category in databases. We're having conversations with executives looking at graph databases to solve a wider variety of important problems. 
  • Two things. The first is the move from lots of little applications to massive applications in all industries. A car company used to have a few thousand suppliers and 10,000 dealers; it did not directly engage with the end customer. Today there’s an app for the car on owners' phones, and car companies have to engage with a consumer base hundreds of millions of times; now Ford looks like Netflix. This holds true for every industry, with apps for tractors, hotels, and so on: massive apps to directly engage the customer, with very different data needs. This is why the monolithic approach is no longer working for apps. The second is the diversity of data types, from tables to graphs, and the variety of purpose-built databases to fit those data types and processes most efficiently.
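Two threads above, richer JSON support in database platforms and SQL's return as the lingua franca for analysis, can be sketched in a few lines. The snippet below is a minimal illustration using Python's bundled SQLite (its JSON1 functions require a reasonably recent SQLite build); PostgreSQL's `jsonb` type and arrow operators express the same idea. The table and field names are invented for the example.

```python
import json
import sqlite3

# In-memory SQLite stands in for any SQL engine with JSON support.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, body TEXT)")

# Semi-structured documents: no column was ever declared for these fields.
docs = [
    {"user": "ana", "action": "login", "ms": 42},
    {"user": "ben", "action": "login", "ms": 97},
    {"user": "ana", "action": "purchase", "ms": 130},
]
conn.executemany(
    "INSERT INTO events (body) VALUES (?)",
    [(json.dumps(d),) for d in docs],
)

# Plain SQL filters and aggregates over the JSON fields directly.
rows = conn.execute("""
    SELECT json_extract(body, '$.user') AS user,
           COUNT(*) AS n
    FROM events
    WHERE json_extract(body, '$.action') = 'login'
    GROUP BY user
    ORDER BY user
""").fetchall()
print(rows)  # [('ana', 1), ('ben', 1)]
```

The point is the combination: document-style flexibility on write, with the familiar relational query surface on read.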


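The recommendation use case driving graph adoption can be sketched without any graph engine at all: a minimal, illustrative two-hop traversal over plain Python dicts, the classic "customers who bought X also bought Y" pattern. In a product like Neo4j or TigerGraph this would be a short declarative query; the customers and products below are invented for the example.

```python
from collections import defaultdict

# Toy edge list: (customer) -[BOUGHT]-> (product)
purchases = [
    ("ana", "laptop"), ("ana", "mouse"),
    ("ben", "laptop"), ("ben", "keyboard"),
    ("cy",  "mouse"),  ("cy",  "monitor"),
]

bought_by = defaultdict(set)   # product  -> customers who bought it
bought = defaultdict(set)      # customer -> products they bought
for customer, product in purchases:
    bought_by[product].add(customer)
    bought[customer].add(product)

def recommend(product):
    """Two-hop traversal: product -> co-buyers -> their other products."""
    recs = set()
    for customer in bought_by[product]:
        recs |= bought[customer]
    recs.discard(product)  # don't recommend the product itself
    return sorted(recs)

print(recommend("laptop"))  # ['keyboard', 'mouse']
```

A dedicated graph database earns its keep when the traversals go many hops deep over billions of edges; the shape of the query, though, is exactly this.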
  • Multi-model means being able to do different types of processing in the same database. Document databases have emerged because they offer flexibility and respond rapidly to changing situations: the document can evolve without pausing to renegotiate a data model, adapting to changing environments on the fly, in real time. 
  • We're starting to see more activity around multi-model data stores, though still more talk than adoption, and companies relooking at core architectural decisions around their NoSQL implementations. Technologies are being adopted to improve server performance; Cassandra, for example, now has a number of providers, YugaByte among them, alongside the open-source project. We're also starting to see adoption of streaming platforms in the Kafka space: people are seeing the usefulness of a service bus or a streaming architecture and deploying it alongside a lot of different databases.
  • Looking at the GPU world, we've started to see interesting innovation from the likes of Brytlyt and PG-Strom. IBM has added GPU processing to its database, and other data processing platforms like Spark offer GPU acceleration. A powerful, high-throughput device in the data center makes sense when answering complex questions: as processing gets more complex, you need something more powerful. Traditionally this was distributed, with Teradata, and Oracle with Exadata, but the GPU allows it to be done in one node and scaled out when needed. In the serverless world, you load a file, do the analysis, and then dump it. 
  • Recently both relational and document databases have started to emulate the best features of each other. For example, Postgres has put a lot of work into its JSON data types to compete with document databases like MongoDB. Likewise, MongoDB 4.0 now includes multi-document transactions and ACID data integrity guarantees, which will make it much more attractive to developers familiar with relational databases such as Postgres or MySQL.
  • We're seeing more solutions trying to support AI capabilities, with ML/DL adjacent to the database: running ML models on data, and converging toward running models on ever larger datasets. More dashboarding and BI is also being consolidated into the database world.
  • 1) The market (enterprise end users and even consumers) is demanding low-latency apps. Latency can result not just in loss of revenue, but also in loss of market share to the competition and of customer loyalty and satisfaction. Databases have adapted to deliver low latency. 2) Data is streaming in at higher velocity than ever before. Databases are now ingesting this fast data and processing it at very high throughput, in milliseconds. 3) Business logic (machine learning) is being operationalized. Machine learning models are expensive to build and maintain, and merely using them to analyze past performance is simply not enough. These models are being deployed in-database for real-time analysis and to drive desired business outcomes.
  • When the NoSQL movement started 7-8 years ago, there was a vast divide between what traditional database systems offered and what NoSQL databases offered. While the new databases solved the scale and performance problems, they lacked industrial strength and were not enterprise-grade. These issues are being addressed, and more and more business-critical data now sits in NoSQL systems, which are getting battle-tested under production workloads across every industry imaginable. This has made our engagement database that much more robust and dependable for developers to stand up increasingly complex applications and deliver significant value to their customers.
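The convergence described above, with MongoDB 4.0 adding multi-document transactions and ACID guarantees, is easiest to see from the relational side. Below is a minimal sketch of an atomic multi-row transfer using Python's `sqlite3` standing in for any ACID store; MongoDB's own API expresses the same pattern with sessions and `start_transaction`. The schema and the business rule are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move funds atomically: both updates commit, or neither does."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, src))
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst))
            # A business rule that can fail mid-transaction
            (bal,) = conn.execute(
                "SELECT balance FROM accounts WHERE name = ?",
                (src,)).fetchone()
            if bal < 0:
                raise ValueError("insufficient funds")
    except ValueError:
        pass  # the rollback already restored both rows

transfer(conn, "alice", "bob", 30)   # succeeds: both rows updated
transfer(conn, "alice", "bob", 500)  # fails: both updates rolled back
print(dict(conn.execute("SELECT name, balance FROM accounts")))
```

This all-or-nothing behavior across multiple records is exactly what document stores historically could not promise beyond a single document, and what makes the newer releases attractive to developers coming from Postgres or MySQL.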

Here’s who we talked to:

  • Jim Manias, Vice President, Advanced Systems Concepts, Inc.
  • Tony Petrossian, Director, Engineering, Amazon Web Services
  • Dan Potter, V.P. Product Management and Marketing, Attunity
  • Ravi Mayuram, SVP of Engineering and CTO, Couchbase
  • Patrick McFadin, V.P. Developer Relations, DataStax
  • Sanjay Challa, Senior Product Marketing Manager, Datical
  • Matthew Yeh, Director of Product Marketing, Delphix
  • OJ Ngo, CTO, DH2i
  • Navdeep Sidhu, Head of Product Marketing, InfluxData
  • Ben Bromhead, CTO and Co-founder, Instaclustr
  • Jeff Fried, Director of Product Management, InterSystems
  • Dipti Borkar, Vice President Product Marketing, Kinetica
  • Jack Norris, V.P. Data and Applications, MapR
  • Will Shulman, CEO, mLab
  • Philip Rathle, V.P. of Products, Neo4j
  • Ariff Kasam, V.P. Products, and Josh Verrill, CMO, NuoDB
  • Simon Galbraith, CEO and Co-founder, Redgate Software
  • David Leichner, CMO and Arnon Shimoni, Product Marketing Manager, SQream
  • Todd Blashka, COO and Victor Lee, Director of Product Management, TigerGraph
  • Mike Freedman, CTO and Co-founder, and Ajay Kulkarni, CEO and Co-founder, TimescaleDB
  • Chai Bhat, Director of Product Marketing, VoltDB
  • Neil Barton, CTO, WhereScape

    Opinions expressed by DZone contributors are their own.
