
DZone Research: Database Futures


Let's look at what IT executives said about how their clients are using databases today and how they see solutions changing in the future.


To gather insights on the current and future state of the database ecosystem, we talked to IT executives from 22 companies about how their clients are using databases today and how they see use and solutions changing in the future.

We asked them, "Where do you think the biggest opportunities are in the evolution of databases?" Here's what they told us:

Integration

  • We're seeing the beginning of the integration of JSON, Python, and R, making them easy to use within the database itself. The challenge is integrating multiple technologies: how do database engines provide a fabric to stitch them together? We see aspects of this with Athena versus Spark or S3. These are the new analytics and processing capabilities customers want to use. 
  • Integration with Spark processing to leverage the information contained in the database, integration of search indices, and the continued importance of SQL. Multiple integration points let you leverage the processing from many angles and perspectives. 
  • Graph continues to grow. Integration will become more important because it's not possible to create a single technology that addresses all problems; you need specialization in your database. The different kinds of requirements for data lead to tensions in database architectures for solving different kinds of problems. 
  • We do believe purpose-built databases offer better efficiency, performance, and cost-effectiveness, along with better integration between the various databases. Glue, EMR, and data lake services help with this.
  • Fewer choices, and more commonality between the data and the different use cases. Being able to use data across several models, without moving it between data stores, means less complexity. 
  • Organizations are struggling to keep up with point solutions overwhelming their applications and businesses. They’re searching for databases that can meet a wider range of demands, and the platforms that satisfy these requirements will be well sought after. As organizations increasingly transition to digital technologies, more and more data is generated by new infrastructures, devices, and applications. The database plays a heavy role in supporting the growth of these digital transformations and in handling the massive amounts of data that move through an organization. Traditional databases aren’t well-suited to meet today’s needs across digital technologies, and we’ll see databases continue to evolve to be able to manage and maintain the myriad of interactions that take place throughout the customer journey. There are tremendous opportunities for new databases built today; in fact, there has been a rejuvenation of the database industry in the last few years, and exciting new innovations in distributed databases are being brought to market now.
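The first point above, integrating JSON and a scripting language within the database itself, can be sketched with Python's standard-library sqlite3 module: registering a Python function makes it callable from SQL, a toy version of running language runtimes inside the engine. The table and function names here are invented for illustration and are not from any product mentioned in this article.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO events (payload) VALUES (?)",
    [(json.dumps({"user": "a", "ms": 120}),),
     (json.dumps({"user": "b", "ms": 340}),)],
)

# Register a Python function so SQL queries can reach into JSON payloads
def json_field(payload, key):
    return json.loads(payload).get(key)

conn.create_function("json_field", 2, json_field)

# The SQL engine now calls back into Python for each row it scans
rows = conn.execute(
    "SELECT json_field(payload, 'user') FROM events "
    "WHERE json_field(payload, 'ms') > 200"
).fetchall()
print(rows)  # [('b',)]
```

Production engines do this natively (PostgreSQL's JSON operators, PL/Python, and similar), but the shape of the integration is the same: the query language gains access to both the document structure and the host language.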

Real-Time Analytics

  • 1) There is a lot of low-hanging fruit that does not need a complex AI/ML solution. Build something that is usable, and don’t lose data. PostgreSQL is reliable from day one and has JSON support. 2) All data is time-series data. Building a purpose-built, proven time-series database takes time; databases need years to be battle-hardened. 
  • As the amount of data keeps growing, the ability to have operational and analytical capabilities in a single system is critical, as is being able to run in real time. The idea of self-learning trends and analytics, and understanding what’s going on in the database, is critical. There will be more intelligence in the database in the next few years. We’ll have real-time BI and trends versus manual analysis. This helps customers understand and analyze faster. Don’t re-educate if you don’t need to. 
  • Data migrations and transformations, more technology around data mining. Technologies get better. Queries to get data out will be faster from relational and non-relational databases. 
  • The proliferation of purpose-built databases is great for us and for customers. Oracle wasn’t built to do everything. Purpose-built databases on cloud platforms let you use just what you need, and provisioning a data set for an analytics use case is having a profound effect on the ability to do more modern analytics. ML needs data assembled in a certain way to work, and the cloud provides an excellent environment to do this. 
  • The space has been evolving quite a bit, and we’re seeing the complexity of analysis increase. The compute problem still stands; Nvidia has standardized the GPU, and we’ll see more in that area. Bringing AI and models to the data, rather than extracting the data, will change things. Productizing AI means databases taking ML into production and deploying it at large scale. 
  • 1) In-memory technology — as memory gets cheaper and most modern apps demand low latency, in-memory technology will be increasingly important in the future. 2) IoT — as sensor data starts coming in from a more connected world at a faster pace than ever before (enabled by 5G), databases are going to need to instantly analyze and act on sensor data. 3) 5G — 5G is set to become a reality in the next few years, and with it will come a whole new world for tech. The network speeds and opportunity that 5G brings are unlike anything database technology has ever seen before. 4) Operationalized machine learning/AI — machine learning models are built with considerable time and expense, but by not implementing them in production, organizations are missing out on deriving maximum value from the models. 
  • It’s probably the increased desire and appetite for finding different ways to look at and analyze data. Whether it’s machine learning with Splunk, finding relationships between data, or the extremely fast searches provided by Elasticsearch, new kinds of databases are emerging that enable companies to extract more value from their data. Importantly, however, this is all part of the polyglot environments mentioned earlier that are now becoming the norm. New entrants like this aren’t taking the place of existing databases — they’re adding to them. Oracle, MySQL, and SQL Server have remained firmly at the top of the DB-Engines Ranking table for the last five years, for example, even as the database market becomes more crowded. The point here is that as databases evolve and are introduced to store, analyze and use data in different ways, they add to the existing market rather than supersede it.
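Several of the points above — in-memory technology, IoT sensor streams, and real-time aggregation — boil down to windowed computation over incoming readings. A minimal, hypothetical sketch in plain Python of the kind of rolling aggregate a time-series engine maintains in memory (the class and window size are invented for illustration):

```python
from collections import deque

class SlidingWindow:
    """Keep the last `size` readings and expose a rolling average,
    a toy version of the windowed aggregates a time-series engine runs."""

    def __init__(self, size):
        # deque with maxlen silently evicts the oldest reading on overflow
        self.readings = deque(maxlen=size)

    def add(self, value):
        self.readings.append(value)
        return self.average()

    def average(self):
        return sum(self.readings) / len(self.readings)

# Feed a stream of sensor readings; each add() returns the current window average
window = SlidingWindow(3)
for reading in [10.0, 20.0, 30.0, 40.0]:
    latest = window.add(reading)
print(latest)  # 30.0 -- the average of the last three readings (20, 30, 40)
```

Real engines push this further with incremental updates and columnar storage, but the core contract — act on each reading as it arrives rather than batch-loading later — is the same one the IoT and 5G points describe.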

Other

  • There will probably be a second generation of NoSQL databases, and one or two may find a place. At the same time, the open source projects they’re trying to compete with on performance will catch up. The move to managed services at the enterprise level will continue, particularly on the data side. More and more, we will see managed services of whatever flavor and provider.
  • 1) Some consolidation of the query language. We are similar to SQL and other generic procedural languages, with built-in parallelism. 2) Some weeding out of who can handle massive data sizes with sufficiently fast answers. 3) Having sufficient flexibility in architecture so the customer is not trapped in a particular system, enabling migration and growth.
  • Containers are revolutionizing how engineers develop and deploy applications. We believe in the future developers will want to use a container orchestration framework such as Kubernetes to manage all the components of their stack, including their databases. Kubernetes would allow containerized databases to be self-healing and easier to scale. The support for stateful services such as databases in Kubernetes is in its adolescence, but huge progress is being made.
  • The enterprise hasn’t yet fully recognized the true importance of the data in databases within the context of the DevOps methodology. Technologies can bring speed and automation to data in databases in the same way that speed and automation have been brought to other elements. For example, we now have the ability to version control database data, in much the same way that developers version control code. And when developers and testers get self-service control of the data in databases, it unlocks massive productivity and innovation speed-ups.
  • There is a big opportunity to move computing closer to the data. As data volume grows, data-intensive applications are moving increasing amounts of data around, which is inefficient and also error-prone. Parallel distributed patterns (such as MapReduce, Dryad, DataFlow, Flink, and many more) have taken off precisely because they work close to the data. But most databases are stuck in the mindset of being a layer in an N-tier architecture. There’s a big opportunity to bring computing into the data fabric within databases while keeping things easy for developers. Another big opportunity is flexible data models. We’re seeing an increasing variety of database types because stuffing the wide variety of data patterns into relational tables just runs out of steam as you scale. You have to either organize your data in a way that is not natural or use a variety of different specialized databases and pick the one that fits your data or algorithm the best. But the same data can be used in many different ways. There’s an opportunity to use the same data organized in multiple different ways — looking like relational tables to the SQL developer while at the same time looking like graphs to the Gremlin developer and a document store to the JavaScript developer.
  • Time series is the fastest-growing database engine segment, as reported by DB-Engines. As a leader in the time-series database segment, we continue to drive innovation and meet customer needs with solutions for both on-premises and cloud deployments.
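The multi-model idea raised above — the same data looking like relational tables to one developer, a graph to another, and documents to a third — can be sketched in a few lines of Python. The record set and field names are invented for illustration; a real multi-model engine would expose these views through its query languages rather than in application code.

```python
import json

# One underlying data set: who follows whom
follows = [("alice", "bob"), ("bob", "carol"), ("alice", "carol")]

# Relational view: rows in a two-column table
rows = [{"follower": f, "followee": t} for f, t in follows]

# Graph view: adjacency list keyed by node
graph = {}
for f, t in follows:
    graph.setdefault(f, []).append(t)

# Document view: one JSON document per user
users = {n for edge in follows for n in edge}
docs = {u: json.dumps({"user": u, "follows": graph.get(u, [])})
        for u in users}

print(graph["alice"])  # ['bob', 'carol']
```

The point of the sketch is that nothing about the data itself is relational, graph-shaped, or document-shaped; those are projections, which is exactly the opportunity the quote describes.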

Here’s who we talked to:

  • Jim Manias, Vice President, Advanced Systems Concepts, Inc.
  • Tony Petrossian, Director, Engineering, Amazon Web Services
  • Dan Potter, V.P. Product Management and Marketing, Attunity
  • Ravi Mayuram, SVP of Engineering and CTO, Couchbase
  • Patrick McFadin, V.P. Developer Relations, DataStax
  • Sanjay Challa, Senior Product Marketing Manager, Datical
  • Matthew Yeh, Director of Product Marketing, Delphix
  • OJ Ngo, CTO, DH2i
  • Navdeep Sidhu, Head of Product Marketing, InfluxData
  • Ben Bromhead, CTO and Co-founder, Instaclustr
  • Jeff Fried, Director of Product Management, InterSystems
  • Dipti Borkar, Vice President Product Marketing, Kinetica
  • Jack Norris, V.P. Data and Applications, MapR
  • Will Shulman, CEO, mLab
  • Philip Rathle, V.P. of Products, Neo4j
  • Ariff Kasam, V.P. Products, and Josh Verrill, CMO, NuoDB
  • Simon Galbraith, CEO and Co-founder, Redgate Software
  • David Leichner, CMO and Arnon Shimoni, Product Marketing Manager, SQream
  • Todd Blashka, COO and Victor Lee, Director of Product Management, TigerGraph
  • Mike Freedman, CTO and Co-founder, and Ajay Kulkarni, CEO and Co-founder, TimescaleDB
  • Chai Bhat, Director of Product Marketing, VoltDB
  • Neil Barton, CTO, WhereScape


