Database 2018 Surprises and 2019 Predictions
Cloud databases continue growth and adoption.
Given the speed with which technology changes, we thought it would be interesting to ask IT executives to share their thoughts on the biggest surprises in 2018 and their predictions for 2019. Here's what they told us about databases:
Databases contain organizations’ most sensitive data, but companies are still leaving their databases exposed. Given the number and frequency of data breaches, it’s surprising that companies continue to assume their systems are secure and don’t bother with encryption or other data-centric security.
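The data-centric security the quote describes can be as simple as encrypting sensitive fields before they ever reach the database, so a leaked dump exposes only ciphertext. Below is a minimal sketch using the third-party `cryptography` package's Fernet API (authenticated symmetric encryption); the field names are illustrative, and key management is deliberately out of scope — in practice the key would come from a KMS or vault, not application code.

```python
# Illustrative field-level encryption: values are encrypted before
# INSERT/UPDATE and decrypted after SELECT, so the database only
# ever stores ciphertext. Requires the `cryptography` package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production: fetch from a secrets manager
fernet = Fernet(key)

def encrypt_field(value: str) -> bytes:
    """Encrypt a single column value before it is written."""
    return fernet.encrypt(value.encode("utf-8"))

def decrypt_field(token: bytes) -> str:
    """Decrypt a column value after it is read back."""
    return fernet.decrypt(token).decode("utf-8")

ciphertext = encrypt_field("4111-1111-1111-1111")
plaintext = decrypt_field(ciphertext)
```

A dump of the stored `ciphertext` is opaque without the key, which is the property that limits the blast radius of the breaches the quote mentions.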
The biggest surprise was MongoDB and Elastic hitting $4B market caps in the public markets, and Snowflake reaching roughly the same valuation in the private markets. This strongly validates the need for specialized databases that take on discrete, valuable parts of the data problem: an excellent indicator that we are going to see successful, large companies built around this premise.
Microsoft fully embraced big data clusters by incorporating Apache Spark and the Hadoop Distributed File System to meet the need for large storage clusters. Microsoft also extended Windows 10 to run a Linux subsystem as it continued to support more and more Linux platform functionality. In addition, big players like Oracle and Microsoft enhanced their autonomous database offerings as cloud adoption continued to ramp up. And perhaps most exciting, the innovation found in NoSQL databases is increasingly being absorbed into mainstream relational databases, making them relevant again.
The exponential growth of cloud databases. Cost and inertia had been the major barriers (why break something that is working?), yet organizations have clearly become more confident in the security of their cloud-delivered services.
Big data is dead. Data is a business driver, so companies amass as much of it as they can and put it in as many systems as they can. Despite new regulations (like GDPR and the California Consumer Privacy Act), companies won’t take an overwhelmingly different approach to securing their databases in 2019.
Given recent shifts in how applications are deployed and integrated across the organization, the lines of business will find themselves increasingly empowered to make significant decisions around IT spend and how IT is architected to support specific business needs. I predict the lines of business will either choose SaaS-based IT stacks, or they will develop their own solutions through shadow IT such as citizen data scientists. This empowerment will enable LOB decision makers to explore their imaginations, consider creative alternatives, and find new ways to drive greater business value. At the same time, this will represent a fundamental shift in the role IT plays within the organization. A few factors driving this prediction include the universal adoption of SaaS and cloud-based applications that can integrate with on-premises databases, decreasing barriers to new task-specific ISVs that can directly impact LOB productivity by addressing specific pain points, and the rise of DevOps.
Some people will start to realize that big RAM, which in recent years has grown cheaper and bigger faster than the data it's called on to process, can offer significantly better TCO and real-time performance for certain kinds of complex data problems than the commodity scale-out model that has become popular in recent years.
Because Kubernetes and Apache Cassandra are both incredibly popular technologies, developers are increasingly looking to use them in tandem, a trend that will rise in 2019. The tricky part has been that while Cassandra databases on Kubernetes are relatively simple to start with, custom scripts (or a specially designed operator) are required to overcome limitations in Kubernetes’ understanding of the database. Expect Database-as-a-Service (DBaaS) providers to put their expertise toward making it demonstrably easier for developers to unlock the full advantages of Kubernetes’ container orchestration with popular high-availability, high-scalability databases like Cassandra.
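The operator approach mentioned above replaces hand-written StatefulSets and scripts with a declarative custom resource. As an illustration only — the field names below are modeled on the open-source DataStax cass-operator's `CassandraDatacenter` resource and may differ across operators and versions — a three-node datacenter becomes a short manifest:

```yaml
# Illustrative CassandraDatacenter custom resource (fields follow the
# DataStax cass-operator; other operators differ). The operator turns
# this declaration into the StatefulSets, seed management, and rolling
# restarts that plain Kubernetes would need custom scripts for.
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1
spec:
  clusterName: example-cluster
  serverType: cassandra
  serverVersion: "3.11.7"
  size: 3                      # desired Cassandra nodes
  storageConfig:
    cassandraDataVolumeClaimSpec:
      storageClassName: standard
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
```

Applied with `kubectl apply -f`, a manifest like this lets the operator handle the database-aware lifecycle work that Kubernetes alone does not understand.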
2018 showed a significant shift in the database market as NoSQL databases such as MongoDB became the new darling of the development world. 2019 will show a stabilization of the market as the shine comes off the new kids on the block and developers once again focus on the best tool for the job.
There will be a growing interest in using persistent and serverless query environments rather than transient environments. As a result, newer SQL engines like Amazon Athena, Google BigQuery, Cloudera Impala, and Apache Presto will continue to see rising adoption. Despite popular myths, these newer SQL engines have shown they can offer the same reliability, scalability, and performance as transient execution environments.
In 2019, companies will increasingly rely on high-performing in-memory multi-model databases that can support all data types and use cases, especially as adoption of microservices increases. Taking a microservices approach can offer a slew of benefits, but only if companies leverage appropriate databases so that storing, managing, and sharing data across services happens with near-zero latency.
The role of the database administrator is changing. As autonomous databases, AI/machine learning and data analytics become more prevalent, DBAs will need to educate themselves on these technologies to remain relevant. Spending on analytics tools and personnel will grow at a greater rate as the adoption of big data in organizations continues. In addition, AI and the explosion of data in multiple database platforms will require increased investment in data governance within the enterprise. We will also continue to see adoption of cloud-based NoSQL, graph and analytics database platforms to support the increased adoption of AI tools and machine learning by organizations.
Cloud databases (SQL, IDS) will experience massive growth, both those from database vendors and those offered by the cloud providers.
InfluxDB will become the dominant time-series database for both service metrics and IoT, especially on AWS.
Oracle’s defections to scale-out SQL will reach the point where the company includes a risk factor in its quarterly disclosures. Additionally, we will see massive growth in cloud-based SQL data platforms.
A change in databases might make an appearance in the new year, making DevOps engineers' lives easier. Auto-recommendation and auto-tuning features, perhaps powered by machine learning, will remove the challenges of capacity planning and performance optimization that plague DevOps engineers today.
The big challenge for backup vendors is how to minimize the impact frequent backups can have on production. To solve this, the absolute fundamentals of backup will have to change. Organizations want real convergence of solutions, with fast recovery and more granularity, without a negative impact on compute or network bandwidth. In 2019, whether businesses want to roll back seven seconds or seven days, their backup system should let them do so quickly and easily, minimizing the disruption of data loss from any cause.
In 2018 we saw hardware vendors trying to converge the software layer into their product offering, but all they’ve really created is a new era of vendor lock-in — a hyper-lock-in in many ways. In 2019 organizations will rethink what converged solutions mean. As IT professionals increasingly look for out-of-the-box ready solutions to simplify operations, we’ll see technology vendors work together to bring more vendor-agnostic, comprehensive converged systems to market.
This year we have seen people gravitate toward high-capacity storage, fueled by the growing volume and complexity of data. 2018 was full of the challenges that come with managing a massive media library, but with the implementation of high-density, scalable storage in 2019, these challenges will be significantly reduced. One of the main struggles we have seen is the difficulty of storing data in different locations, but as automation tools improve and become more accessible, users will be able to decide where their data is stored and for how long. In 2019, we are most likely to see organizations take advantage of this in a hybrid cloud model, creating the perfect IT balance.
The database world is rapidly moving towards a database-platform-as-a-service or “dbPaaS” model in which databases are consumed as a service from cloud providers. I anticipate this trend will increasingly apply to in-memory computing solutions as well. In-memory-computing-platform-as-a-service or imcPaaS solutions will enable companies to easily consume in-memory computing platforms as PaaS solutions on major cloud services such as AWS, Microsoft Azure, Oracle Cloud, Huawei Cloud, and more. We already see leading companies across a range of industries, from financial services to online business services to transportation and logistics, deploying the GridGain in-memory computing platform on private and public clouds for large-scale, mission-critical use cases. In-memory computing vendors are already making their products available as dbPaaS or imcPaaS solutions, and I predict those solutions will increase in functionality and add new services at an increasing rate in 2019.
Enterprise business units want agility from their data warehouses so that they can answer business questions at a very high velocity. IT teams have petabytes of data on their on-premises clusters and the ability to bring up thousands of containers with minimal administrative cost. Business users will use this cloud-native infrastructure to build self-service, transient, and short-lived data applications. With new technologies that allow us to share data context across multiple clouds, we’ll see organizations move seamlessly between private and public clouds. With this shared data, we expect the line between data residing in public and private clouds to blur.
Database sprawl will continue as different types of databases proliferate. App developers are creating a lot of data in a lot of different ways, but it’s all bumping into each other without a service-based solution flexible enough to house and manage this data. As it stands, developers are using multiple databases for each individual application, creating sprawl as users cobble together multiple databases to plug different holes in the system. While the short-term gain of being able to use emerging technologies and having many choices seems great up front, companies need to consider their long-term goals rather than settle for a cobbled-together, quick solution.
Opinions expressed by DZone contributors are their own.