Big Data 2019 Predictions (Part 3)
We anticipate more real-time decision making and recommendations to improve customer experience.
Given the speed with which technology is changing, we thought it would be interesting to ask IT executives for their 2019 predictions. Here's more of what they told us about big data:
Because of the significant shortage of data science experts, we'll see a surge in adoption of easy-to-use analytic applications. These will enable data scientists to productionize data analytics by making applications more broadly accessible, interactive, and ultimately usable by business analysts, IT, and others in the organization. Specific to ModelOps (DevOps for data science), in 2019 continuous integration and delivery platforms will help simplify the complexity of tools, languages, and platforms, from big data to advanced analytics.
We are beginning to see a shift in intent around data from organizations large and small. The focus has shifted to how organizations can best use the data they’ve collected, and to a lesser extent, how they can protect and secure that data in an era of increasing regulation and consumer sensitivity toward data collection. They want to know how to speed up time to insight, how to collect the right data (as opposed to just collecting all data, all the time), and how to operate more efficiently, both in the data department and as organizations in general.
Specific key trends that inform this change in vision:
- In-memory solutions to speed processing. Rather than depending on slow-spinning metal, these products build terabytes of tables and relationships in fast RAM to speed computation and retrieval. This also benefits high-velocity decision making (paired with edge computing), since the platform itself can actually keep up.
- Predictive analytics. Harnessing the continually evolving power of machine learning, predictive analytics can help organizations forecast failure, change, and bottlenecks in all manner of applications from products to the factory floor. This is where most organizations will find cost savings and margin improvements and they are eager to expand their practice in this area, which will necessarily also involve investments in data pipelines and real-time, or streaming, data.
- Intelligent security. This involves subjecting security logs and edge-device sensor data to standard big data analysis in order to derive insights about network attacks and penetrations. It applies both to an organization's data lake/fabric and to its network perimeter in general. The hackers aren't going away, and any target will do.
- Edge computing. Moving processing closer to the sensors at the edge of the network helps decrease bandwidth needs, storage costs for raw data, and processing power in a centralized data lake/fabric situation. Fast action on insights is possible as well, especially when that action involves changing some environment variable close to the sensor quickly and easily.
- Visualization, aka “self-service business intelligence.” Removing the need for expensive data scientists to be the sole interface between a data lake and the end business users, self-service visualization and analytics tools will continue to grow in capability and popularity. Data scientists will still be in high demand, of course, but democratizing access to data across an organization is a key trend to watch, and one to harness as well.
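The edge-computing pattern described above can be sketched in a few lines: rather than streaming every raw reading to a central data lake, an edge node forwards only readings that deviate meaningfully from a local rolling baseline. The sensor values, window size, and threshold below are purely illustrative.

```python
# Illustrative edge filter: forward only readings that deviate from a
# rolling local baseline, instead of shipping every raw reading to a
# central data lake. Sensor values and thresholds are made up.
from collections import deque

def edge_filter(readings, window=5, threshold=2.0):
    """Yield readings that deviate from the rolling mean of the last
    `window` values by more than `threshold` times the window's mean
    absolute deviation."""
    recent = deque(maxlen=window)
    for r in readings:
        if len(recent) == window:
            mean = sum(recent) / window
            dev = sum(abs(x - mean) for x in recent) / window
            if dev > 0 and abs(r - mean) > threshold * dev:
                yield r  # anomalous reading: worth sending upstream
        recent.append(r)

stream = [20.0, 20.1, 19.9, 20.0, 20.2, 35.0, 20.1, 20.0]
forwarded = list(edge_filter(stream))  # only the 35.0 spike is forwarded
```

In this toy version the edge node ships one reading out of eight upstream, which is the bandwidth and storage saving the trend is about; a real deployment would tune the baseline per sensor.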
ERP and CRM vendors alike will increasingly leverage true user behavior data in their push for real-life business applications infused with machine learning and AI capabilities. In the ERP space specifically, the focus will be the automation of repetitive business processes. Within the next five years, we expect over half of back-end processes to be fully automated, and user analytics will be a key ingredient for intelligent automation.
In the CRM space, the ML/AI algorithms will be augmented with real-time user data, as a way to make applications more responsive to human interactions. We are also predicting a similar trend in the APM space, with more and more APM vendors embedding automated and predictive capabilities into their solutions, in conjunction with augmenting their traditional technical KPIs and SLAs with user-centric metrics, to gauge true user experience and business impact of IT solutions.
Self-service is going to be the name of the game. Business users and data scientists are not going to be content with waiting for constrained resources to provide access and analytics.
Data localization laws will make IT significantly harder. Data localization laws, like those we're seeing in China, Russia, and now in the EU under the GDPR, will make IT far more challenging than in the past. As a result, some companies will have to abandon smaller, strictly regulated markets as they struggle to meet the technology requirements these regulations impose. Many will also turn to experts to address the complexities of regulation and distributed customer data as the volume and geographic spread of data continue to expand.
I believe the focus in big data and analytics in 2019 moves from building large data lakes with petabytes of data to getting true value out of those immense lakes. Enterprises and governments will look to leverage their multi-million dollar investments in Hadoop data lakes full of customer data to drive business outcomes, such as increased cross-sell/up-sell, improved marketing campaign efficacy, reduced costs, and better-managed risk.
Recent years have seen significant progress on tools that make data analytics more real-time and more automatic. In 2019 I expect a trend toward improved predictive models that provide deeper insight into your data, so you can ask not only what, but also why and who.
In 2018, data security claimed a place at the boardroom table, cementing its position as a major concern for organizations at all levels. As we head into 2019, it will continue to draw attention. Ensuring data security is not just good business practice; it is crucial for companies to survive. Fines for breaches of the General Data Protection Regulation (GDPR) can reach four percent of global annual revenue – a price high enough to make non-compliance cost-prohibitive. We can expect to see many countries, and even the technology community, create additional data protection policies at the local or global level to safeguard against data loss.
Organizations are hitting the terabyte triangle in their big data operations: they need the ability to scale with data growth, manage real-time data, and see that real-time data within the context of historical data. Most solutions can only cover two corners of the triangle. At the multi-terabyte (TB) level, you can use Kafka and Spark to get real-time and scale; with Hadoop and Vertica, you can have historical and scale; and with current log-analysis and specialized tools like the Elastic Stack, Splunk, and Cloudera, you can have historical and some real-time. But you can't have all three with any of those approaches. What you need is a single solution that lets you break through the terabyte triangle. This is an emerging problem at the largest companies, but smaller enterprises are experiencing huge data growth and are going to catch up with the big guys fast.
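The "real-time within the context of historical data" requirement can be made concrete with a toy structure that serves a small window of recent raw events alongside a long-term aggregate. In practice the window would live in a stream platform such as Kafka and the aggregate in the lake; all names and sizes below are hypothetical.

```python
# Toy sketch of the terabyte-triangle requirement: a recent-events
# window served side by side with a rolling historical aggregate.
# In production the two halves would be separate systems.
class MetricView:
    def __init__(self):
        self.hist_count = 0
        self.hist_sum = 0.0  # long-term aggregate (would live in the lake)
        self.window = []     # recent raw events (would live in a stream store)

    def ingest(self, value, window_size=3):
        self.window.append(value)
        if len(self.window) > window_size:
            # roll the oldest event out of the window into the aggregate
            old = self.window.pop(0)
            self.hist_count += 1
            self.hist_sum += old

    def snapshot(self):
        """Return the live window together with its historical baseline."""
        hist_avg = self.hist_sum / self.hist_count if self.hist_count else None
        return {"recent": list(self.window), "historical_avg": hist_avg}

view = MetricView()
for v in [10.0, 12.0, 11.0, 50.0, 13.0]:
    view.ingest(v)
snap = view.snapshot()  # recent spike visible against a ~11.0 baseline
```

The point of the single-solution argument above is that both halves of this snapshot should come from one system, rather than being stitched together from a streaming tier and a batch tier.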
The data growth train is heading for a wreck: We’re seeing many Fortune 1000 and 2000 companies sacrifice insight from their data operations because current solutions can’t meet their needs. These enterprises aren’t getting the insight they need from data, and as a consequence are missing business opportunities, while also exposing their organizations to the risk of security breaches. Some companies look at 20-40% of their data. They’re letting 60-80% of it drop on the floor because they can’t handle the volume, the number of new data sources, or the speed at which incoming data is growing.
Many organizations now claim to be data-driven, taking (often fully automated) decisions based on data rather than on figures in reports. The next evolution will be towards data-centricity. This is where an absolute commitment to high-quality centralized data forms the core of business operations that tools are built around, rather than the current status quo of building tools that act upon the organizational data silos. It becomes less about collecting and hoarding data – the big data mentality – and more about acting on data intelligently. This evolution will be accelerated by GDPR, which underpins much of the latest thinking about how businesses use data.
Customer service will become progressively more user-profile- and data-driven (the way marketing has). Marketing communications service providers have established deep capabilities and proven results in the last decade, in part by optimizing their communications strategies around finely honed user profile segmentation. As a logical extension of context awareness, leveraging and maximizing the available data about each customer can offer real differentiation opportunities for the modern contact center. By drawing insights from basic profile metrics like lifetime customer value (LCV), core customer historical data, channel preferences, and more advanced preferences like the voice or personality of the support agent, it's possible to elevate support experiences, strengthen brand appreciation, increase LCV, and help drive positive organic social media reach and impact.
Demand for smart analytical applications will redefine enterprise data practices. Enterprises are in a race to become data-powered businesses, yet only a small fraction of the value of advanced analytics has been unlocked. In 2019 there will be high demand for new innovations around smart analytical applications driven by real-time interactions, embedded analytics, and AI. The business requirements for these applications will be a forcing function for enterprise data teams to evolve their traditional big data architecture and implement a new active analytics tier for building intelligent, data-powered applications for real-time decisioning.
The combination of Whole Foods and Amazon will merge data sources that enable precise targeting and profiling of high-income consumers, resulting in new products and product categories.
The growth of data visualization will be one of the most salient big data trends in 2019. As data becomes increasingly complex, visuals such as dynamic graphics, maps, charts and more make data digestible. Visualizations help to turn data into meaningful information. In turn, this allows marketers to more effectively gain insight and more quickly make smarter marketing decisions because we can focus on what is important and not get distracted by the noise. The way in which you package and interpret your data is one component of moving from being a data-distressed organization to being a truly intelligence-driven leader.
Adoption of Hadoop will dwindle and its cluster growth will grind to a halt. Also, expect to see new logos for Hadoop.
Governments around the world will demand increased transparency from companies that have amassed large amounts of PII data. Big data platforms fueled the rise of multiple mega-corps; now the pendulum has swung the other way, and society needs to ensure these platforms are not abused by adversarial nations or rogue individuals.
Throughout the past few years, we’ve seen a consistent influx of data, but in 2018, the quality of that data was finally enhanced. As data volume and variety continues to increase in 2019, data ingestion and analysis will transform to just-in-time and real-time streams, utilizing the power of machine learning. This will create new opportunities for application vendors creating embedded applications to report on streaming data while also offering businesses using predictive models the next set of actions to take in real time. This shift will help organizations process data faster and more efficiently.
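The real-time "next set of actions" idea above can be sketched as a stream of events scored as they arrive, with an action emitted the moment a score crosses a threshold rather than in a later batch report. The device names, weighting, and threshold here are invented for illustration.

```python
# Hedged sketch of streaming next-action scoring: keep an exponentially
# weighted error score per device and suggest an action in real time
# when the score crosses a threshold. All names/values are illustrative.
def stream_actions(events, alpha=0.5, alert_at=0.8):
    """events: iterable of (device, is_error) pairs, in arrival order.
    Returns the (device, action) suggestions emitted along the way."""
    scores = {}
    actions = []
    for device, is_error in events:
        prev = scores.get(device, 0.0)
        # new score blends the latest observation with the running score
        score = alpha * (1.0 if is_error else 0.0) + (1 - alpha) * prev
        scores[device] = score
        if score >= alert_at:
            actions.append((device, "schedule_maintenance"))
    return actions

events = [("pump-1", True), ("pump-2", False), ("pump-1", True),
          ("pump-1", True), ("pump-2", True)]
actions = stream_actions(events)  # pump-1 trips the threshold on its 3rd error
```

The exponential weighting is the design choice that makes this "just-in-time": one isolated error decays away, while a burst of errors on the same device triggers an action within a few events.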
As big data continues to proliferate, there will be an increasing need for technology that enables personal and contextual access. While technology continues to drive the creation of big data in 2019 and beyond, new innovations will increasingly help people and organizations leverage big data to enable users to make better-informed decisions. An area to keep an eye on next year is also the increasing focus on privacy around big data. GDPR and the California Consumer Privacy Act are just the beginning, and I expect to see more privacy regulation discussions next year.
2019 will be the year when companies that lead successful change initiatives do so by implementing omnichannel analytics from the contact center to get to the root of what customers want.
Companies know they must adapt to change in order to grow and stay relevant, but they aren't relying on enough data to inform change. In 2018, we discovered a discrepancy between the value executives place on analytics and the extent to which data is actually analyzed. We surveyed 1,000 executives and found that while nearly all agree data and analytics are integral to informing sales and marketing changes, more than half currently rely on only one data point (such as revenue figures or social media interactions) to inform decisions. They aren't tapping voice-of-the-customer data to understand what their customers want.
In 2019 this will change. IDC estimates enterprises will spend in excess of $2 trillion on digital transformation in 2019. Companies that lead successful change initiatives will do so by implementing omnichannel analytics from the contact center to get to the root of what customers want. Improved data integration and reporting across the business will give the C-suite easier access to data and the knowledge needed to fully optimize customer experiences.
Opinions expressed by DZone contributors are their own.