AI/ML 2019 Predictions (Part 3)
Solutions and platforms are coming to market to democratize data science as more individuals and organizations are recognizing the value.
Given the speed with which technology is changing, we thought it would be interesting to ask IT executives to share their predictions for 2019. Here are some additional predictions for artificial intelligence (AI), machine learning (ML), and the other subsectors of data science:
Overall, the hype cycle around AI will start to die down. We’ll see Pragmatic AI deployed successfully in select application areas, but Pure AI will remain elusive. I believe there will be a focus on operationalizing data science, and an increasingly open and collaborative system will be set in motion to share data science models through open marketplaces. This will require governance as machine learning frameworks across organizations will be used to continuously validate, test, and deploy high-quality data science models and services. In terms of hype, automation will continue to generate buzz throughout 2019, but people will come to realize that productionizing proprietary automation tools is not as easy as often promised.
ROI of AI. In recent years, organizations have been racing to implement AI into their business functions. As vendors have touted the possibilities, expectations were often extremely high for what this technology could achieve despite its infancy. After early exploration and investments with limited ROI to show, the hype and myths that surround AI are evaporating. 2019 will see businesses give themselves a reality check on AI and set more realistic and meaningful expectations for their return on investment.
While chatbots have become increasingly prevalent in customer service and internal processes, predictive analytics and machine learning amongst other technologies have also come to fall under the AI umbrella. As this definition expands, and AI becomes more closely aligned with augmented intelligence, 2019 will see businesses uncover and implement more AI use cases that bring tangible, measurable value to their organizations.
Data scientists will begin to recognize that relationships, networks, and communities are often much more predictive than individual features. As such, we will see more data scientists leveraging graph data structures and algorithms to build their models, as opposed to traditional tables and matrices.
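To make the contrast concrete, here is a minimal sketch of deriving relational features from a graph rather than a flat, row-per-entity table. The edge list and entity names are invented purely for illustration; in practice these features would feed a downstream model.

```python
# Sketch: network-derived features (degree, local clustering) that a flat
# feature table cannot express. Edge list below is hypothetical.
from collections import defaultdict

edges = [("alice", "bob"), ("bob", "carol"), ("carol", "alice"), ("dave", "alice")]

# Build an undirected adjacency map
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def degree(node):
    """How many direct relationships the node has."""
    return len(adj[node])

def clustering(node):
    """Fraction of a node's neighbor pairs that are themselves connected."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return links / (k * (k - 1) / 2)

# Relational features per entity: (degree, clustering coefficient)
features = {n: (degree(n), clustering(n)) for n in adj}
```

The same idea scales up with graph libraries (e.g., NetworkX's centrality and community algorithms) once the data outgrows a toy adjacency map.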
2019 is the year that AI unlocks the tremendous value of productivity in the industrial world. More companies are coming to market with vertical solutions that require little know-how in training models or interpreting results. This focused approach can be used by anyone and enables very quick time-to-value at large scale. This shift will increase productivity and safety and will open the doors for new business models throughout the industry, like Outcome as a Service.
We are starting to see AI affect the real world, going beyond BI and consumer use cases. Now, the industrial world is leading innovation. As more machines get connected, artificial intelligence has a critical role to play in parsing through the flood of data.
The growing emphasis on AI and machine learning will make TensorFlow and H2O breakout technologies in 2019. In addition, Spark and Kafka will continue to see spiking popularity, as they did in 2018.
M&A transaction volumes will continue to accelerate as cloud business models mature at a rapid pace. Major acquisitions of rising artificial intelligence players will be made to secure highly in-demand and scarce IP and talent in AI and ML. Google and Apple have led the way in acquiring budding AI technologies, and less innovative tech giants will try to mimic their success with purchases of their own.
Personalization. While B2B providers have been slow to adapt to the high standard of personalized digital experiences set by Amazon and Google, the industry has at least acknowledged the value of personalized home and landing pages. As customer expectations increase, enterprises will need to keep pace by using ML and AI to offer a personalized experience beyond the first impression, which extends to other assets such as technical documentation, community portals, and chatbots.
The explosive innovation in AI will continue in 2019 with areas like reinforcement learning and deep learning pushing the industry forward. Reinforcement learning is gaining traction as it improves an agent's response by using lots of data, oftentimes synthetic data, to make better decisions. We are seeing this in text/speech responses as well as in optimizing process-driven robotics. Deep learning at “the edge” is emerging as a cost and time-saving technique as well as a customer-driven necessity as AI is expected in all personal devices.
Along with those ML techniques, AI ethics and interpretable AI are issues challenging the industry. Teaching an agent how to be ethical is much easier said than done, and there’s the question of who the arbiter of ethics should be. Interpretable AI is essential for industries such as finance and healthcare: how conclusions are reached needs to be clear, both for regulatory reasons and to gain an understanding of their impact. These are questions that are making the rounds and will have a significant impact on all of our futures.
In 2019, we’ll see the next wave of analytics innovation, which will leverage AI and ML features to give enterprise users the timely, contextual insights they need. With the smart capabilities of augmented analytics, business users can generate high-quality, actionable insights in real time, driving real ROI and reducing poor decisions.
The development of AI and ML technologies in call centers and other customer-centric processes has given organizations, especially insurance companies, an opportunity to better mine and analyze data on a large scale and expedite the customer experience. In 2019, the insurance industry will need to leverage the power of AI and automate the customer experience so that users can interact faster and more efficiently – companies that do not adopt AI automation risk losing customers whose experiences fall short of those they have elsewhere at home and work.
Next year we'll see a new step in maturity in the enterprise ML transformation as companies advance from proofs-of-concept to production capabilities. Enterprise ML adoption will continue as businesses look to automate pattern detection, prediction, and decision making to drive transformational efficiency improvement, competitive differentiation, and growth. We’ll see infrastructure and tooling evolve around efforts to streamline the process of building and deploying ML apps at enterprise scale, including the rise of cloud-native platforms to enable elastic auto-scaling and multi-cloud portability for end-to-end machine learning workflows.
AI and ML have been a lot like the weather: A lot of people talking about it, but no one doing anything about it. That’s going to change. As business people begin to recognize the potential of AI/ML, adoption is going to be on the uptick.
The last few years have seen AI models pushing the boundaries of understanding and generating language (most notably, news translation). I expect more Natural Language Processing (NLP) milestones to fall at an accelerated rate in 2019 due to the following factors:
- Language interpretation is context-dependent, meaning to truly make sense of one’s writing or speech requires knowledge of the participants, their history, and prior communications. Most NLP work has done language interpretation or generation without these factors, but I expect we will see NLP performance improve and become more personalized by incorporating more audience-aware knowledge.
- A dirty little secret about industrial-strength AI is that many of these systems are trained and evaluated on datasets created and labeled by thousands (or more) human raters. As we tackle more complex AI problems, the need for massive amounts of high-quality human judgments will increase, but there will be breakthroughs in leveraging machine learning techniques to make collecting those judgments more time- and cost-efficient.
- At the same time, methods which use minimal or even no labeled data (aka unsupervised techniques) will reduce our reliance on large swaths of labeled data, enabling deep learning models to be more robust on new and different types of problems.
- Advances in model architecture and infrastructure are enabling rich, deep learning models to work in lower-resource settings, such as on mobile phones and in web browsers. In the future, we expect to see more sophisticated models providing feedback to users in all settings, even without internet connectivity.
In 2019, I suspect continued rapid development in deep learning research on NLP, with interesting experiments in tuning models trained in one domain to adapt to other domains through transfer learning. I also expect Google and Facebook to continue publishing on pre-trained embeddings, so that other modeling applications can be built on top of existing ones. Additionally, I’m looking forward to seeing more adaptation of deep learning methods from computer vision to NLP in 2019. These three things will impact QA systems, machine translation, text classification, sentiment analysis, and text summarization, to name a few.
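The transfer-learning idea mentioned above can be sketched in miniature: reuse "pre-trained" word vectors as fixed features for a new downstream task. The tiny vectors, words, and sentiment task here are invented for illustration; real systems transfer embeddings such as word2vec, GloVe, or BERT trained on large corpora.

```python
# Sketch: building a downstream classifier on top of transferred features.
# The vectors below stand in for embeddings pre-trained on a large corpus.
pretrained = {
    "great": (0.9, 0.1),
    "awful": (0.1, 0.9),
    "good":  (0.8, 0.2),
    "bad":   (0.2, 0.8),
}

def embed(text):
    """Average the pre-trained vectors of known words (a classic baseline)."""
    vecs = [pretrained[w] for w in text.lower().split() if w in pretrained]
    if not vecs:
        return (0.0, 0.0)
    return tuple(sum(dim) / len(vecs) for dim in zip(*vecs))

def sentiment(text):
    """Downstream task reusing the transferred representation, not raw text."""
    pos, neg = embed(text)
    return "positive" if pos >= neg else "negative"
```

The downstream model never sees raw text, only the transferred representation – which is why pre-trained embeddings let new applications be built on top of existing ones.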
I will be looking for a steady graduation from confined experiments and pilot implementations, to machine learning being a mainstay feature in systems across the enterprise. I hope to see an even sharper tack towards more functional, guided deep learning solutions for subject matter experts in fields such as financial services, life sciences, and public administration, planning and safety. This in contrast to having to rely on lower-level toolchains designed primarily for data scientists. As these things develop in 2019, regular practitioners like risk managers will benefit from richer insights without having to engage data scientists.
I am most excited about the rise of AI-powered human interfaces like AI-powered assistants, ML-analyzed wearables, and augmented reality. I expect to see a large increase in the amount of human-generated biometric data and am really excited to see how that data is going to be analyzed and used for predictive analytics, diagnostics, augmented living arrangements, and information retrieval. Also, with the recent advances in computer vision (with more to come in 2019), I am very excited to see how AR will start getting applied in various industries.
Deep learning models have been shown to be vulnerable to imperceptible perturbations in data, which dupe models into making wrong predictions or classifications. With the growing reliance on large datasets, AI systems will need to guard against such attacks, and the savviest adopters will increasingly look into adversarial ML techniques to train models to be robust against them.
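A minimal sketch of the kind of perturbation described above, in the spirit of the fast gradient sign method (FGSM). The toy linear model and input values are invented for illustration; real attacks perturb images or other high-dimensional inputs against deep networks.

```python
# Sketch: a small, bounded perturbation flips a linear classifier's decision.
# Weights and input are hypothetical.
weights = [2.0, -3.0, 1.0]   # toy linear model: score = w . x
x = [0.5, 0.2, 0.9]          # clean input, classified positive (score > 0)

def score(x):
    """Linear decision score; sign determines the predicted class."""
    return sum(w * xi for w, xi in zip(weights, x))

def perturb(x, eps):
    """FGSM-style step: nudge each feature by eps against the weight's sign,
    the direction that most decreases the score per unit of perturbation."""
    sign = lambda w: 1.0 if w > 0 else -1.0
    return [xi - eps * sign(w) for w, xi in zip(weights, x)]

x_adv = perturb(x, eps=0.4)   # bounded change per feature flips the prediction
```

Adversarial training, one of the defenses the paragraph alludes to, works by generating such perturbed examples during training and teaching the model to classify them correctly.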
Hybrid and dedicated clouds will drive massive growth in machine learning (ML) projects. ML is poised for explosive growth over the next two years, with an increasing number of projects moving into production by 2020, based on a recent survey of 344 technology and IT professionals. More than 80% of those surveyed confirm that they plan to use hybrid cloud for ML projects while keeping costs down. Univa customers are already asking for guidance with migrating their HPC and machine learning workloads to the cloud or a hybrid environment, as they look to advance their ML projects into production.
AI/ML will be making its way into enterprise apps. We have been talking about AI being one of the hottest trends for the past two years. We are starting to see AI and machine learning steadily making its way into enterprise applications for tasks such as customer support, fraud analytics and business intelligence. There is every reason to believe that these innovations will continue to happen in the cloud, and 2019 will be a big year for AI in the enterprise.
HPC and GPUs will play a critical role in advancing machine learning projects. GPUs have found a great home in HPC, where many tasks like simulations, financial modeling and 3D rendering also run well in a parallel environment. According to Intersect 360, a market research firm that follows the HPC market, 34 of the 50 most popular HPC application packages offer GPU support, including all of the top 15 HPC apps. Therefore, GPUs are becoming essential in HPC. Scientists, enterprise researchers, universities and research institutes all know that speeding up applications is nothing but good for business – and research.
Opinions expressed by DZone contributors are their own.