AI/ML 2019 Predictions (Part 2)
More operationalization of data leads to more use cases.
Given the speed with which technology is changing, we thought it would be interesting to ask IT executives to share their predictions about what's in store for 2019. Here's what they told us about artificial intelligence (AI), machine learning (ML), and the other sectors of data science:
Advanced analytics and AI will be everywhere and in everything, including infrastructure operations. Analytics and AI will continue becoming more highly focused and purpose-built for specific problems, and these capabilities will increasingly be embedded in cloud platforms and management tools.
AI-driven infrastructure tools, for example, are now being used to analyze input from a myriad of monitoring and management tools. Many of these AI tools have endeavored to solve broad problems across the IT spectrum. In 2019 these will begin to evolve, becoming substantially more focused on the most critical problems, both routine and complex, encountered by IT staff. This much-anticipated capability will simplify IT operations, improve infrastructure and application robustness, and lower overall costs.
Along with this trend, AI and analytics will naturally become embedded in high availability (HA) and disaster recovery (DR) solutions, as well as cloud service provider (CSP) offerings, to enhance the robustness of their operations. With the ability to quickly, automatically, and accurately understand issues and diagnose problems across complex configurations, the reliability, and thus the availability, of critical application services delivered from the cloud will vastly improve.
The AI revolution, in the form of machine learning and deep learning (DL), continues to gather traction in the software development industry. With the introduction of graphics processing unit (GPU) acceleration, previous time and compute restrictions have been removed, and new, easy-to-use frameworks and data centers will make these technologies available to everyone in 2019.
In 2019, AI and ML will converge with automation and revolutionize DevOps. For the past several years, automation has continued to gain traction as an important aspect of the larger DevOps practice. So far the focus has mainly been on automating manual, repeatable tasks that are process- or event-driven, but new advancements in AI/ML show change is on the horizon. Through this convergence, automation has the potential to showcase unprecedented intelligence: new systems will look at trends, and analyze and correlate across entire value streams, to predict and prevent issues. As DevOps practices focus on increasing operational efficiency, the convergence of ML, AI, and automation will present a significant advantage for companies using DevOps. New and adaptive automation systems will give companies a competitive edge, as teams following DevOps practices will be able to make real-time decisions based on real-time feedback.
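The kind of trend analysis described above can be illustrated with a minimal sketch. This is not any specific product's algorithm, just a rolling z-score over a hypothetical pipeline metric (the metric name and thresholds are invented for illustration): values that deviate sharply from the recent window are flagged so automation could react before an incident.

```python
# Illustrative sketch (not a specific vendor's method): flag anomalous
# points in a metric series using a rolling z-score over a short window.
import statistics

def anomalies(series, window=5, threshold=3.0):
    """Return indices whose value deviates > threshold standard
    deviations from the mean of the preceding window."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mean = statistics.mean(hist)
        stdev = statistics.pstdev(hist) or 1e-9  # avoid divide-by-zero
        if abs(series[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Hypothetical build-pipeline latency samples (ms); the spike stands out.
latency_ms = [120, 118, 121, 119, 122, 120, 480, 121]
flagged = anomalies(latency_ms)
```

In a real DevOps toolchain this detection step would feed an automated response (rollback, scaling, alerting) rather than just a list of indices.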
ML projects will move from science projects and innovation labs to full production, led by industry disruptors, in 2019. Virtually every company has ML projects, but most of them rely on specialty platforms that cannot access all of the data relevant to business objectives. That data is stored across a variety of data warehouses and data lakes, none of which can run end-to-end ML, forcing data movement to the specialty platforms. Only a subset of data is then used to train and score ML models, resulting in limited accuracy. In 2019, current industry disruptors and smart traditional companies will bring ML to all of their data, instead of moving their data to the ML platforms. These companies will more accurately predict and influence outcomes, including predictive maintenance on medical devices, revenue prediction based on personalized customer behavior analytics, and proactive fraud detection and prevention.
AI and automation will change the economics of IT. Much of DevOps is still driven by people, even as the infrastructure itself becomes programmable. But data volumes are growing so fast, and applications evolving so quickly, that the infrastructure must be nimble enough not to become the bottleneck. We’ve already replaced many storage and network admins. In 2019, infrastructure will become increasingly programmable, and AI-based machines will predict storage and compute needs and allocate resources automatically based on network conditions, workloads, and historical patterns.
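To make the idea of predicting storage needs from historical patterns concrete, here is a deliberately simple sketch. All names, numbers, and the headroom factor are invented assumptions; a real system would use far richer models, but the shape is the same: extrapolate demand, then provision ahead of it.

```python
# Hypothetical sketch: forecast storage demand from historical daily
# usage with a least-squares trend line, then allocate with headroom.

def forecast_usage(history, steps_ahead):
    """Extrapolate daily usage (GB) along a least-squares fitted line."""
    n = len(history)
    mean_x = (n - 1) / 2
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in enumerate(history))
    slope /= sum((x - mean_x) ** 2 for x in range(n))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + steps_ahead)

def recommended_allocation(history, steps_ahead=7, headroom=1.2):
    """Provision the week-ahead forecast plus 20% headroom (assumed)."""
    return forecast_usage(history, steps_ahead) * headroom

usage_gb = [100, 110, 118, 131, 142, 149, 160]  # last seven days
allocation = recommended_allocation(usage_gb)
```

An automated allocator would run this continuously and resize volumes before workloads hit their limits, instead of waiting for an admin to react.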
AI/ML will continue to grow in importance in 2019 across all sectors as businesses attempt to streamline their operations and make the “machines” do the heavy lifting when it comes to drawing inferences from big data.
Increased security measures to deter AI weaponization. Implementing automation and IoT in our lives has plenty of benefits, but the opportunity for weaponization is also present. This isn't unique, though, as all technology has faced weaponization in the past. What's absolutely necessary for the future is for data scientists and security experts to work in unison to ensure safety and protection.
Karen Inbar, Solution Marketer, NICE
Robotic automation will inspire new jobs. With the onset of robotic process automation (RPA), new roles are being developed within the organization. In 2019, more companies will adopt new specialized positions and roles such as RPA engineer, RPA architect, and RPA consultant, to help employees understand RPA best practices and how RPA can augment workflows. New job titles such as “Chief Robotics Officer” will start to crop up too as RPA technology becomes more popular and attractive in the workplace.
Companies will be more selective about which processes to automate. Many automation projects in 2018 failed because they were targeting the wrong processes to automate. In 2019, companies will pay more attention to the number of users for any given process and more closely assess the time allotment and complexity of tasks. This strategic reprioritization of automated tasks will ensure that organizations are driving ROI and success when it comes to digital transformation efforts. Once organizations master the automation of more simplistic tasks, they can start bringing in more advanced technologies, such as Optical Character Recognition (OCR), enabling more elements of the data to be interpreted by unattended robots.
By now we know data sets used for ML are getting larger every year. Not only cumulatively, but also because the sources (cameras, IoT sensors, software logs, etc.) are growing more numerous and increasing in resolution. Simply expecting data growth for the usual reasons isn’t much of a prediction.
I could also “predict” that ML researchers will harness ever-greater numbers of increasingly powerful GPUs to feast on unprecedented quantities of data. But that’s just an observation of current trends and not much of a prediction.
Instead, let’s look at how these ever-larger bodies of data will be used. I can predict that as ML training relaxes its parameters, allowing the software itself to reduce training errors (as some of the best minds in AI research anticipate), there will be consequences for the storage infrastructure supporting the ML computational environment. Here’s why.
There is a sea change sweeping across the field. DeepMind, a leader in ML (owned by Google/Alphabet), recently published a significant paper: “Relational inductive biases, deep learning, and graph networks.” One key takeaway is that ML training may evolve toward a more freehand approach, allowing its software to affect the selection criteria (via patterns of reasoning) for its learning pathways. This will impact the data storage infrastructure.
Within any large data set today we have a “working set” — the subset of data that is most active, usually dominated by the most recent data. For example, of the many petabytes a group of ML researchers may have accumulated for training, the usual situation in their datacenter is that only a few hundred terabytes of that total pool would be promoted from slower “cold” storage into a “hot” tier of fast storage where their GPUs could access it. However, with this sea change, it will become difficult to determine a priori which data should belong to any given working set. Instead, it may become appropriate to treat the whole pool as potentially necessary. Each of the various ML jobs in progress would choose differently from among all those petabytes, and that would dictate placing all data in the hot tier.
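The argument above can be demonstrated with a toy simulation (capacities and access patterns are invented for illustration). A hot tier managed like a classic LRU cache works well when accesses are skewed toward a stable working set; when every training job samples roughly uniformly across the whole pool, the hit rate collapses and the tiering stops paying off:

```python
# Toy model: an LRU-managed "hot" tier of fast storage in front of a
# large cold pool. Block counts and capacities are illustrative only.
from collections import OrderedDict
import random

class HotTier:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # block id -> None, in LRU order
        self.hits = self.misses = 0

    def access(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)       # refresh recency
            self.hits += 1
        else:
            self.misses += 1
            self.blocks[block] = None            # promote from cold tier
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict least recent

    def hit_rate(self):
        return self.hits / (self.hits + self.misses)

random.seed(0)
# Skewed accesses: a stable working set barely larger than the tier.
skewed_tier = HotTier(capacity=100)
for _ in range(10_000):
    skewed_tier.access(random.randint(0, 120))
# Uniform accesses across the whole 10,000-block pool.
uniform_tier = HotTier(capacity=100)
for _ in range(10_000):
    uniform_tier.access(random.randint(0, 9_999))
skewed, uniform = skewed_tier.hit_rate(), uniform_tier.hit_rate()
```

When `uniform` is the realistic pattern, the economics invert: either the hot tier must grow to hold (nearly) everything, or the GPUs stall on cold reads.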
We have already started to see marketers increasingly taking an AI-first approach to create truly intimate and hyper-personalized customer experiences. This has been made possible by using ML and natural language processing (NLP) techniques to build a detailed cognitive profile of each customer or prospect, rather than just building common customer databases. In 2019, we will see AI come to the forefront in marketing like never before.
AI skills continue to be scarce, and people don't scale, requiring platforms that can automatically perform AI functions such as model management, model maintenance, and model retirement and replacement.
Most AI features of mainstream tech products will fail to get past being a novelty. However, healthcare will make leaps and bounds in its use of AI. For example, AI diagnosis will start gaining mainstream acceptance, much faster than people currently think.
Facial recognition will become the new privacy frontier, and Facebook will be at the center of the storm. If Facebook sells your facial image to a retailer, who then detects you in-store and says “Hi XX ... you want to buy ...”, that’s definitely crossing a line. Consumers are accepting of sharing data; however, using people’s actual faces is a step too far at the moment. This will help accelerate tech regulation around AI.
Chief Analytics Officers (CAOs) and Chief Data Officers (CDOs) will need to supervise AI. There are myriad decisions that must be made when a company extends its use of AI. Implications exist for privacy regulation, but there are also legal, ethical, and cultural implications that warrant the creation in 2019 of a specialized role with executive oversight of AI usage. In some cases, AI has demonstrated unfavorable behavior such as racial profiling, unfairly denying individuals loans, and incorrectly identifying basic information about users. CAOs and CDOs will need to supervise AI training to ensure AI decisions avoid harm. Further, AI must be trained to deal with real human dilemmas and to prioritize justice, accountability, responsibility, transparency, and well-being, while also detecting hacking, exploitation, and misuse of data.
Explainable AI will become a requirement, especially in the financial/banking and medical industries. If AI makes a medical recommendation for an individual’s health or treatment, the doctor must be able to explain what logic and data were used to reach that conclusion. We are not yet at a point in our relationship with AI where many people are willing to take medication or have surgery on the strength of an AI recommendation, especially if the involved medical professional can’t explain the “why” behind it. In the financial industry, we’ll see the use of automated analysis and cognitive messaging to provide financial guidance and investment recommendations on stocks, bonds, estates, and other assets based on customers’ needs. Here, too, consumers will require an explanation of a decision based on AI.
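One simple route to the explainability described above is an inherently interpretable model, where each feature's contribution to the score can be reported alongside the decision. The sketch below uses an invented linear risk score with made-up features and weights purely for illustration; real clinical or financial models and their explanation methods (and the regulatory bar they must meet) are far more involved.

```python
# Hedged sketch: a linear score whose per-feature contributions double
# as the explanation. Features, weights, and bias are invented.
weights = {"age": 0.02, "blood_pressure": 0.05, "cholesterol": 0.03}
bias = -9.0

def score_and_explain(patient):
    """Return (score, contributions sorted largest-first) so a clinician
    can see exactly which inputs drove the recommendation."""
    contributions = {f: weights[f] * patient[f] for f in weights}
    score = bias + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -kv[1])
    return score, ranked

patient = {"age": 62, "blood_pressure": 145, "cholesterol": 210}
score, explanation = score_and_explain(patient)
```

The point is the interface, not the model: whatever sits behind the recommendation, the consumer-facing system must be able to surface a ranked, human-readable account of why the decision came out the way it did.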
Opinions expressed by DZone contributors are their own.