What's Stopping the Democratization of AI?
With companies across industries waking up to the reality that adopting AI is no longer optional, the question has shifted to how its adoption and implementation can be simplified. In other words, how does one break down the immensely tall barriers around the complicated world of AI and leverage its undeniable advantages in managing the scale and complexity of all the data already being gathered through the Internet of Things (IoT)?
There’s no doubt that this is the need of the hour: every industry is fighting a losing battle with scale, the sheer magnitude of data streaming in from millions (at times billions) of sensors, tools, and pieces of equipment.
While giants such as Google and Facebook have the budgets to invest in AI and machine learning and leverage their advantages, how does the average company get a slice of the AI pie?
Currently, all this data is merely being amassed, with little being done to translate it into usable intelligence. Data and people are thus siloed, and any attempts at data analytics have so far usually been made from an extremely myopic perspective: with either one tool or one team, yielding a very localized view of a much larger context. For instance, a dashboard of results contains no trace of where its insights were sourced from, and a data table generated during one phase of a process may well be unusable in any process further downstream.
Everyone’s talking about the democratization of AI and machine learning, about opening it up to the masses.
The unfortunate truth is that the very challenges which created the need for AI and machine learning are what prevent their effective adoption.
Let’s take a look at these challenges.
Need to Reduce Cycle Time
While a majority of industries consider investing in machine learning to reduce the cycle time of their products and services, the cycle time for implementing machine learning itself is rather long. For instance, the process of gathering and cleaning data is lengthy and tedious; data scientists spend the majority of their time on this task.
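To make the data-cleaning burden concrete, here is a minimal sketch of what that step often looks like in practice. It assumes hypothetical sensor readings arriving as a pandas DataFrame (the column names `sensor_id`, `timestamp`, and `value` are illustrative, not from the article):

```python
import pandas as pd

def clean_sensor_data(df: pd.DataFrame) -> pd.DataFrame:
    """Typical cleaning chores: duplicates, unidentifiable rows, gaps."""
    df = df.drop_duplicates(subset=["sensor_id", "timestamp"])  # repeated readings
    df = df.dropna(subset=["sensor_id"])                        # rows with no source
    df["value"] = df["value"].interpolate()                     # fill gaps in readings
    return df.reset_index(drop=True)

raw = pd.DataFrame({
    "sensor_id": ["a", "a", "a", None, "b"],
    "timestamp": [1, 1, 2, 3, 1],
    "value": [10.0, 10.0, None, 5.0, 7.0],
})
clean = clean_sensor_data(raw)
```

Even this toy version hints at why the step is slow: every rule (what counts as a duplicate, how gaps should be filled) must be negotiated with domain experts before a single model is trained.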
The Great Skill Divide
Skill shortage is a common pain point in nearly every industry, whether from insufficient supply or insufficient accessibility. Whatever the case, the use of “smart” machines helps solve the problem. However, adopting these smart machines requires another group of smart people: data scientists. This opens up an entirely new skill shortage of its own. For one, these specialists are usually massively skilled (read: extremely expensive to hire). For another, they’re painfully few in number (read: infinitely more expensive to retain).
As organizations realize that remaining competitive in the marketplace hinges heavily on machine learning and artificial intelligence, there’s a huge demand for those trained in the field, far exceeding supply.
This is simply because AI, data science, and machine learning are currently leveraged only by scientists who have mastered the required number-crunching techniques. These scientists identify the right data, select the right algorithm, and create the right conditions for successful implementation. Their daily grind runs from brainstorming with business stakeholders to understand requirements, through data preparation (gathering, cleaning, and transforming the data into something meaningful) and data modeling (creating, testing, and optimizing each model), to iteration (until results are satisfactory).
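The model/iterate part of that grind can be sketched in a few lines. This is a deliberately simplified illustration using scikit-learn on synthetic data; the model choice, candidate settings, and "satisfactory" threshold are all assumptions for the sake of the example:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the prepared business data
X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

best_score, best_model = 0.0, None
for n_trees in (10, 50, 100):          # iterate over candidate models
    model = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    model.fit(X_tr, y_tr)              # create the model
    score = accuracy_score(y_te, model.predict(X_te))  # test it
    if score > best_score:             # keep the best so far
        best_score, best_model = score, model
```

In real projects this loop repeats across feature sets, algorithms, and stakeholder feedback rounds, which is exactly why the expertise is scarce and the cycle time long.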
Need for Scalable Learning
One of the biggest reasons companies adopt machine learning is the need to deal with the deluge of data pouring in from their sensors and equipment. The knee-jerk reaction is to automate the processing of this data. Often, however, that processing is guided, or trained, by humans; this is what we call supervised learning.
This type of machine learning, unfortunately, doesn’t scale to address the range of concerns facing most companies today, concerns that are difficult for humans to predict. In fact, the 20:80 rule of asset failure that plagues all industries remains unaddressed for want of scalable, or unsupervised, machine learning. In unsupervised learning, the machines themselves do the learning that would otherwise be directed by data scientists. The whole premise of this type of learning is that machines are capable of detecting patterns invisible to the human eye, and can therefore detect issues that humans can’t predict using manual approaches.
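As a minimal sketch of the idea, one common unsupervised approach is anomaly detection, where no human labels the data at all. The example below uses scikit-learn's IsolationForest on hypothetical sensor readings; the planted "fault" readings and contamination rate are assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # routine readings
faults = np.array([[8.0, 8.0], [-9.0, 7.5]])             # readings no one labeled
readings = np.vstack([normal, faults])

# The model learns what "normal" looks like with no human supervision
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(readings)   # -1 = flagged as anomaly, 1 = normal
```

Because nothing here requires a human to label failures in advance, this style of learning is what lets detection scale to failure modes that engineers never anticipated.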
AI is not merely about algorithms but about the value those algorithms generate. The need of the hour, therefore, is for every user involved to be able to navigate the obscure landscape of information and extract the intelligence needed to drive business goals at each step they’re involved in.
Published at DZone with permission of Anita Raj. See the original article here.
Opinions expressed by DZone contributors are their own.