Are We Making AI Inefficient by Molding It to Human Thinking?
See whether we are making AI inefficient by molding it to human thinking.
Artificial Intelligence (AI) continues to make leaps and bounds in innovation and results. Recent developments in AI and the technology behind it have surprised many stakeholders, including AI researchers themselves. AI has grown remarkably capable, affecting everything from our social media feeds and Netflix recommendations to larger systems like smart cities.
On a smaller scale, consumer-facing AI is gaining a lot of traction. Google, for instance, wowed everyone with Google Duplex, and Amazon is ramping up its own AI research while making GPU-intensive AWS instances available to more AI researchers.
AI development isn’t without its challenges. One of the biggest and most recent questions asked by AI researchers is whether we are making AI inefficient by molding it to human thinking. There are some interesting problems that triggered this question too.
Machine Learning and Artificial Intelligence
Before we dive into those questions, however, we need to look at the basic concepts of artificial intelligence. AI learns from data streams through machine learning. There is one important point to understand here: AI cannot process information it has not learned from.
Human input is still needed at different stages of machine learning. When a vision AI needs to learn to differentiate male from female faces, it needs data streams fed to it manually by human operators. Those data streams, usually containing thousands of photos or videos with labels attached to them, aren't always neutral.
Unlike humans, however, AI doesn't always require a predetermined set of labels to start learning. It can process data streams independently, find similarities and patterns along the way, and then make decisions based on what it has learned from those data streams.
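The contrast between the two modes of learning described above can be sketched in a few lines of plain Python. This is a deliberately tiny, hypothetical example (one-dimensional "images", made-up values): the supervised half needs human-provided labels, while the unsupervised half groups the same kind of data purely by proximity.

```python
# Supervised learning: humans supply the labels ("A", "B") with the data.
labeled = [(0.9, "A"), (1.1, "A"), (5.0, "B"), (5.2, "B")]  # hypothetical data

# Learn one centroid (average value) per human-provided label.
centroids = {}
for label in {lab for _, lab in labeled}:
    values = [x for x, lab in labeled if lab == label]
    centroids[label] = sum(values) / len(values)

def classify(x):
    # Assign x to the label whose centroid is nearest.
    return min(centroids, key=lambda lab: abs(x - centroids[lab]))

print(classify(1.05))  # prints A

# Unsupervised learning: no labels at all; split the data at its largest gap
# to form two clusters purely from the structure of the values themselves.
unlabeled = sorted([1.0, 0.8, 5.1, 4.9])
split = max(range(1, len(unlabeled)),
            key=lambda i: unlabeled[i] - unlabeled[i - 1])
clusters = [unlabeled[:split], unlabeled[split:]]
print(clusters)  # prints [[0.8, 1.0], [4.9, 5.1]]
```

Real systems replace the centroids with deep networks and the gap-split with algorithms like k-means, but the division of labor is the same: the first half cannot run without human labels, the second half can.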
Deep learning takes the process a step further by enabling independent learning in a more continuous and scalable way. Rather than requiring freshly labeled data streams for each implementation, deep learning allows AI to carry the parameters and patterns it has learned in one application over to new problems, an approach known as transfer learning.
Machines Thinking Dynamically
The components mentioned earlier — machine learning and deep learning — make it possible for artificial intelligence to think beyond the confines of its programming. Combined with neural networks — computational models loosely inspired by the human brain — AI can branch out to new implementations and work on solutions to more problems.
This is, essentially, a machine thinking dynamically, much as we do. The nature of machine learning — which requires human input — means AI learns things in much the same way we humans do, albeit at a far faster rate.
So, is AI operating inefficiently because it mimics how humans think? Does our approach to developing artificial intelligence restrict how it can grow? Answering these questions is not as easy as it seems.
When fed data streams that reflect human creativity, AI can learn to be creative itself. In fact, we already have AI systems capable of creating art, solving problems in unrestricted ways, and even mimicking the way we communicate with each other. The Google Duplex demo, in which the AI used fillers like "um" and "ah" during a phone conversation with a local business, was incredible indeed. However, the approach also has a downside.
Bias in AI
That brings us to our next point: AI is becoming biased in the same way humans are. Since the learning process of an AI system begins with human operators feeding it data streams, the system develops biases based on the data it studies.
Experts believe there are two sources of bias in AI: biased learning data and a biased data gathering process. Biased learning data is closely tied to the human operators developing AI systems. This is a problem that is easy to state but difficult to fix: for the AI to be neutral, the human operators assisting its learning process need to be neutral. Unfortunately, people are rarely neutral, and even the slightest bias gets amplified over time.
The second source, biased data gathering, is even more complex, because neither the AI nor its human operators can fully recognize the bias as they collect more data. As with the previous issue, a slight taint in method or viewpoint gets amplified over time. Yes, AI learns dynamically, but it still follows a pattern one way or another.
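The amplification claim above can be illustrated with a minimal feedback-loop simulation. All numbers here are assumed and purely illustrative: a model's training data starts with a slight 52/48 skew toward group A, and each retraining round the model over-selects examples from the group that already dominates its data. Even a small per-round boost compounds the skew.

```python
def feedback_round(share_a, gain=0.2):
    # The model selects group-A examples at a rate boosted by `gain`
    # because group A already dominates its training data; renormalizing
    # the selection gives the new share of A in the next round's data.
    picked_a = share_a * (1 + gain)
    picked_b = 1 - share_a
    return picked_a / (picked_a + picked_b)

share = 0.52                       # assumed 2-point initial skew
history = [round(share, 3)]
for _ in range(10):                # ten retraining rounds
    share = feedback_round(share)
    history.append(round(share, 3))

# The share of group A climbs steadily from 0.52 toward 1.0.
print(history)
```

The point of the sketch is the shape of the curve, not the specific numbers: because each round's data collection is filtered through the previous round's bias, the skew grows monotonically even though no single round looks dramatically unfair.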
That brings us to what experts now accept as the working assumption of AI development: AI cannot be neutral. For an AI to be neutral, every component of its learning process would need to be neutral (and ideal), and that simply isn't achievable at this point.
Will this bias — the fact that AI mimics human thinking — hold back the growth of AI? My personal answer is no. After all, we are already far beyond what many believed possible, and more breakthroughs will surprise us in the near future.
Published at DZone with permission of Narendar Nallamala. See the original article here.