"I'm sorry, Dave. I'm afraid I can't do that."
Many readers will recognize that line from Stanley Kubrick's 2001: A Space Odyssey, a film in which the onboard computer, HAL 9000, perceives an astronaut to be a threat to its "existence" and refuses to open the airlock to let the crew member back into the ship. Other films, such as Ex Machina, I, Robot, and The Terminator, sow similar fears of artificial intelligence systems with cognitive capabilities taking control from humans and rendering us defenseless. Of course, there are also films that focus on the positive aspects of AI, such as Bicentennial Man.
My view is that AI systems are increasingly necessary to augment what we do in our everyday lives — whether that means...
...turning devices on or off, intelligently learning when and where to do so.
...repeating mundane tasks.
...giving us additional insights into human existence.
...guiding us toward better decisions.
So, why all the fear? Partly because there is so much misinformation and hype — and some people just like to sell fear, uncertainty, and doubt (FUD). And it's true that there will always be people who seek to exploit technology to do bad things — the dark side vs. the light side (Star Wars fans). Nonetheless, hype is a valuable part of the technology lifecycle. It allows us to consider use cases (sometimes extreme) that were not initially considered relevant.
What's clear is that machine learning in all its forms is here to stay. It has established its place in the world, and particularly in business: detecting and identifying trends and patterns faster and more often than humans alone ever could (learning and becoming progressively smarter as it goes); helping predict outcomes and taking action to prevent fraud; slashing the time it takes to design advanced cancer treatment and health programs (see Figure 1); anticipating terror attacks; recognizing business opportunities that might last only a moment; and ridding processes of personal bias and prejudice. I believe that machine learning and AI systems have the potential to make our world a safer and better place.
Even so, the ethical side of machine learning is increasingly called into question. The potential of machine learning and its application to all things AI means we need rules and controls: not to prevent progress, but to help manage and control how and when progress occurs.
Let's walk through some scenarios.
Machine Learning App Envy
What if machine learning algorithms are pitted against each other to win a battle, say a game of chess or another simulation? Not a big deal. An outcome could be defined as not losing to an opponent: establishing a win or at least a draw. But what if these machine learning systems were used against each other in a war? Human life, entire civilizations, and life itself are the stakes. Winning, in this case, could be defined as seeking an acceptable outcome while minimizing losses. That's why we humans need to be careful to avoid delegating 100% authority to an AI system in such situations.
Ending Life vs. Saving Life
There's general agreement that humans should have the final say where human life is concerned, but does that allow us to play "God" if an AI system demonstrates it can preserve a life even though a human may believe it is best to end it? While most humans would seek to preserve life, greed, personal bias, hate, and jealousy can be powerful dark forces that cloud judgment. Any decision involving an AI system must have an audit trail clearly showing the path to the outcome. After all, AI systems can also learn from these outcomes.
Non-Human Life Forms
Moving beyond humans, nothing stops us from applying AI to animal behavior. We have performed enough animal psychology over the years to think we understand animals. Would an AI system be better at training an animal? Given humanity's perceived superiority over and domination of all other species, would it be ethical to subject those species to AI? Again, we must consider under what circumstances AI can be used to make decisions over other life forms.
Conscience and Compassion
Today, limited by what we know of life, physics, and computing, AI systems are just computer models and simulations of human behavior. Could a network of AI systems have a conscience, even a simulated one? My personal feelings are unique to my life experiences, so what makes me happy or reduces me to tears differs from what moves other humans. Emotions are chemical reactions; AI systems are not. But what if AI systems could apply cognitive actions and outcomes to a bank of human chemicals in a controlled environment to learn about emotion? It is conceivable that an AI system could, therefore, develop a conscience and even compassion.
Big Religion
Big Religion is a phrase I hear more and more now that Big Data has become an established term. It means examining scriptures and religions alongside other sources of data and events, using the tools of science. It scares a lot of people because of the challenge it might pose to their belief systems. Some may also fear that it challenges the power and control associated with certain religious establishments. Nonetheless, this is happening today and can't be stopped. Humans inevitably seek to understand the universe and the world around us more deeply, challenging ourselves about what we perceive as the truth beyond our faith.
Spanning Cultural Divides and Value Systems
Diversity makes the world a fascinating place. It's one of the reasons many of us decide to vacation in different parts of the world: to experience other cultures, foods, traditions, and languages. In doing so we learn more about history and about different belief and value systems. I wonder whether AI systems built within different cultures, with different value systems, will behave differently from those built within other cultures. Consider a global AI system that embraces all of this diversity and difference. What might be the global impact on world leaders?
We Are One
With recent advances in nanotechnology, it's possible for nanobots to enter our bodies, even our bloodstreams, to attack viruses and potentially repair damaged bones and tissue. There may come a time when we can use nanotechnology to fight obesity or, more vainly, to enhance our looks and physical performance. If these nanobots remain in our bodies forever, do we become part human and part something else? There is a lot of research happening in this area.
Responsibility vs. Accountability
There are some things on which humans must have the final say. Checks and balances. How far are we prepared to go in delegating responsibility to an AI system? How well are the policies designed? Have some policies been designed, or even adapted over time, by machine learning? While machines could be responsible for sustaining or ending life, can they be held accountable, and if so, what are the legal implications? Today humans carry the burden of both responsibility and accountability. This aligns with our legal systems, but we can't put an AI system on trial; we just don't have the legal capacity to do that today. AI systems learn from human interactions, from the data we produce, and from the data they produce. Would that imply that many people would potentially be on trial should a legal case emerge involving AI systems? Could an AI system or its creators assert that the human legal system has no jurisdiction over it, or even that the legal system infringes the rights of the AI system? Ethics in this area are simply not mature enough today to give us clear answers. But it's only a matter of time before we encounter such situations.
Finally, we could ask whether it's ethical for AI systems to design and implement their own set of ethics. I guess my answer would be yes, provided humans remain involved and can override any final outcomes where decisions involving human life and welfare are concerned.
AI systems already augment what we do and the decisions we make today. The human species will push the boundaries of machine learning, cognitive computing, and AI systems beyond our current perceptions of their application, through both positive and negative exploitation, ultimately resulting in AI systems capable of achieving outcomes beyond our imagination. The ethics will only emerge as cases arise that test our legal systems, our value systems, and even our belief systems. Despite some of the FUD we read, I believe that machine learning can help our world become a smarter, safer, and better place for us and for future generations of both people and AI systems.