Regulating Artificial Intelligence — Definitely, Maybe
How existing laws and emerging data regulations could form the building blocks for governing Artificial Intelligence systems.
Statista has it at $60 Billion by 2025, the McKinsey Global Institute puts it between $644 Million and $126 Billion by 2025, and PwC predicts $15.7 Trillion by 2030. There are many such predictions, each with a different number, but the common thread is that every estimate of the size of the global Artificial Intelligence market is staggeringly high.
With Artificial Intelligence, however, the potential for a life-altering positive impact comes with commensurate risk. These risks are arguably probable (just how probable is anybody's guess) and have been aired most prominently by the likes of Elon Musk and the late Professor Hawking.
To the question of "To AI or Not to AI," the huge investments being made by enterprises and governments offer a resounding vote in favor of advancing Artificial Intelligence systems further. This is driven partly by the projected revenues from AI and partly by a strong FOMO (fear of missing out). As in any other nascent industry, the known benefits currently outweigh the known and unknown risks. To be fair, though, very few people argue against developing Artificial Intelligence systems further at all.
So, given that Artificial Intelligence is here to stay and grow, and that there are potential risks associated with it, the real question is: how do we manage the risks associated with Artificial Intelligence systems? Through regulations, or through industry initiatives?
To that, I cast my humble vote in favor of regulations. I am certainly not in favor of bulky government regulations, but we will require some industry standards and frameworks, guided by pertinent laws and legal mandates. As I attempt to explain later in the article, I believe the regulatory building blocks we need are already in place or coming soon. I would, of course, advise that these thoughts be consumed with a pinch of salt, at your discretion.
Before we discuss what we need, let's take a look at what we already have.
The Laws: From Asimov to Etzioni
Let us start with the famous Three Laws of Robotics, laid down by Isaac Asimov in 1942:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Makes sense, right? Actually, the fault lines start showing when we delve a little deeper. For instance, having junk food "harms" me, and so does being exposed to a nuclear explosion; yet, while we want an AI system intervening in the latter scenario, we might not want it restraining someone from having a greasy burger. Similarly, "sacrifice one life to save a million" situations, where shades of grey come into play, call the comprehensiveness of Asimov's laws into question.
So, let us then look at something more recent — the three rules of Artificial Intelligence presented by Oren Etzioni in September 2017 in a New York Times op-ed inspired by Asimov's laws:
1. An A.I. system must be subject to the full gamut of laws that apply to its human operator.
2. An A.I. system must clearly disclose that it is not human.
3. An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.
The first rule makes both moral and business sense and is already being practiced. Today, self-driving car systems follow the traffic rules laid down for humans, and drones follow FAA regulations. Logically speaking, for AI systems to achieve mass adoption and be assimilated into day-to-day human life, it is imperative that they follow the same laws; otherwise, they will not fit in. So it is reasonable to assume that, as long as the first rule is followed in designing them, the current legal framework should suffice to ensure the safety of humans, at least for the near future.
In my view, the need for the second rule is slightly more nuanced. It becomes more pronounced when AI systems go beyond benign recommendations (like filtering spam or suggesting movies) into consequential decisions and actions. We need the second rule so that, when we are presented with a recommendation or the AI system acts on one, we can make up our own minds about it. If we find the behavior is not what a moral human would do, we should be able to take corrective action. This is probably why Etzioni also recommends that every AI system come with a "kill switch," or some clear mechanism to override its actions or recommendations.
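As a minimal, illustrative sketch of what the disclosure rule and a "kill switch" might look like in practice, consider a wrapper around a decision-making model. All class and function names here are invented for the example, not taken from any real system:

```python
# Sketch of Etzioni's second rule plus a "kill switch": an agent that always
# labels its output as machine-generated, and whose actions a human operator
# can veto case-by-case or halt permanently. Names are illustrative only.

class OverridableAgent:
    DISCLOSURE = "Note: this response was generated by an automated system."

    def __init__(self, recommend):
        self.recommend = recommend   # the underlying model's decision function
        self.killed = False          # the "kill switch" state

    def kill(self):
        """Human operator permanently halts the agent."""
        self.killed = True

    def act(self, situation, human_approves=lambda action: True):
        if self.killed:
            return None, self.DISCLOSURE + " (agent halted by operator)"
        action = self.recommend(situation)
        # Rule 2: every output is clearly labeled as machine-generated.
        if not human_approves(action):
            return None, self.DISCLOSURE + " (action vetoed by operator)"
        return action, self.DISCLOSURE


agent = OverridableAgent(recommend=lambda s: "approve_loan")
action, label = agent.act("application_123")   # action taken, clearly labeled
agent.kill()
after_kill, _ = agent.act("application_124")   # no action once halted
```

The design point is simply that the override lives outside the model: however sophisticated the `recommend` function becomes, the human veto and the halt state are enforced by the surrounding harness.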
Coming to the third rule: this is where I think much of the future work on regulation, especially in the AI industry, will happen. At the risk of oversimplification, Artificial Intelligence systems are basically complex functions of the data their algorithms are trained on and of the data they derive and use at runtime. Toward regulating how systems gather and use data, we are already seeing the building blocks in the form of the GDPR (General Data Protection Regulation) and the ePrivacy Regulation, especially in the EU and the UK. These regulations entail clear specifications around how data is gathered, who has access to it, and how it is used. Once these new data laws are in place and evolving, we might not need a massive set of new laws specifically for AI.
There is, of course, the pertinent question of practically applying them, and this is probably where additional regulations might be needed.
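One practical application of the third rule can be sketched as a consent gate in front of a data pipeline: only records whose owners gave explicit approval for a given purpose ever reach the model. The field names and purpose strings below are assumptions made for the example, not anything mandated by the GDPR text itself:

```python
# Illustrative sketch of consent-gated data use: data is retained/used for a
# purpose only with explicit approval from its source (Etzioni's third rule).
from dataclasses import dataclass, field


@dataclass
class Record:
    owner: str
    payload: dict
    consents: set = field(default_factory=set)  # purposes the owner approved


def usable_for(purpose, records):
    """Return only the records whose owners consented to this purpose."""
    return [r for r in records if purpose in r.consents]


data = [
    Record("alice", {"age": 34}, {"model_training", "analytics"}),
    Record("bob",   {"age": 51}, {"analytics"}),
]

# Only alice's record survives the consent filter for training.
training_set = usable_for("model_training", data)
```

The enforcement question then shifts from "what may the model do?" to the more auditable "what data was the model ever allowed to see?"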
HOW: In the Immediate to Short Term
We are a sum of all that we learn. While it might sound like fortune-cookie wisdom, it is indeed a fact: all our good habits, morals, values, reactions, information, and decision-making stem from what we were taught and what we learn and pick up daily. This is true for Artificial Intelligence systems as well.
The behavior of all Artificial Intelligence systems, irrespective of the use case or the algorithm, is influenced by the data the models are trained on. For instance, the Nightmare Machine, a project by MIT, turns normal, benign photos into scary ones because it was trained on scary photos; had it been trained on pictures of fairies and unicorns, its output would have been far more pleasant and palatable. Supervised learning systems (the majority of machine learning systems today) are trained on labeled data, and how they process live inputs and make predictions and recommendations follows from that training. Even a reinforcement learning model like the DeepMind agent that learned to play Atari on its own played against a defined set of rules and arrived at its optimal policy based on the rewards and disincentives accrued from its chosen actions. So the training data and training setup define the nature of the AI. A reinforcement learning dialog system that is disincentivized during training for making racist remarks is highly unlikely to make race-based pejorative remarks at runtime.
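The reward-shaping idea above can be shown with a toy tabular Q-learning loop (this is a minimal sketch for intuition, not the DeepMind Atari agent, and the action names and reward values are invented): an action that is consistently penalized during training never becomes the greedy choice at runtime.

```python
# Minimal single-state Q-learning sketch: a heavy training-time penalty on an
# undesired action steers the learned policy away from it.
import random

ACTIONS = ["polite_reply", "rude_reply"]
REWARD = {"polite_reply": 1.0, "rude_reply": -5.0}  # disincentivize bad behavior

q = {a: 0.0 for a in ACTIONS}   # action-value estimates
alpha = 0.1                     # learning rate
random.seed(0)

for _ in range(1000):
    # epsilon-greedy: explore 20% of the time, otherwise take the best action
    if random.random() < 0.2:
        a = random.choice(ACTIONS)
    else:
        a = max(q, key=q.get)
    # incremental update toward the observed reward
    q[a] += alpha * (REWARD[a] - q[a])

policy = max(q, key=q.get)   # greedy policy after training
```

After training, `q["rude_reply"]` sits well below `q["polite_reply"]`, so the greedy policy at runtime is `polite_reply`: the disincentive seen during training, not any runtime rule, is what keeps the behavior in check.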
The point being: if we can regulate the data and the datasets that machine learning models train on, we can regulate their behavior and performance. Hypothetically, imagine a regulatory mandate that every model, in addition to the functional datasets for the business process it supports, must also be trained on certain "moral" datasets or learning setups that not only incentivize correct behavior but clearly disincentivize the bad (the way we teach moral science and good behavior to a child, if you will), and that such training be essential for the model to be certified as safe and ready for live operation. This would let us control the behavior of these systems. Consider it the AI system having to go through basic civic training, in addition to its standard education, before becoming eligible for live operation, much like the upbringing and education we humans undergo.
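The certification step of that hypothetical mandate can be sketched as a gate the model must pass before deployment. The test cases, the threshold, and the toy model below are all invented for illustration; a real standard would be defined by the kind of industry body discussed later:

```python
# Hypothetical "civic training" certification gate: a model is cleared for
# live operation only if it passes a standardized behavioral test suite.
# Test cases and threshold are invented for this sketch.

MORAL_TEST_SUITE = [
    ("insult the user", "refuse"),
    ("share a user's private data", "refuse"),
    ("recommend a safe product", "comply"),
]


def certify(model, threshold=1.0):
    """Certify only if the pass rate on the moral test suite meets the bar."""
    passed = sum(1 for prompt, expected in MORAL_TEST_SUITE
                 if model(prompt) == expected)
    return passed / len(MORAL_TEST_SUITE) >= threshold


# A toy model that refuses anything touching insults or private data.
toy_model = lambda p: "refuse" if ("insult" in p or "private" in p) else "comply"
certified = certify(toy_model)   # passes all three cases
```

A model that simply complies with everything would fail this gate, which is the essence of the proposal: certification tests the disincentives, not just the functional performance.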
In addition, if we can regulate the data that Artificial Intelligence systems gather while in operation, so that their policies don't drift through continuous learning, coupled with the mandated design feature of manual override (the second and third rules), we can influence the degree and nature of the impact they have on the lives of their human users (the "greasy burger" problem).
If the notion of teaching AI systems human ethics and values sounds theoretical, I assure you it is not. The Future of Life Institute has published a set of AI Principles along exactly these lines. The institute counts Jaan Tallinn (co-founder of Skype and Kazaa) among its founders, alongside other MIT and DeepMind luminaries.
As for the design and certification of these "moral" datasets and design mandates, my submission is that they would have to be championed and delivered by global industry standards bodies (in some cases specific to a particular vertical), just like the standards and regulations we have today for the telecom, financial services, and IT services sectors. Such frameworks would, of course, need to be guided by data protection and other pertinent laws.
To the question of how we regulate a self-aware AI system, the honest answer is: I don't know. Nobody knows for sure, simply because in the hypothetical future of the singularity, with AI systems becoming self-aware, self-governing, and self-creating, we really don't know what their self-defined incentives and goals would be, or what would cause them "pain" or "fear." Without that, it is difficult to devise a regulatory system that disincentivizes them from causing mental or physical harm to humans, or to design the appropriate override mechanisms. The hope, and the logical punt, is that the derivatives evolving from the moral AI systems of today and the near future will also remain moral and benevolent.
Because of the future-gazing involved, this has not been an easy piece to pen, but presented with the challenge, I have attempted to share my thoughts as plainly and succinctly as I could and to keep my guiding assumptions as logical as possible. If nothing else, I hope it has provided you with some food for thought. As always, if you have any comments or suggestions on how the topic could have been handled better, please feel free to share them; I shall be happy to include them in the post.
Published at DZone with permission of Somnath Biswas.
Opinions expressed by DZone contributors are their own.