AI Thoughts and Worries: A Metaphor
Even some of our tech-savvy friends are not quite sure what AI is. These are my thoughts on why some parts are just cool and why other parts might be scary.
Often, AI is rolled up into one large, homogeneous, conceptual ball of smart, mysterious, powerful, scary technology. In reality, AI is a range of disciplines starting from the mundane and simplistic to the immensely deep and convoluted brain-like networks. Certainly, our current state of AI encompasses some manner of cleverness. There are types of cleverness, but there are also classes of intelligence that we can categorize much like we compartmentalize the types of human intelligence.
Today, it is generally recognized that human intelligence can be categorized into different classes: musical, linguistic, logical-mathematical, etc. Here is an interesting infographic by Mark Vital:
The cleverness that we witness in today's technology is a moving target. For example, the aircraft autopilot from decades ago seemed smart, powerful, and futuristic. It could fly a smoother and more fuel-efficient course between airports. But today, you can buy a pocket-size drone that you can toss in the air and ask it to visually follow you (and make a video keeping you in the frame) all the while avoiding obstacles autonomously and even obeying your hand gesture commands. Yesterday's autopilot seems more like a thermostat (it simply maintained altitude and course) than a pilot by comparison. Many of these previously amazing computer-controlled intelligent behaviors have devolved in our collective opinion to be rather ordinary functions that we expect computers to do.
So here's the way I break things down. AI falls into several (admittedly) slightly overlapping categories:
This lowest level of AI technology looks at a data set or a signal input and reports a value corresponding to some characteristic of what it is observing. Another term for this might be sensors. At this level, I would include any instrument that measures something. Even something as simple as a mercury thermometer would qualify. Imagine living in 1715 and having Daniel Gabriel Fahrenheit show you an intelligent tube of glass that was smart enough to tell you not only that a pot of water was hot but exactly how hot it was. Today, we don't think of measurement instruments as being very intelligent, but they are all around us and too numerous to list (e.g., airspeed, radioactivity, light intensity, carbon monoxide concentration, gravitational waves). Even a simple ruler will give an intelligent answer to the question: What is the distance between point A and point B? We've long since forgotten the magic of these simple devices and just expect them to provide base-level information. Generally, most people do not consider these devices in any way dangerous: they just observe and report. I propose that one category of AI be called observers. An observer observes and reports a characteristic. The AI aspect of these observers is exhibited only in the subtlety and sophistication with which they observe. Today, many image-centric applications and almost all camera viewfinders automatically locate faces. The technology certainly involved massive amounts of abstract computation in order to learn (train) how to do that, but all it does is observe a picture and tell you how face-like each region is. In principle, it is not doing much more than a thermometer: it reports a single number representing face-ness at particular coordinates in an image.
In the same way, other high-level AI technologies such as speech recognition just report observations: given an array of numbers representing an acoustic waveform, they generate a report [word = "frog", probability = 79%]. All these systems, no matter how sophisticated, just observe and report a value. Certainly cool, but not very scary.
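To make the observer pattern concrete, here is a minimal sketch in Python. The `Observation` type and both observer functions are hypothetical illustrations, not real instruments or models; in particular, `speech_recognizer` just hard-codes the [word = "frog", probability = 79%] report from the text where a trained model would do actual inference.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """A single observer report: a label and a confidence."""
    label: str
    confidence: float  # 0.0 to 1.0

def thermometer(celsius_reading: float) -> Observation:
    # Even the simplest observer fits the pattern: observe, report a value.
    return Observation(label=f"{celsius_reading:.1f} C", confidence=1.0)

def speech_recognizer(waveform: list) -> Observation:
    # Stand-in for a real recognizer: a trained model would map the
    # acoustic waveform to a word and a probability. These values are
    # placeholders, not real inference.
    return Observation(label="frog", confidence=0.79)

report = speech_recognizer([0.0, 0.1, -0.2])
print(report)  # Observation(label='frog', confidence=0.79)
```

The point of the sketch is that a thermometer and a speech recognizer share the same interface: both observe an input and report a value, differing only in the sophistication of the observation.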
Actors are directed to perform some predetermined behavior. Like all good human actors, AI actors follow a script and, in addition, are expected to be aware of and adapt to the inevitable environmental variations that occur when interacting with props, the set, other actors, and the audience. Actors play a role. Actors do things. Since actors must be aware of their surroundings, they must include observer functionality, which informs them about their environment and allows them to modify their performance in order to successfully complete their task. They don't create their role: they only interpret it. Actors don't need to understand the actions they perform, but they must do a convincing job of seeming as if they do. Robert Downey Jr., playing the part of Tony Stark, probably has little to no knowledge of how to build an Iron Man suit or his AI assistant, Jarvis. But he behaves as if he does. Again, some things we take totally for granted today (things we don't believe are very intelligent at all) were considered amazingly smart contraptions when they were first introduced. Consider the thermostat in your house: it contains an observer to report the current temperature as well as the desired temperature setting. It also contains some logic upon which to act (is the temperature above or below the setting?). It uses reports from the thermometer and the control setting, and it acts by closing a switch that turns on the furnace. Now consider the fancy new thermostat that many people have in their homes: it has additional observers that report the day of the week and the time of day. It also remembers what the human requested at those different times, so it makes a more informed decision about how and when to heat your home. Some new thermostats include motion detection and can dial down the heat if no one is there.
If there isn't one already, I predict that very soon there will be a thermostat that does facial recognition and adjusts the temperature based on the specific occupant in the room. Even though these systems intelligently adapt and continuously learn by monitoring the situation, they have no interest in or ability to modify their agenda. Their sole goal is to manage the home temperature at the "right" value. A thermostat may make mistakes (e.g., thinking the Roomba moving around is a person), but it is not malicious. But who tells the actors what to do?
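The thermostat-as-actor described above can be sketched in a few lines. Everything here is a hypothetical illustration (the schedule format, the 3-degree setback, the default target), not any real thermostat's logic; the point is that the actor combines observer reports (temperature, time, occupancy) with a script (the schedule) to perform one role: close the furnace switch.

```python
class Thermostat:
    """An 'actor': combines observer reports with a script (the
    schedule) to decide whether to run the furnace."""

    def __init__(self, schedule):
        # schedule: hypothetical mapping of hour-of-day -> target temp (C)
        self.schedule = schedule
        self.furnace_on = False

    def step(self, current_temp, hour, occupied=True):
        # Observer reports come in as arguments; the script supplies
        # the target for this time of day (18 C if nothing scheduled).
        target = self.schedule.get(hour, 18.0)
        if not occupied:
            target -= 3.0  # dial down the heat when no one is home
        # The actor's whole role: close the switch when below target.
        self.furnace_on = current_temp < target
        return self.furnace_on

stat = Thermostat({7: 21.0, 22: 17.0})
print(stat.step(current_temp=19.0, hour=7))                  # True: below the 21 C morning target
print(stat.step(current_temp=19.0, hour=7, occupied=False))  # False: setback target is 18 C
```

Note that the actor adapts to its environment (time, occupancy) but never questions its agenda: the "right" temperature is whatever the script says.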
Screenwriters concoct the interactive scenarios that actors perform. They generate scripts that reasonably represent a real-world activity. They generate a model of plausible interactions, and that model is usually a self-contained and relatively uncluttered situation that is examined, explored, and finally resolved. Until recently, all of the devices that we would class as actors were programmed by human beings. Humans wrote some sort of procedural computer program, based on either Boolean logic or simple state machines, that governed how the device would react to all of the expected environmental variables. Today, we are applying increasing amounts of artificial intelligence to manage these behaviors. Some of these AI machine-learning approaches are employed just because they're cool and not that hard to implement (for instance, an intelligent thermostat doesn't really require a complicated AI brain). Sometimes, these techniques are applied because the control problem is completely intractable using conventional programming techniques (driving a car on a busy city street using lidar, sonar, and video inputs to observe its environment). As time goes on, we will use AI more and more as a screenwriter to author scripts for our actor devices. I imagine that Amazon is working on an AI screenwriter to write scripts for drones that will deliver packages to our homes. This AI will have to author scenarios for different kinds of weather, different kinds of packages, different kinds of destinations, and so on. The actors in these scripts will have to adapt and improvise during their performance, but their goal will simply be to perform their role convincingly: deliver the box.
Observers, actors, and screenwriters have a progressively richer involvement in addressing a goal. But even at the highest level, the screenwriter, the basic rules governing the activity are well defined and easily constrained. I believe the only time we should begin to worry is when our AI becomes a student: when the AI wants to learn things because those things seem interesting, and when the AI decides what is interesting. Of course, the student will have access to a number of sophisticated observers. The student will need to select the cast: the actors. The student should be able to use existing scripts to create a new screenwriter who can write plausible interactions between the observers, the actors, and the real world. The student will select and advise their new screenwriter on how to assemble the best script to achieve the goal, and the screenwriter will get better. Quite possibly, this student-level AI may be a little annoying and likely a bit disruptive, but not overtly dangerous. Danger arises when the student becomes an entrepreneur.
At this point, the metaphor runs off the rails and we approach an unknowable new world order. Entrepreneurs are clever, resourceful, well-informed, and, most importantly, profit-oriented (they strive relentlessly to maximize their reward function). Once a system is driven by a sense of compensation and understands that compensation equates to more resources with which to accomplish its goals, then I believe we have a problem worth worrying about.
Until our AI becomes a student and develops a curiosity about things it was not explicitly told to be curious about, humans will be safe. Even once a student begins to create its own goals out of things it has learned on its own, I don't think we need to worry very much. I don't think there's any way to stop researchers from opening AI's Pandora's box. But remember, no matter how much hype or sci-fi we are exposed to, we are nowhere near having students or entrepreneurs operating autonomously. Marvin Minsky advised Stanley Kubrick on the functionality of HAL for the movie 2001. That was 50 years ago! In 2018, we don't have any AI that is in any way comparable to the sophistication of HAL. We may have it in a decade or so... but that's what we all thought back in the lab in the 1960s when HAL was being imagined.
Telling people not to research these things may dampen some efforts, but it will not stop the efforts. I have dedicated much of my career to creating more intelligent, interactive, synthetic beings, and I always jump at the chance to join efforts to create these new systems. I enjoy this challenge and easily pour my heart and soul into it. However, I realize how hard the problem is and how far we are from achieving that goal.
Certainly, we must continue to worry about how humans can cobble together devices, actors, and scripts to accomplish bad things. I can easily imagine that covert military groups in a number of governments are working on things like a tiny drone with a camera and facial recognition (trained on my face?) carrying a microsyringe filled with a concentrated droplet of a food extract to which I am highly allergic. A highly targeted, subtle, and evil weapon. But it is just an actor (albeit in a horror movie). People are the biggest problem when technology misbehaves today.
Given enough time (the 50 years since HAL is two full generations), humans learn to live with and benefit from new technologies. Aircraft started out as dangerous and were immediately employed in war efforts, but once understood and applied intelligently to real problems, the aircraft industry became key to transportation in today's world.
I feel the same way about AI.