Doing Battle: Adversarial AI

Learn about new techniques that make it possible to induce specific hallucinations in your speech recognizer.


We all know that today's state-of-the-art AI can learn all sorts of things. It can learn to recognize kittens in YouTube videos or, perhaps more importantly, to characterize the traffic around an autonomous vehicle. All of us use speech at some point, talking to our phones, our Google Home, our TV remote, or our Amazon Alexa. That's thanks to excellent AI converting the audio signal representing our voice into words and higher-level semantic structures. Of course, AI does many other things, too: stock prediction, robot locomotion, delivery routing... there are just too many to list. But most of what we read about in the popular press involves vision and, occasionally, human speech.

Mostly, the stories we read are about positive and useful applications of this technology. These systems learn tasks that help us by offloading tedious work to the computer. But recently, a substantial amount of work has gone into using machine learning to learn how to fool other machine learning systems: how to convincingly lie to them. This type of research has become prevalent enough that it now has a name: adversarial AI. Adversarial AI was probably inevitable, given the normal tension in humanity between cooperation and competition. Early AI work focused on helpful, cooperative, and assistive applications of all the new machine learning techniques being discovered. But now that many of them work better than expected, some people are researching how to deceive them, how to make them fail.

In fact, this new adversarial AI is more insidious than simply trying to make a system fail. These adversarial systems are being developed to subtly alter normal inputs such that humans performing the same task can easily recognize the intended input (e.g., a furry kitten) while the AI is misled into giving a predictable and very different false output (e.g., an iguana). These kinds of systems are popping up all over, so I'm sure I will be writing about more specifics in the future, but for starters, let's focus on a very interesting adversarial speech recognition system.
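
To make that concrete, here is a minimal sketch of a targeted attack in the image domain, in the spirit of the fast gradient sign method. The pretrained classifier, the stand-in input, and the target class index are all assumptions for illustration, not the setup from any particular paper:

```python
import torch
import torchvision.models as models

# Minimal sketch of a targeted adversarial perturbation (FGSM-style).
# Assumed for illustration: a pretrained ImageNet classifier, a stand-in
# input tensor, and class 39 as ImageNet's "common iguana" index.
model = models.resnet18(pretrained=True).eval()

kitten = torch.rand(1, 3, 224, 224)  # stand-in for a real kitten photo
target = torch.tensor([39])          # assumed "iguana" class index

kitten.requires_grad_(True)
loss = torch.nn.functional.cross_entropy(model(kitten), target)
loss.backward()

# Step *against* the gradient of the target-class loss, so the model's
# output moves toward "iguana" while the image barely changes.
epsilon = 0.01
adversarial = (kitten - epsilon * kitten.grad.sign()).clamp(0, 1)

print(model(adversarial).argmax(dim=1))  # ideally, the target class
```

A single step this small rarely flips a real classifier to a chosen label; practical attacks iterate, but the principle is exactly this: tiny, gradient-guided changes that humans can't see.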

Most of us regularly speak to computers and expect them to understand a wide range of requests. I don't know about you, but I almost always use speech to ask Google Maps to navigate somewhere, ask Amazon (via Alexa) to order more printer ink, or ask Siri to call my wife at work (and perhaps craft a follow-up text if she doesn't answer). Most of us may not realize that very few algorithms supply most of the recognition for all of the applications that use speech. I don't think there are published numbers for how many utterances each cloud-based recognizer processes per day, but my guess would be that Google, Apple (Siri), and Amazon (Alexa) cover the lion's share among them. Several other cloud-based recognizers handle most of the balance, e.g., Microsoft (Cortana), Nuance (embedded in lots of places), and IBM Watson. The point is that a handful of unique speech recognizers do the majority of all utterance processing. This matters because the adversarial approach is an attack against a specific system: it discovers and takes advantage of small, unique flaws in that system's algorithms (think of the Death Star and those tiny vents to space).

The first adversarial systems merely added random, almost undetectable noise to the input signal and demonstrated that humans could still understand the intended speech perfectly well, while speech recognizers would come up with substantially different transcriptions. That was interesting but not particularly useful; it was a bit like defacing the Mona Lisa by painting on bushy eyebrows and a mustache. The next step is more interesting. Suppose you set up a machine learning environment in which the machine learned how to add noise to the signal in such a way that it would generate a specific, incorrect transcription? Well, surprise: that's exactly what researchers did. They used machine learning to learn how to deceive machine learning.
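
As a sketch of what that learning loop can look like, here is a toy version in PyTorch. The "recognizer" below is a deliberately tiny stand-in network and the token ids are hypothetical; only the shape of the optimization (adjust the noise alone, score it with a CTC loss against the attacker's chosen phrase, penalize loud noise) reflects the real technique:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in recognizer: waveform (1, 1, 16000) -> per-frame logits over
# 28 symbols (blank + 26 letters + space). Purely illustrative.
recognizer = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=400, stride=160),  # crude audio framing
    nn.ReLU(),
    nn.Conv1d(32, 28, kernel_size=3, padding=1),
)
recognizer.requires_grad_(False)  # the attack only trains the noise

waveform = torch.randn(1, 1, 16000) * 0.1  # stand-in for recorded speech
target = torch.tensor([[15, 16, 5, 14]])   # "open" under a=1..z=26 (assumed)

delta = torch.zeros_like(waveform, requires_grad=True)  # the learned noise
optimizer = torch.optim.Adam([delta], lr=1e-2)
ctc = nn.CTCLoss(blank=0)

for step in range(200):
    optimizer.zero_grad()
    logits = recognizer(waveform + delta)                 # (1, 28, frames)
    log_probs = logits.permute(2, 0, 1).log_softmax(-1)   # (frames, 1, 28)
    frames = log_probs.size(0)
    # Push the transcription toward the attacker's phrase while a penalty
    # keeps the peak of the added noise small.
    loss = ctc(log_probs, target, torch.tensor([frames]), torch.tensor([4]))
    loss = loss + 10.0 * delta.abs().max()
    loss.backward()
    optimizer.step()

adversarial = waveform + delta.detach()  # sounds like the original to us
```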

Nicholas Carlini and David Wagner recently published a paper about this: Audio Adversarial Examples: Targeted Attacks on Speech-to-Text (arXiv:1801.01944).

They make the claim "given any audio waveform, we can produce another that is over 99.9% similar, but transcribes as any phrase we choose".

This is a very technical paper, and it describes the mathematical techniques the authors use to minimize the distortion of the input waveform while guaranteeing the desired (false) output. If you're not that interested in all of the math, you can skim past the first five pages and read their observations and conclusions (which are relatively readable for the scientifically inclined lay reader).
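
One detail worth pulling out: the paper measures distortion in decibels relative to the source signal, so the added noise is quantified by how much quieter it is than the speech it hides in. Here is a small sketch of that relative-loudness measure, based on my reading of the paper (treat the exact formula as an assumption):

```python
import numpy as np

def db(x):
    # Peak loudness of a signal in decibels.
    return 20 * np.log10(np.max(np.abs(x)))

def relative_distortion_db(original, perturbation):
    # How much quieter the adversarial noise is than the original speech;
    # more negative means a less perceptible perturbation.
    return db(perturbation) - db(original)

# Hypothetical example: noise whose peak is ~1/1000 of the speech peak.
speech = np.random.randn(16000) * 0.5
noise = np.random.randn(16000) * 0.0005
print(relative_distortion_db(speech, noise))  # roughly -60 dB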

One observation that I found interesting was their investigation of whether compressing the audio would reduce the effectiveness of the desired deceit. (It's well known in adversarial image processing that JPEG compression and re-expansion severely damage the intended deceitful effect.) As it turns out, MP3 compression does not seem to affect the results of the deception. What will your Pandora playlist say to your Google Home?
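
If you want to sanity-check that claim yourself, a quick experiment is to round-trip an adversarial clip through MP3 and re-run the recognizer. Here is a sketch using the ffmpeg command-line tool; the file names are placeholders, and transcribe() is a stub for whatever recognizer you are testing:

```python
import subprocess

def mp3_roundtrip(wav_in: str, wav_out: str, bitrate: str = "128k") -> None:
    # Compress wav_in to MP3, then decompress back to WAV, via ffmpeg.
    subprocess.run(["ffmpeg", "-y", "-i", wav_in, "-b:a", bitrate,
                    "roundtrip.mp3"], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", "roundtrip.mp3", wav_out],
                   check=True)

def transcribe(wav_path: str) -> str:
    # Placeholder: call whatever speech recognizer you are testing here.
    raise NotImplementedError

if __name__ == "__main__":
    mp3_roundtrip("adversarial.wav", "roundtripped.wav")
    # If the attack survives compression, both calls return the same
    # (attacker-chosen) phrase.
    print(transcribe("adversarial.wav"))
    print(transcribe("roundtripped.wav"))
```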

I'm sure many of you are already thinking of the myriad ways this could be put to nefarious ends. Imagine if your local furnace company processed the audio of its innocuous TV commercial to tell your smart thermostat to turn off the heat and turn on the air conditioning. But, of course, our very good friend XKCD has already thought about this:

[Image: xkcd comic about creamed corn. Source: XKCD]

Just another thing to worry about. You're welcome.
