Hawking: Machine Learning AI Our "Biggest Existential Threat"

Artificial intelligence programs may be the future of data analysis thanks to machine learning -- and they could also become our greatest threat if weaponized.


This past Monday, Stephen Hawking, along with Tesla's Elon Musk, Apple's Steve Wozniak, Noam Chomsky, and Google DeepMind chief executive Demis Hassabis, signed an open letter warning of the threat weaponized artificial intelligence can pose to civilization. Other signatories include members of the IBM Watson team and Google Robotics researchers, as well as thousands of academics, researchers, and others who share the sentiment. At the time this article was published, there were 10,276 total signatures; AI and robotics researchers comprised 1,747 of that number.

The letter, posted on the Future of Life Institute's site, bases its concern on the fact that, unlike nuclear weapons, the materials needed to create artificial intelligence are easily obtainable.

"The key question for humanity today is whether to start a global AI arms race or to prevent it from starting," they state. "If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow."

Separately, Musk and Hawking have both voiced their concern about the danger artificial intelligence could bring to society. Musk has equated tampering with such technology to "summoning the demon" beyond our control, and Hawking told the BBC that the "development of artificial intelligence could spell the end of the human race."

But is the concern a little too much, too soon? Currently, the artificial intelligence employed by the military includes facial recognition machine learning, called Visual Media Reasoning (VMR). The program, developed by the Defense Advanced Research Projects Agency (DARPA), learns how to best identify a threat by mimicking what analysts do. VMR identifies a target from an overview, then zooms into the detail to filter information and retrieve the data it needs.

"The goal of DARPA's VMR program is to extract mission-relevant information, such as the who, what, where and when, from visual media captured from our adversaries and to turn unstructured, ad hoc photos and video into true visual intelligence," U.S. Army Research Laboratory's Dr. Jeff Hansberger said. 

So, it's not exactly Skynet. But it is a technology that has also been employed by the NSA, which, according to documents leaked by Edward Snowden, has been using facial recognition technology to scour the internet for criminal investigations. Eventually, that process could be automated by a machine learning program that matches pictures on its own, without human intervention.
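Automated matching of this kind is commonly framed as comparing learned feature vectors ("embeddings") of faces: two images match if their vectors are sufficiently similar. A minimal sketch of that idea, using purely hypothetical toy vectors and a made-up `match_faces` helper rather than any real system's descriptors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_faces(query_embedding, gallery, threshold=0.9):
    """Return IDs of gallery faces whose embeddings are close to the query."""
    return [face_id for face_id, emb in gallery.items()
            if cosine_similarity(query_embedding, emb) >= threshold]

# Toy vectors standing in for real face descriptors produced by a trained model.
gallery = {
    "person_a": [0.9, 0.1, 0.2],
    "person_b": [0.1, 0.8, 0.5],
}
print(match_faces([0.88, 0.12, 0.19], gallery))  # → ['person_a']
```

In a real pipeline, a trained neural network would produce the embeddings, and the threshold would be tuned to trade off false matches against misses -- exactly the judgment calls that removing human intervention makes consequential.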

As we've noted before on DZone, machines that can mimic the actions of humans could quickly take over the jobs of analysts, particularly when it comes to data analysis. Such technology is already in development and has proved largely successful, even coming with a human-like interface, as IPsoft's Amelia does.

To read the letter and add your signature, visit Future of Life Institute's page.

ai, artificial intelligence, big data, machine learning

Opinions expressed by DZone contributors are their own.
