
Paper Explores How an Evil AI Could Emerge

Adi Gaskell looks at a paper in which researchers detail their attempt to create a malicious intelligence and share their findings on how such a system could emerge.



It’s clear that when we develop so-called superintelligence, it will be capable of things that we can scarcely conceive. Indeed, it is largely these unknown unknowns that pose the biggest risk in the development of AI.

Nevertheless, this will not, and should not, stop us from attempting to mitigate some of the risks that would arise should AI go bad.

As part of this process, researchers have attempted to deliberately create a malicious intelligence. In a recently published paper, they explore the kind of environment that could see a malignant machine emerge.

Malignant Intelligence

They identify a number of factors, with the most important one being a lack of any kind of regulatory oversight, either inside the company or more widely.

“If a group decided to create a malevolent artificial intelligence, it follows that preventing a global oversight board committee from coming to existence would increase its probability of succeeding,” they say.

They go on to suggest that the success of this plan might be improved by downplaying the progress being made in their work, or the risks involved with highly developed AI. The overall aim would be to sow seeds of doubt in the public's mind about the dangers of AI.


A second significant condition would be closed source code. The authors suggest this is not only because of the lack of transparency it would afford, but also because closed systems tend to have lower security than open-source systems. This in itself would pose a significant risk.

Suffice it to say, there are equal risks involved in open source code: its transparency is a boon in that it opens the code up to many more eyes looking for improvements, but it does the same for crooks looking to manipulate matters.

Thus far, the majority of advances have come via closed systems from companies such as Google and IBM, with the primary open-source alternative being Elon Musk's OpenAI, which very much adheres to the recommendations outlined in the paper.

With Musk also having provided funding for the work of one of the authors of this paper, it could easily be argued that there is an agenda behind their work.

Whilst I wouldn't say that the paper provides a definitive answer, it will hopefully spark a conversation about just what the best way is to develop a technology likely to have a bigger impact on humanity than any that has come before.



Published at DZone with permission of Adi Gaskell, DZone MVB. See the original article here.

