It’s clear that when we develop so-called superintelligence, it will be capable of things we can scarcely conceive of. Indeed, it is largely these unknown unknowns that pose the biggest risk in the development of AI.
Nevertheless, this will not, and should not, prevent us from attempting to mitigate some of the risks that would arise should AI go bad.
As part of this process, researchers have explored what it would take to deliberately create a malicious intelligence. In a recently published paper, they examine the kind of environment that would see a malignant machine emerge.
They identify a number of factors, the most important being a lack of any kind of regulatory oversight, either within the company concerned or more widely.
“If a group decided to create a malevolent artificial intelligence, it follows that preventing a global oversight board committee from coming to existence would increase its probability of succeeding,” they say.
They go on to suggest that such a plan’s chances of success might be improved by downplaying both the progress being made and the risks involved with highly developed AI, with the overall aim of sowing seeds of doubt in the public’s mind about the dangers of AI.
A second significant condition would be closed source code. The authors suggest this is a risk not only because of the lack of transparency it affords but also because closed systems tend to have weaker security than open-source ones.
Suffice it to say, open source code carries risks of its own: the same transparency that opens a project up to many more eyes looking for improvements also opens it up to crooks looking to manipulate matters.
Thus far, the majority of advances have come via closed systems from companies such as Google and IBM, with the primary open-source alternative coming via Elon Musk’s OpenAI, which very much adheres to the recommendations outlined in the paper.
With Musk also having funded the work of one of the paper’s authors, it could easily be argued that there is an agenda behind it.
Whilst I wouldn’t say the paper provides a definitive answer, it will hopefully spark a conversation about the best way to develop a technology that is likely to have a bigger impact on humanity than any that has come before.