
OpenAI Withholds Release of New Language Model Because It’s Too Good


The company fears its new large language model, GPT-2, could be used to generate abusive or deceptive language at scale.


[Image: chat bot. Photo credit: Flickr/James Royal-Lawson]

Language models have come a long way since the first chatbots tried, and consistently failed, to sound human. As those systems have grown more sophisticated, it has started to take more than a glance to determine whether something was written by a machine. And as social media has become a tool of enemies both foreign and domestic, the risks of machine-generated, machine-spread disinformation loom large. That risk is what led OpenAI to make a highly controversial decision.

OpenAI, a non-profit artificial intelligence research company, chose to withhold the bulk of its new large language model, GPT-2, because it worked just a little too well. The organization made the following announcement:

“Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights,” the organization said last week on its blog.

Criticism was swift. One AI researcher, writing for Forbes, responded:

“Perhaps a new name such as ‘OpenAI (sometimes)’ might be more in line with its new mission. Its new stance goes against the general spirit of open-source which has been instrumental in driving the breakneck speed at which AI research moves.”

Indeed, OpenAI was conceived with the belief that “AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.” As the company further explained in its introductory blog post from 2015, “Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world.”

While OpenAI acknowledged the positive societal impact its technology could have, from more capable chatbots to improved language translation systems, the company also argued that it would be remiss not to consider the damaging alternatives.

Just how detrimental could these alternative uses be? Let’s look at an example.

When provided with a prompt, GPT-2 “adapts to the style and the content of the conditioning text…[allowing] the user to generate realistic and coherent continuations about a topic of their choosing.” Case in point:

Human-Written Prompt

“In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.”

Machine Response

“Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America. While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, ‘In South America, such incidents seem to be quite common.’”

All humor aside, it doesn’t take a whole lot of imagination to see the potential problem of such easily disseminated misinformation.
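Anyone curious can reproduce this kind of prompt-conditioned continuation with the smaller model that OpenAI did release. The sketch below is illustrative rather than OpenAI's own sampling code: it assumes the Hugging Face transformers library, whose "gpt2" checkpoint wraps the small released model, and uses top-k truncated sampling (k = 40), the strategy OpenAI describes for its published samples.

# A minimal sketch of sampling a continuation from the small GPT-2
# checkpoint via the Hugging Face transformers library (an assumption;
# this is not OpenAI's released sampling code).
from transformers import pipeline, set_seed

set_seed(42)  # fix the seed so the sampled continuation is reproducible
generator = pipeline("text-generation", model="gpt2")  # "gpt2" = the small released model

prompt = ("In a shocking finding, scientist discovered a herd of unicorns "
          "living in a remote, previously unexplored valley, in the Andes Mountains.")

# Sample one continuation with top-k sampling (k = 40).
result = generator(prompt, max_length=120, do_sample=True, top_k=40,
                   num_return_sequences=1)
print(result[0]["generated_text"])

Without a fixed seed, each run yields a different but similarly fluent continuation, which is precisely the scale concern: coherent text on an arbitrary topic for the cost of a function call.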

While OpenAI has yet to definitively decide against the full release of its code, it sees this delay as an important experiment. “Other disciplines such as biotechnology and cybersecurity have long had active debates about responsible publication in cases with clear misuse potential, and we hope that our experiment will serve as a case study for more nuanced discussions of model and code release decisions in the AI community.”

In the meantime, researchers can certainly glean insights from the company’s technical paper, which details its methodology for building a sufficiently large and diverse training dataset and the “surprising amount of tasks [the machine can learn to perform] without the need for explicit supervision.”

Topics:
artificial intelligence, natural language processing, open source, openai

