
Using a Crowd to Make Robotic Conversations More Natural

Even when responses are supposed to be context-specific, a machine may have far more information than it chooses to express in its dialogue.


Despite AI-based personal assistants getting progressively better in recent years, there is still much work to be done before machines are capable of having reasonable conversations with us.

A recent study by a team from Disney Research takes an interesting approach: it uses crowdsourcing to help robots improve their conversations.

Central to the project is a construct known as a persistent interactive personality (PIP), which translates high-level goals into simple narratives that summarize a situation.

These summaries were then presented to crowd workers, who were asked to produce an appropriate single line of dialogue for each situation (or a non-verbal response where more appropriate). A second pool of crowd workers then evaluated these lines, to finally arrive at an optimal pool of responses.
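This generate-then-evaluate loop can be sketched in a few lines of Python. Everything here is an illustrative assumption rather than a detail from the paper: the function names, the averaging of ratings, and the "keep the top N" cut-off are all invented for the sketch.

```python
from statistics import mean

def collect_candidates(summary, writers):
    """Stage 1: each crowd writer turns a situation summary into one line."""
    return [write(summary) for write in writers]

def select_best(candidates, raters, keep=3):
    """Stage 2: a second pool rates each line; the best-scoring lines survive."""
    scored = [(mean(rate(line) for rate in raters), line) for line in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [line for _, line in scored[:keep]]

# Toy usage: lambdas stand in for real crowd workers.
summary = "The player just answered a hard question correctly."
writers = [
    lambda s: "Wow, nobody ever gets that one!",
    lambda s: "Correct.",
    lambda s: "You're on fire today!",
]
raters = [lambda line: min(5, len(line) // 5), lambda line: 3]
best = select_best(collect_candidates(summary, writers), raters, keep=2)
```

The point of the two stages is that writing and judging are separated, so low-quality lines are filtered out before they ever reach the robot.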

A Richer Narrative

The team believes that this kind of approach can rapidly expand the range of expressions that can be meaningfully deployed by machines, whilst also providing a scalable way of expanding and updating their dialogue.

The recruited volunteers were asked to judge whether a response made sense, and then to score it for overall quality. They were also asked to highlight whether particular words needed emphasizing or whether expressions should be tinged with a particular emotion.
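A single evaluator's judgement could be captured in a simple record like the one below. The field names and the 1-to-5 quality scale are assumptions made for the sketch, not the schema used in the study.

```python
from dataclasses import dataclass, field

@dataclass
class LineJudgement:
    text: str                                             # the candidate line
    makes_sense: bool                                     # does it fit the situation?
    quality: int                                          # overall score, e.g. 1-5
    emphasized_words: list = field(default_factory=list)  # words to stress
    emotion: str = ""                                     # e.g. "excited"

j = LineJudgement(
    text="You're on fire today!",
    makes_sense=True,
    quality=4,
    emphasized_words=["fire"],
    emotion="excited",
)
```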

This approach is known as semi-situated learning: whilst responses may be context-specific, the machine may have far more information than it chooses to express in the dialogue.

In the experiment, the dialogue generated by the crowd was fed to a robot quizmaster, which was deployed both in an office and at a couple of public events. Players in each environment drew from the same pool of dialogue each time they played, and the office workers, who were instructed to play more often, were the most likely to have lines repeated. Despite this, the robot quizmaster varied its dialogue sufficiently that none of the office players heard the same line twice, even after playing the quiz over 30 times.
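One simple way to get that non-repetition behaviour is to track which lines each player has already heard and always pick an unheard one. This is a minimal sketch under that assumption; the real PIP system is considerably more sophisticated.

```python
class DialoguePool:
    """Serves lines of dialogue without repeating any for a given player."""

    def __init__(self, lines):
        self.lines = lines
        self.heard = {}  # player id -> set of line indices already played

    def next_line(self, player):
        seen = self.heard.setdefault(player, set())
        unused = [i for i, _ in enumerate(self.lines) if i not in seen]
        if not unused:           # pool exhausted: reset for this player
            seen.clear()
            unused = list(range(len(self.lines)))
        choice = unused[0]       # could also pick at random among unused
        seen.add(choice)
        return self.lines[choice]
```

With a pool large enough relative to the number of plays, no player ever hears a repeat, which matches what the office players experienced.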

“We didn’t expect people to like the quiz game quite so much,” the authors say. “PIP can notice if language with a particular user is getting repetitive; if we had let PIP use this feature, it would have updated its own dialogue model in response.”

The next stage is to expand things so that the PIP is used to work in a much broader range of areas than simply quiz games.


Topics:
robotics, machine learning, artificial intelligence, big data

Published at DZone with permission of Adi Gaskell, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
