Update to Opus Research’s Intelligent Assistance Landscape


So many intelligent agents, and so little time to investigate them, or even to count them! The team at Opus Research has digested this information for you.


Last week the team at Opus Research published an update to the Intelligent Assistance Landscape. This update represents the first major revision since the landscape was first published in partnership with VentureBeat last fall.

This new version includes updates to the industry players that populate various categories across the landscape. Opus has also refined the categories themselves. If you haven’t seen the landscape or had a chance to delve into it, here’s a quick synopsis.

Intelligent Assistance Landscape


The top half of the diagram identifies core technologies that enable intelligent assistance. Opus distinguishes two main groups of enabling technologies.

Conversational technologies underpin the natural language exchange between humans and machines. Speech I/O services facilitate the understanding of spoken words and enable machines to talk. Text I/O services support natural language input and understanding via text. This category can also include dialogue management services and chatbots. Avatars provide embodiment for intelligent agents, while emotion and sentiment analysis enable software to interpret and act upon knowledge of human emotions and context.

Intelligent Assistance technologies are the powerful core services that help machines understand meaning and intent and learn how to serve us better. These technologies include Speech Analytics, Natural Language Processing, Machine Learning, Semantic Search, and Knowledge Management.

The bottom half of the Intelligent Assistance Landscape provides a taxonomy for the various types of smart assistants. While the terminology used for these services is fluid, Opus Research has put a stake in the ground by establishing specific criteria for each category.

Opus defines Mobile and Personal Assistants as smart agents that understand us and whose primary purpose is to help us control the smart objects around us. Siri and Google Now, for example, activate functions on our mobile phones; Amazon's Alexa controls objects in the smart home; and in-car assistants control the features of our connected vehicles.

Personal Advisors focus on helping us manage complex tasks. These assistants tend to be more specialized, and they are generally product agnostic. A personal travel advisor, for example, can assist with planning and booking trips and suggest products and services from a wide array of providers.

Virtual Agents and Customer Assistants are customer-facing, self-service assistants. These assistants represent one company or brand. Their knowledge of the company's products and services is typically fairly broad, and they focus on answering the questions customers ask most frequently.

Employee Assistants help people do their jobs within an enterprise. These assistants are generally integrated with the enterprise software applications that employees rely on most, and they can also aggregate information to make it more readily available.
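The four categories above turn on a few recurring criteria: who the assistant serves, how broad its scope is, and whether it is tied to a single brand. A minimal sketch of that taxonomy as a data structure, using hypothetical field names not drawn from Opus Research's own definitions, might look like this:

```python
from dataclasses import dataclass

@dataclass
class AssistantCategory:
    name: str             # category name from the landscape
    serves: str           # who the assistant primarily helps
    scope: str            # rough breadth of tasks handled
    brand_specific: bool  # tied to a single company or brand?

# Hypothetical encoding of the four bottom-half categories
TAXONOMY = [
    AssistantCategory("Mobile and Personal Assistants", "consumers",
                      "controlling smart objects (phones, homes, cars)", False),
    AssistantCategory("Personal Advisors", "consumers",
                      "specialized complex tasks such as travel planning", False),
    AssistantCategory("Virtual Agents and Customer Assistants", "customers",
                      "self-service answers about one company's offerings", True),
    AssistantCategory("Employee Assistants", "employees",
                      "job tasks inside enterprise software", True),
]

def brand_specific_categories(taxonomy):
    """Return the names of categories that represent a single brand."""
    return [c.name for c in taxonomy if c.brand_specific]
```

Filtering with `brand_specific_categories(TAXONOMY)` picks out the two enterprise-facing categories, which illustrates how the landscape's criteria split consumer assistants from brand-affiliated ones.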

The domain of intelligent assistants is gaining increasing attention. The update to Opus Research’s Intelligent Assistance Landscape adds some insightful clarity around this complex topic.


Published at DZone with permission of Amy Stapleton, DZone MVB. See the original article here.
