
Conversational AI: Design and Build a Contextual AI Assistant


See how to design and build a contextual AI assistant.


Though conversational AI has been around since the 1960s, it’s experiencing a renewed focus in recent years. While we’re still in the early days of the design and development of intelligent conversational AI, Google quite rightly announced that we were moving from a mobile-first to an AI-first world, where we expect technology to be naturally conversational, thoughtfully contextual, and evolutionarily competent. In other words, we expect technology to learn and evolve.

Most chatbots today can handle simple questions and respond with prebuilt responses based on rule-based conversation processing: if the user says X, respond with Y; if the user says Z, call a REST API; and so forth. However, at this juncture, we expect more from conversations. We want contextual assistants that transcend answering simple questions or sending push notifications. In this series, I’ll walk you through the design, development, and deployment of a contextual AI assistant that designs curated travel experiences.

First, let’s talk about the maturity levels of contextual assistants as explained by their capabilities:

Conversational AI Maturity Levels

Level 1 Maturity: at this level, the chatbot is essentially a traditional notification assistant; it can answer a question with a pre-built response. It can send you notifications about certain events or reminders about things in which you’ve explicitly expressed interest.

For instance, a level 1 travel bot can provide a link for you to book travel.

Level 2 Maturity: at this level, the chatbot can answer FAQs and is also capable of handling a simple follow-up question.

Level 3 Maturity: at this level, the contextual assistant can engage in a flexible back-and-forth with you and offer more than prebuilt answers because it knows how to respond to unexpected user utterances. The assistant also begins to understand context at this point. For instance, the travel bot will be able to walk you through a few popular destinations and make the necessary travel arrangements.

Level 4 Maturity: at this level, the contextual assistant has gotten to know you better. It remembers your preferences and can offer personalized, contextualized recommendations or “nudges” to be more proactive in its care. For instance, the assistant would proactively reach out to order you a ride after you’ve landed.

Level 5 and beyond: at this level, contextual assistants are able to monitor and manage a host of other assistants in order to run certain aspects of enterprise operations. They’d be able to run promotions on certain travel experiences, target certain customer segments more effectively based on historical trends, increase conversion rates and adoption, and so forth.

Conversational AI Has Its Roots in NLP

Natural Language Processing (NLP) is an application of artificial intelligence that enables computers to process and understand human language. Recent advances in machine learning, and more specifically its subset, deep learning, have made it possible for computers to better understand natural language. These deep learning models can analyze large volumes of text and provide things like text summarization, language translation, context modeling, and sentiment analysis.

Natural Language Understanding (NLU) is a subset of NLP that turns natural language into structured data. NLU is able to do two things — intent classification and entity extraction.

When we read a sentence, we immediately understand the meaning or intent behind that sentence. An intent is something that a user is trying to convey or accomplish. Intent classification is a two-step process. First, we feed an NLU model with labeled data that provides the list of known intents and example sentences that correspond to those intents. Once trained, the model is able to classify a new sentence that it sees into one of the predefined intents. Entity extraction is the process of recognizing key pieces of information in a given text. Things like time, place, and name of a person all provide additional context and information related to an intent. Intent classification and entity extraction are the primary drivers of conversational AI.
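To make those two steps concrete, here is a toy, plain-Python illustration of intent classification and entity extraction. It uses a bag-of-words overlap score and a regex lookup purely for illustration; this is not how Rasa NLU works internally, and the example intents and patterns are invented for this sketch.

```python
import re

# Toy labeled data: known intents with example sentences. A real NLU
# model is trained on many more labeled examples than this.
INTENT_EXAMPLES = {
    "request_vacation": ["i want to go on vacation", "help me plan my vacation"],
    "greet": ["hello", "hi there"],
}

# Toy entity patterns. A trained model learns to recognize entities
# instead of relying on hand-written regexes like these.
ENTITY_PATTERNS = {"location": r"\b(paris|london|tokyo)\b"}

def classify_intent(sentence):
    """Pick the intent whose examples share the most words with the sentence."""
    words = set(sentence.lower().split())
    scores = {
        intent: max(len(words & set(example.split())) for example in examples)
        for intent, examples in INTENT_EXAMPLES.items()
    }
    return max(scores, key=scores.get)

def extract_entities(sentence):
    """Find key pieces of information (entities) in the sentence."""
    entities = {}
    for name, pattern in ENTITY_PATTERNS.items():
        match = re.search(pattern, sentence.lower())
        if match:
            entities[name] = match.group(0)
    return entities

print(classify_intent("I want to go on vacation to Paris"))   # request_vacation
print(extract_entities("I want to go on vacation to Paris"))  # {'location': 'paris'}
```

The output of this sketch — an intent label plus a dictionary of entities — is exactly the kind of structured data that NLU hands off to the dialogue layer.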

For the purposes of this article, we will use Rasa, an open-source stack that provides tools to build contextual AI assistants. There are two main components in the Rasa stack that will help us build a travel assistant — Rasa NLU and Rasa Core.

Rasa NLU provides intent classification and entity extraction services. Rasa Core is the main framework of the stack; it provides conversation or dialogue management backed by machine learning. Assuming for a second that the NLU and Core components have been trained, let’s see how the Rasa stack works.

Let’s use a sample dialogue along these lines:

User: I want to go on vacation
Assistant: Fun! Let's do it. Who's going?
User: Just me

The NLU component identifies that the user intends to engage in vacation-based travel (intent classification) and that they are the only one going on this trip (entity extraction).

The core component is responsible for controlling the conversation flow. Based on the input from NLU, the current state of the conversation and its trained model, the core component decides on the next best course of action, which could be sending a reply back to the user or taking an action. Rasa’s ML-based dialogue management is context-aware and doesn’t rely on hard-coded rules to process conversation.
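The decision the Core component makes can be caricatured as a mapping from the latest intent plus the conversation state to a next action. The sketch below hard-codes that mapping with rules purely for illustration — Rasa Core learns it from example stories instead — and the action names mirror the templates we define in “domain.yml”:

```python
# A hand-written stand-in for a learned dialogue policy: given the latest
# intent and which slots are still empty, choose the next action.
# Rasa Core learns this mapping from example stories rather than rules.
REQUIRED_SLOTS = ["people", "location"]
SLOT_QUESTIONS = {"people": "utter_ask_who", "location": "utter_ask_where"}

def next_action(intent, slots):
    """Return the assistant's next action for the current conversation state."""
    if intent == "greet":
        return "utter_greet"
    if intent in ("request_vacation", "inform"):
        for slot in REQUIRED_SLOTS:
            if slots.get(slot) is None:
                return SLOT_QUESTIONS[slot]   # ask for the first missing slot
        return "utter_confirm_booking"        # everything filled: confirm
    return "utter_default"

slots = {"people": None, "location": None}
print(next_action("request_vacation", slots))  # utter_ask_who
slots["people"] = "1"
print(next_action("inform", slots))            # utter_ask_where
slots["location"] = "paris"
print(next_action("inform", slots))            # utter_confirm_booking
```

The advantage of the learned version over rules like these is exactly what the paragraph above describes: it generalizes to conversation paths you never wrote down.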

Installation and Setup

Now, let’s install Rasa and start creating the initial set of training data for our travel assistant.

Rasa can be set up in two ways. You can either install the Rasa stack with Python and pip on your local machine, or you can set it up using preconfigured Docker images. We’re going to install the Rasa stack using Python and pip.

If you don’t have Python installed on your machine, you can use Anaconda to set it up. Note that you need a Python 3.6.x version to run the Rasa stack; the latest version of Python (3.7.x at the time of this post) is not fully compatible.

Run the following command to install Rasa core:

pip install -U rasa_core

Install Rasa NLU by running this command:

pip install rasa_nlu[tensorflow]

Now let’s scaffold our application by cloning the starter pack provided by Rasa:

git clone https://github.com/RasaHQ/starter-pack-rasa-stack.git travel-bot

Once cloned, run these commands to install the required packages and the spaCy English language model for entity extraction:

pip install -r requirements.txt \
  && python -m spacy download en

At this point, we have everything we need to begin developing our travel assistant. Let’s take a look at the folder structure and the files that were created during the scaffolding process.

The “domain.yml” file describes the travel assistant’s domain. It specifies the list of intents, entities, slots, and response templates that the assistant understands and operates with. Let’s update the file to add an initial set of intents corresponding to our travel domain. Here’s a snippet:

intents:
  - greet
  - request_vacation
  - affirm
  - inform
...
entities:
  - location
  - people
  - startdate
  - enddate
...
slots:
  people:
    type: unfeaturized
  location:
    type: unfeaturized
...
actions:
  - utter_greet
  - utter_ask_who
  - utter_ask_where
  - utter_confirm_booking
...
templates:
  utter_ask_who:
    - text: "Fun! Let's do it. Who's going?"
  utter_ask_where:
    - text: "Perfect. Where would you like to go?"
...

The “data/nlu_data.md” file describes each intent with a set of examples that are then fed to Rasa NLU for training. Here’s a snippet:

## intent:request_vacation
- I want to go on vacation
- I want to book a trip
- Help me plan my vacation
- Can you book a trip for me?
...
## intent:inform
- just me
- we're [2](people)
- anywhere in the [amazon](location)
- [Paris](location)
- somewhere [warm](weather_attribute)
- somewhere [tropical](weather_attribute)
- going by myself
...
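The “[value](entity)” annotations in these examples are mechanical enough to read with a short script. The following function is my own illustration, not part of Rasa; it strips the markup and returns the plain text plus the annotated entities:

```python
import re

# Matches Rasa's markdown entity annotation, e.g. "[Paris](location)":
# group 1 is the entity value, group 2 is the entity name.
ANNOTATION = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")

def parse_example(line):
    """Return (plain_text, [(entity, value), ...]) for one training line."""
    entities = [(entity, value) for value, entity in ANNOTATION.findall(line)]
    text = ANNOTATION.sub(r"\1", line)  # keep the value, drop the markup
    return text, entities

text, entities = parse_example("anywhere in the [amazon](location)")
print(text)      # anywhere in the amazon
print(entities)  # [('location', 'amazon')]
```

This is the same information Rasa NLU extracts from the file when building its training set: the raw sentence and the labeled entity spans.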

The “data/stories.md” file provides Rasa with sample conversations between users and the travel assistant that it can use to train its dialogue management model. Here’s a snippet:

## vacation happy path 1
* request_vacation
    - utter_ask_who
* inform{"people": "1"}
    - utter_ask_where
* inform{"location": "paris"}
    - utter_ask_duration
* inform{"startdate": "2019-10-03T00:00:00", "enddate": "2019-10-13T00:00:00"}
    - utter_confirm_booking
* affirm
    - goodbye
...
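Conceptually, Rasa Core turns stories like this into (intent, next action) training pairs for its dialogue model. The simplified parser below illustrates that idea; it is not Rasa’s actual implementation, and it ignores the slot values in curly braces:

```python
def parse_story(lines):
    """Pair each user intent line (starting with '*') with the bot action
    lines (starting with '-') that follow it, as simplified training pairs."""
    pairs = []
    current_intent = None
    for line in lines:
        line = line.strip()
        if line.startswith("* "):
            # Drop slot annotations like {"people": "1"} for this sketch.
            current_intent = line[2:].split("{")[0]
        elif line.startswith("- ") and current_intent:
            pairs.append((current_intent, line[2:]))
    return pairs

story = """
* request_vacation
    - utter_ask_who
* inform{"people": "1"}
    - utter_ask_where
""".splitlines()

print(parse_story(story))
# [('request_vacation', 'utter_ask_who'), ('inform', 'utter_ask_where')]
```

In the real system, each pair is conditioned on the whole conversation history and slot state, which is what lets the trained policy generalize beyond the exact stories you wrote.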

Rasa provides a lot of flexibility in terms of configuring the NLU and core components. For now, we’ll use the default “nlu_config.yml” for NLU and “policies.yml” for the core model.

Run the following command to train Rasa NLU:

make train-nlu

Run the following command to train Rasa Core:

make train-core

We can now run the server to test Rasa through the command line:

make cmdline

The Rasa stack provides hooks to connect our assistant to various front-end channels like Slack and Facebook. Let’s configure and deploy our travel assistant to Slack.

Configure Slack

Let’s start by creating a new app in Slack.

  • Under features, go to “OAuth & Permissions”, add permission scopes like “chat:write:bot”, and save your changes
  • Go to “Bot Users”, add a bot user, and save your changes
  • Go back to “OAuth & Permissions” and install the app to your workspace
  • Copy the “Bot User OAuth Access Token” under tokens for your workspace
  • Back in the “travel-bot” folder on your local machine, create a new “credentials.yml” file and paste the token into it so it looks like this:
slack:
  slack_token: "xoxb-XXXXXXXXXXX"

We need to pass these credentials to Rasa. To make our lives easier, let’s update the “Makefile” under the “travel-bot” folder to add a new command called “start”:

...
start:
    python -m rasa_core.run \
    --enable_api \
    -d models/current/dialogue \
    -u models/current/nlu \
    -c rest --cors "*" \
    --endpoints endpoints.yml \
    --credentials credentials.yml \
    --connector slack

Caution: Based on your requirements, update the “--cors” setting before deploying your bot to production.

Let’s start the server by calling:

make start

Our travel assistant is now running on the local port 5005, and it’s configured to talk to Slack via REST APIs. We need to make these endpoints reachable to the outside world. Let’s use ngrok to make our server available to Slack. Ngrok helps in setting up a secure tunnel to our local server for quick development and testing. Once you have ngrok installed, run the following command in a new command-line terminal:

ngrok http 5005

Make a note of the “https” URL provided by ngrok. Back in the Slack app management UI:

  • Enable “Event Subscriptions”. In the request URL textbox, paste your ngrok URL and add “/webhooks/slack/webhook” to the end of that URL. Complete the event subscription by subscribing to bot events like “message.im” and save your changes
  • Optionally, enable “Interactive Components” and pass the same request URL that you used in the previous step. Be sure to save your changes

At this point, we have fully configured our assistant to interact with Slack. To test it, send your newly created travel bot a direct message in Slack.

What’s next?

In the next part of the series, we’ll dive deep into our NLU pipeline, look at custom components like Google’s BERT and the Recurrent Embedding Dialogue Policy (REDP), and explore concepts like context, attention, and non-linear conversation.


Published at DZone with permission of
