
Conversational AI: Design and Build a Contextual Assistant (Part 2)


In this post, we’ll look at structuring happy and unhappy conversation paths, and various machine learning policies and configurations to improve your dialogue model.


In the first part of this series, we introduced the different maturity levels of conversational AI and started building a travel assistant using Rasa. In this post, we’ll look at structuring happy and unhappy conversation paths, explore the machine learning policies and configurations that improve the dialogue model, and use a transfer learning-based language model to generate natural conversations.

What Can You Do, Coop?

Since the primary purpose of our assistant (let’s name it Coop) is to book awesome vacations, Coop requires a few key pieces of information from the user in order to do so. For the purpose of this article, let’s assume Coop only needs the number of people, the holiday destination, and the start and end dates of the vacation.

In the next iteration, we want to extract more information from the user with regards to their interests, budget, age, itinerary restrictions, and anything else we need to create curated travel experiences for them. Armed with this initial set of knowledge, let’s take a look at how we can enable Coop to gather this information through a natural conversation with the user.

Hold Information With Slots

Rasa uses slots to hold user-provided information, among other things. Slots, which are essentially key-value pairs, can be used to influence conversations with a user. The value of a slot can be set in several ways — through NLU, interactive cards, and actions. Rasa defines this as slot filling. Slots are defined in the “domain.yml” file. Each slot is given a name, type, and an optional initial value. Here’s a snippet from Coop’s domain.yml file:

...
slots:
  enddate:
    type: unfeaturized
  location:
    type: unfeaturized
  num_people:
    type: unfeaturized
  startdate:
    type: unfeaturized
...

Note that we set the type as “unfeaturized” for each slot as we don’t want the slot values to influence our conversation flow.

Slot Filling With Forms

In order to perform actions on behalf of the user, like we’re trying to do with Coop, we need to fill multiple consecutive slots, i.e. gather several key pieces of information. We can do this using a FormAction. A FormAction is essentially a Python class. It takes a list of slots that need to be filled (questions that need to be answered) and asks the user for the information required to fill each slot. Note that the FormAction only asks the user for information to fill slots that are not already set.

Let’s take a look at a happy path. A happy path is where the contextual assistant is able to gather the information it needs from the user without interruption. In other words, the user answers the questions without deviating from their path as shown below.

A happy path conversation with Coop.

In order to enable the FormAction mechanism, you want to add the “FormPolicy” to your config file, typically named “config.yml”:

...
policies:
  - name: FormPolicy
...

Next, let’s define our custom form class:

from typing import Any, Dict, List, Text

# these imports assume the rasa-sdk package that ships with Rasa 1.x
from rasa_sdk import Tracker
from rasa_sdk.executor import CollectingDispatcher
from rasa_sdk.forms import FormAction


class BookingForm(FormAction):
    def name(self):
        # type: () -> Text
        """Unique identifier of the form"""
        return "booking_form"

    @staticmethod
    def required_slots(tracker: Tracker) -> List[Text]:
        """A list of required slots that the form has to fill"""
        return ["num_people", "location", "startdate", "enddate"]

    def submit(self,
               dispatcher: CollectingDispatcher,
               tracker: Tracker,
               domain: Dict[Text, Any]) -> List[Dict]:
        """Define what the form has to do
            after all required slots are filled"""
        dispatcher.utter_template('utter_send_email', tracker)
        return []

As you can see from the snippet above, our custom form class defines three methods: name returns the form’s unique identifier, required_slots lists the slots the form has to fill, and submit defines what the form does once all the required slots are filled.

Now let’s tell our model to invoke the booking form action. Open your stories.md file and add one or more happy path stories:

...
* request_vacation
    - booking_form
    - form{"name": "booking_form"}
    - form{"name": null}
...
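One more piece of wiring is worth calling out. Assuming a Rasa 1.x setup like the one used here, the form also needs to be declared in “domain.yml” under the forms key so that Core knows it exists:

...
forms:
  - booking_form
...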

Slots and Entities: The Best Type of Relationship

We’ve defined several interesting slots in Coop. The “location” slot is used to hold information about the user’s vacation destination. We’ve also defined an entity with the same name. Rasa NLU will fill the “location” slot when it identifies the “location” entity in the user’s message.

In order for it to do so, we need to train the NLU model to extract the location entity. While this is pretty exciting, training the NLU model to identify generic entities like location is a time-consuming process that requires a lot of data.
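To give a sense of what that involves, entity annotations in Rasa’s Markdown training data format look something like the following; the sentences are purely illustrative and not part of Coop’s actual training data:

## intent:request_vacation
- I want to book a vacation to [Paris](location)
- plan a trip to [Bali](location) for two people
- can we go on holiday to [Lisbon](location) in July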

This is where the out-of-the-box “SpacyEntityExtractor” component of the Rasa NLU pipeline comes to our rescue. This pre-trained component is a named-entity recognizer that identifies various common entities (called dimensions) like person, organization, cities, and states.

Let’s take a look at how we can hook into this component to fill our location slot. We begin by adding “SpacyEntityExtractor” component to our NLU pipeline. Edit the “config.yml” file.

language: en
pipeline:
...
- name: "SpacyEntityExtractor"
  dimensions: ["GPE", "LOC"]

Rasa offers a method called “slot_mappings” in the FormAction class that can be used to further configure the way slots are filled. In our case, we can use this method to ensure that the “location” slot gets filled in the following order:

  1. Use the “location” entity identified by our NLU model

  2. If step 1 fails, use the “GPE” dimension identified by “SpacyEntityExtractor”

  3. If step 2 fails, use the “LOC” dimension identified by “SpacyEntityExtractor”

def slot_mappings(self):
    # type: () -> Dict[Text, Union[Dict, List[Dict]]]
    """A dictionary to map required slots to
        - an extracted entity
        - intent: value pairs
        - a whole message
        or a list of them, where the first match will be picked"""
    return {"location": [self.from_entity(entity="location"),
                         self.from_entity(entity="GPE"),
                         self.from_entity(entity="LOC")]}

You can read more about the predefined slot mapping functions in the Rasa documentation.
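For example, if we later wanted the “num_people” slot to be filled from duckling’s “number” entity, with the raw text of the user’s reply as a fallback, we could add an entry like the following to the dictionary returned by slot_mappings (a sketch, not part of Coop’s current code):

"num_people": [self.from_entity(entity="number"),
               self.from_text()]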

The other interesting slots in Coop’s domain are “startdate” and “enddate”. As the names suggest, these slots represent the user’s choice for their vacation start and end dates. Instead of training our NLU model to identify and extract this data and potentially solve for entity disambiguation along the way, we can use the “DucklingHTTPExtractor” component. This pre-trained component is a named-entity recognizer that identifies various common entities like time, distance, and numbers. Similar to how we configured the “SpacyEntityExtractor”, the “DucklingHTTPExtractor” component should be added to our NLU pipeline. Edit the “config.yml” file.

language: en
pipeline:
...
- name: "DucklingHTTPExtractor"
  url: http://localhost:8000
  dimensions: ["time", "number", "amount-of-money", "distance"]

As you can see from the config above, a duckling server is expected to be running at the specified host and port. You can use Docker to run the duckling service.
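If you have Docker installed, one way to start a local duckling server on the port configured above is to run the image published by Rasa:

docker run -p 8000:8000 rasa/duckling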

Note that the FormAction class allows us to define custom validations that can be used to validate user-provided information. For example, we want to ensure that the start date is earlier than the end date. These validation methods should be named based on a convention. If you have a slot named “enddate”, you want to define a method named “validate_enddate” for it to be called by Rasa.

def validate_enddate(self,
                     value: Text,
                     dispatcher: CollectingDispatcher,
                     tracker: Tracker,
                     domain: Dict[Text, Any]) -> Optional[Text]:
    """Ensure that the start date is before the end date."""
    try:
        startdate = tracker.get_slot("startdate")
        startdate_obj = dateutil.parser.parse(startdate)
        enddate_obj = dateutil.parser.parse(value)
        if startdate_obj < enddate_obj:
            return value
        else:
            dispatcher.utter_template('utter_invalid_date', tracker)
            # validation failed, set slot to None
            return None
    except (TypeError, ValueError, OverflowError):
        # the start date is missing or one of the dates could not be parsed
        print("Could not parse the provided vacation dates")
        return None


Unhappy Paths Are the Rule, Not the Exception

FormActions are really useful for gathering information from a user and performing actions on their behalf. But, as you know, user behavior can be unpredictable, and conversation is messy. There are 30,000 different ways you can ask about the weather; one of my favorites is when people say, “Is it going to be raining cats and dogs today?” Users rarely provide the required information without digressing, engaging in chitchat, changing their minds, correcting their answers, asking follow-up questions, and so forth. All of these are valid, expected, and need to be handled if you’re building powerful conversational AI.

These deviations are known as unhappy paths. I highly recommend using interactive learning through the CLI to train your model to handle unhappy paths.

Here’s the command to run Rasa in interactive mode:

rasa interactive --endpoints endpoints.yml

Once you’re done, you can save the training data and retrain your models. Here’s an example of an unhappy path story.

* request_vacation
    - booking_form
    - form{"name": "booking_form"}
    - slot{"requested_slot": "num_people"}
...
* form: inform{"location": "paris", "GPE": "Paris"}
    - slot{"location": "paris"}
    - form: booking_form
    - slot{"location": "paris"}
    - slot{"requested_slot": "startdate"}
* correct{"num_people": "2"}
    - slot{"num_people": "2"}
    - action_correct
    - booking_form
    - slot{"requested_slot": "startdate"}
* form: inform{"time": "2019-07-04T00:00:00.000-07:00"}
...

Rasa provides multiple policies that can be used to configure and train its Core dialogue management system. The “Embedding” policy, also known as Recurrent Embedding Dialogue Policy (REDP), can be used to efficiently handle unhappy paths. In addition, it provides hyperparameters that can be used to fine-tune the model. You can read more about REDP here.

I used the embedding policy for Coop.

policies:
  - name: EmbeddingPolicy
    epochs: 2000
    attn_shift_range: 5
...

Now, let’s take a look at an unhappy path that involves correction and explanation. A correction is when a user revises a previous answer or statement, for instance because they got their phone number wrong and wish to fix it. An explanation is requested when the user wants to know the reason behind the assistant’s questions.

Powerful conversational AI that can handle unpredictable behavior.

In the example above, notice how Coop guides the conversation back to the topic at hand. The dialogue model learns to handle corrections and provides an explanation for why it needs certain information, essentially bringing the user back onto a happy path.
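To give a rough idea, a story covering an explanation interjection during the form might look like the sketch below; the “explain” intent and the “utter_explain_why_dates” response are hypothetical names used for illustration, not part of Coop’s current training data:

...
* explain
    - utter_explain_why_dates
    - booking_form
    - slot{"requested_slot": "startdate"}
...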

NLG Is a Super Power

There are two significant areas of natural language processing (NLP) that come into play with conversational AI. First, there’s the aspect of understanding what the user says: what is the user’s intent? Second, there’s the aspect of generating responses to the user in a way that’s natural and conversational. The ultimate goal of natural language generation (NLG) is to teach models to turn structured data into natural language, which we can then use to respond to the user in a conversation.

Admittedly, you can create personas and write excellent conversation to make your assistant sound naturally conversational. But that may require writing a lot of stories and rules. While rules are great, stable, and predictable, they require a lot of engineering and can be hard to scale and maintain. They also lack the spontaneity and creativity that you find in human conversation.

By training a large-scale unsupervised language model with data and examples of the language it needs to generate, the model eventually forms its own rules about what it’s supposed to do and has more free rein to be creative. I tweaked an existing transfer learning-based language model to generate small talk and chitchat. With more examples and data, this model can generate natural language to summarize text and answer questions without any specific task training.

NLG, aka computer-generated chitchat.

In the example above, notice how Coop continually guides the user back onto a happy path when they engage in chitchat. The dialogue model learns to handle narrow and broad context and to ignore superfluous information.

But C Is for Conversation

Writing good conversation is critical to conversational AI. Before you launch your contextual assistant to the outside world, you want to invest in writing clear and succinct copy that has the right tone of voice, relevant and contextual vernacular, and persona that resonates with your audience, in addition to having highly performant NLG. Good conversation can delight users, build brand loyalty, and ensure high user retention.

You also want to think about providing menu buttons and quick replies for the user to tap and trigger certain events in an effort to minimize user input. They’re a great way to suggest options and nudge the user onto a happy path.
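In Rasa, buttons can be attached to a response template in the domain file. Here’s a minimal sketch; it assumes a Rasa 1.x domain (where response templates live under the templates key) and uses the conventional utter_ask_location response that the booking form sends when asking for the “location” slot:

templates:
  utter_ask_location:
  - text: "Where would you like to go?"
    buttons:
    - title: "Paris"
      payload: '/inform{"location": "paris"}'
    - title: "Bali"
      payload: '/inform{"location": "bali"}'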

Writing copy for conversational AI is something that deserves focused attention and is beyond the scope of this article. Read more about writing bot copy here. Coop doesn’t have a lot of conversation or UI options at the moment, but as we gather real data, we’ll attempt to understand user behavior, further tweak our custom NLG model as it interacts with real users, focus on writing good copy, and make continuous improvements over the next several iterations.

What’s Next?

In the final part of this series, we’ll talk about various testing strategies that can be used to test and evaluate our models. We’ll also talk about deploying Coop to production, after which we’ll monitor and make continuous improvements.
