
Watson NLU-Based Q+A System for Online TV Guide

With this solution, you don't need to do any work to find out information like name, time, date, and channel for your favorite TV shows — the AI will do it for you!

Note: Previous experience with Python development is a bonus.

This recipe shows how to use Knowledge Studio to create a machine learning annotator model and use it to extract entities from questions about a TV guide service.

Requirements

- An IBM Bluemix account
- An IBM Watson Knowledge Studio account
- Python installed on your machine

Step-by-Step Instructions

Let's go through all the steps to build our project!

Create Project

It is assumed that you have an IBM Bluemix account. Sign in to your account, select Catalog, and search for Understanding. The service we are interested in is Natural Language Understanding.


Select the Natural Language Understanding service and create a new instance.

View Credentials

You will need to click on Service Credentials and then the View Credentials link to get the details that we need to populate the Python script.


Configure Machine Learning Annotator in IBM Knowledge Studio

As defined in the Requirements section, it is assumed that you have a Knowledge Studio account. Sign in to your account and launch Knowledge Studio.


Add Entities Type

The first step is to click Type System > Add Entity Type. We defined ten entity types; the four entity types with the prefix Req_ refer to the four question types (time, channel, date, and TV program).


Import Documents for Annotation

The next step is to click Documents > Import Document Set. These documents contain questions about the TV programs.



Create Annotation Set

The system needs a set of human annotators to identify all entities in each document containing questions. You must create an annotation set and assign a human annotator to it; finally, you will need to add annotation tasks to track the work of human annotators in Watson Knowledge Studio. For more information, please visit the Watson Knowledge Studio documentation.

Annotating Documents

Open the Test annotation task you created above and select each document containing questions. We have annotated all TV show names with the Program entity type, and information about time, date, and channel with Tiempo, Fecha, and Canal, respectively. Finally, requests for information about the time, date, and channel have been annotated with the Req_Tiempo, Req_Fecha, and Req_Canal entity types, respectively.


Create a Machine Learning Annotator Model

We trained and evaluated the machine learning annotator model with 70% and 23% of the questions, respectively, with the remaining 7% held out as a blind set.


Deploy the Machine Learning Annotator Model

Go to the machine learning annotator model's Details page and click Deploy > Natural Language Understanding.


Click on the NLU link to get the model ID that we need to populate the Python script (nlu.py).


Back-End on Python

We separated the back-end code into three Python scripts (nlu.py, tvguide.py, and Respond.py). nlu.py and tvguide.py are interfaces to the Watson NLU and TV guide services, respectively, and Respond.py is the service that answers questions about TV programs by combining the information returned by the nlu.py and tvguide.py scripts.


Use NLU Machine Learning Model

We created an interface to the Natural Language Understanding API in a Python script (nlu.py) and imported the class NaturalLanguageUnderstandingV1 from watson_developer_cloud.

Within the nlu.py script, in the function text_analisis, we set the username, password, and version variables with the credential information and the model variable with the model ID. This function extracts the entity information and returns a list of dictionaries with entity names and values.

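For reference, here is a minimal sketch of what nlu.py might look like. It assumes the watson_developer_cloud Python SDK; the username, password, and model values are placeholders for the credentials and model ID gathered in the previous steps, and the filtering of the response is illustrative.

# nlu.py - a minimal sketch, assuming the watson_developer_cloud SDK;
# credentials and the model ID are placeholders for the values gathered above.
from watson_developer_cloud import NaturalLanguageUnderstandingV1
from watson_developer_cloud.natural_language_understanding_v1 import Features, EntitiesOptions

username = 'YOUR_NLU_USERNAME'   # from Service Credentials
password = 'YOUR_NLU_PASSWORD'   # from Service Credentials
version = '2018-03-16'
model = 'YOUR_WKS_MODEL_ID'      # custom model deployed from Knowledge Studio

def text_analisis(question):
    """Send the question to NLU and return a list of {entity_type: text} dicts."""
    nlu = NaturalLanguageUnderstandingV1(username=username,
                                         password=password,
                                         version=version)
    # Ask NLU to run entity extraction with the custom Knowledge Studio model.
    response = nlu.analyze(
        text=question,
        features=Features(entities=EntitiesOptions(model=model)))
    # In this SDK version, analyze() returns the parsed JSON response.
    return [{entity['type']: entity['text']} for entity in response['entities']]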

Use Online TV Guide Service

We created an interface (tvguide.py) to get information about TV programs. We used a REST API service that gives information (name, time, date, and channel) about each TV show in the TV guide.

The function BuscarProgramas in the tvguide.py script receives keywords about TV shows — for example, barcelona and noticias — and returns a data structure with information about TV shows with their corresponding schedules.
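The actual TV guide API is not shown in this article, so the following sketch only illustrates the general shape of tvguide.py; the TVGUIDE_URL endpoint, the query parameter, and the JSON field names are hypothetical.

# tvguide.py - an illustrative sketch; TVGUIDE_URL, the query parameter,
# and the JSON field names are hypothetical placeholders for the real service.
import requests

TVGUIDE_URL = 'https://example.com/api/programas'  # placeholder endpoint

def BuscarProgramas(keywords):
    """Query the TV guide service and return matching programs with schedules."""
    programas = []
    for keyword in keywords:
        respuesta = requests.get(TVGUIDE_URL, params={'q': keyword})
        respuesta.raise_for_status()
        for programa in respuesta.json():
            # Keep only the fields needed to build the answer sentence.
            programas.append({
                'programa': programa.get('name'),
                'hora': programa.get('time'),
                'fecha': programa.get('date'),
                'canal': programa.get('channel'),
            })
    return programas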

Generate Responses

The response is generated by a Python script (Respond.py). This script sends the question given by the user to Watson NLU and receives a data structure with the detected entity types. The entity information is sent to the TV schedule service as keywords, and in return it receives a data structure with information about TV shows and their corresponding schedules.

The response is a concatenation of the TV show information and a predesigned sentence template. The result is a sentence such as "#TVshow saldrá el día #date a las #time por el canal #channel" ("#TVshow will air on #date at #time on channel #channel").
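Putting the pieces together, a minimal sketch of Respond.py could look like the following. It assumes the text_analisis and BuscarProgramas functions sketched above, the Program entity type defined in Knowledge Studio, and the answer template mentioned here; the responder function name is illustrative.

# Respond.py - a minimal sketch that combines the nlu.py and tvguide.py
# interfaces sketched above; the responder function name is illustrative.
from nlu import text_analisis
from tvguide import BuscarProgramas

# Predesigned answer template, filled with the TV show information.
PLANTILLA = u'{programa} saldrá el día {fecha} a las {hora} por el canal {canal}'

def responder(pregunta):
    """Answer a user question about the TV guide."""
    entidades = text_analisis(pregunta)  # e.g. [{'Program': 'noticias'}, ...]
    # Use the detected TV show names as keywords for the TV guide service.
    keywords = [texto for entidad in entidades
                for tipo, texto in entidad.items() if tipo == 'Program']
    programas = BuscarProgramas(keywords)
    # Fill the template with each matching program and its schedule.
    return [PLANTILLA.format(**programa) for programa in programas]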


And that's it!


Topics:
cognitive computing, ai, machine learning, tutorial, knowledge studio, python, watson, nlu

