Replacing Clunky Data Dashboards With Chatbots


How can we replace dashboards with chatbots?


Data dashboards have been a useful addition to the enterprise toolkit, enabling stakeholders to make data-driven decisions. However, they have a few drawbacks:

  • They require people to log in to access data and are not always remotely accessible.
  • Finding the right data in the dashboard takes time and effort, and fully leveraging it requires quite a bit of technical know-how.
  • There is a definite onboarding period before everyone is comfortable using them.

And owing to all that, most enterprise stakeholders simply stop using the dashboards altogether.

Now you've got a mass of well-structured data just going to waste because it's complicated to access.

One of our clients in the manufacturing sector was facing the same set of challenges, and we proposed that they replace their data dashboards with chatbots to access their data.

The chatbot was designed to deliver the following benefits:

Fast and Easily Understandable Information

The chatbot worked on an “asked-and-answered” approach. This eliminated the need to log onto a dashboard and manipulate filters and controls to find the data you need. The client could simply ask a question, and the bot would analyze the necessary data to give a clear answer.

For example, by asking the chatbot, “How well are Customer X’s machines performing?” the client exec could get an idea of:

  • All machines assigned to Customer X in any region
  • Performance metrics for each machine
  • A quick readout of which machines, in which locations, will soon need servicing

Here’s a look at what kinds of information the bot can deliver:

  • Equipment performance metrics
  • Equipment health
  • Savings potential
  • Which equipment to use in which scenario

Simplified Access to Information

The chatbot was made available via mobile apps and could be used anytime, anywhere to access the information required.

Because the bot is easy to use and does not require people to look at and analyze a lot of data, it has seen increased adoption, both by the client and their customers.

How We Did It 

The chatbots were made available as web and iOS applications and built entirely using AWS products. This was primarily because the client was already using AWS Cloud for hosting services and it made sense to build new solutions within the same ecosystem.

The Solution Architecture

[Architecture diagram: chatbot with AWS solutions]

Here’s how the complete chatbot workflow was designed to operate:

  • The user query input into the chatbot hits the API Gateway.
  • This is passed on to AWS Lambda, which determines where the query originates in order to infer its language.
  • Amazon Translate converts the query to English and passes it on to Amazon Lex.
  • Lex interprets the query and identifies the intent, i.e. the data that the user wants.
  • This is passed on to AWS Lambda, which queries the PostgreSQL database to find the right answer.
  • The final response is translated back into the original language by Amazon Translate.
  • This is passed on to Lex, which delivers it to the user in the app.
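In Lex's (V1) Lambda integration, the fulfillment function receives the recognized intent and slots in its event and must return a response in a fixed dialog-action format. The sketch below shows that routing step; the intent name, slot names, and reply text are illustrative assumptions, not the project's actual configuration.

```python
# Minimal sketch of a Lex (V1) fulfillment Lambda.
# Intent and slot names here are hypothetical examples.

def close(message):
    """Build the Lex V1 'Close' dialog action that ends the conversation."""
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        }
    }

def lambda_handler(event, context):
    """Route each intent recognized by Lex to the matching business logic."""
    intent = event["currentIntent"]["name"]
    slots = event["currentIntent"]["slots"]
    if intent == "EquipmentPerformance":
        # In the real bot this step would query PostgreSQL for metrics.
        return close(f"Fetching performance for region {slots.get('Region')}")
    return close("Sorry, I can't help with that yet.")
```

The same handler can serve every intent: each branch either returns a static reply or runs a database query, then hands the text back to Lex in the `dialogAction` envelope.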

AWS Solutions at Play

Amazon Lex

Amazon Lex is a conversational interface framework used to design a chatbot’s understanding of natural-language questions and to carry a conversation forward. Its deep learning capabilities are what enable chatbots to identify the intent behind a question, understand its context, and return an appropriate response.

The basic elements involved in designing the flow of interaction are utterances, intents, and slots. For this particular project, here’s how these elements were defined:

  • Utterances: These were sample questions that a user could possibly ask the chatbots. This data, along with several variations of every question, was fed into Lex. The training data helped Lex understand the range of possible natural language questions, and its deep learning capabilities allowed it to extrapolate from the given variations and understand new ones.
  • Intent: Lex was trained to strip each utterance down to a basic intent. This is the exact piece of information that the user wants to know. This key intent is passed forward and processed to extract the relevant answer.
  • Slots: These were values that helped qualify a particular utterance or intent. They were necessary for questions that needed additional data to be answered correctly.

For example, if the question is, “How many machines of a particular type are switched on?” the chatbot needs to ask follow-up questions about the region, site, time frame, etc. All of these — region, site, and date range — can be defined as slot values for the “equipment switched on” intent.

Training data around slot values, and possible permutations and combinations of them, was also fed into Lex. This allowed the chatbot to accurately qualify questions and spot new slot values when they occurred.
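As a rough illustration of how utterances, intents, and slots fit together, here is what the “equipment switched on” example could look like as a payload for the Lex (V1) model-building API. The intent name, utterances, and slot names are our own assumptions, not the client's actual model.

```python
# Hypothetical Lex (V1) intent definition for the example above.
# Placeholders in braces ({EquipmentType}, {Region}, ...) mark slot positions
# inside the sample utterances.
equipment_on_intent = {
    "name": "EquipmentSwitchedOn",
    "sampleUtterances": [
        "How many {EquipmentType} are switched on",
        "How many {EquipmentType} are running in {Region}",
        "How many {EquipmentType} were on at {Site} during {DateRange}",
    ],
    "slots": [
        {"name": "EquipmentType", "slotConstraint": "Required"},
        {"name": "Region", "slotConstraint": "Optional"},
        {"name": "Site", "slotConstraint": "Optional"},
        {"name": "DateRange", "slotConstraint": "Optional"},
    ],
}

# With AWS credentials configured, a payload like this could be registered
# programmatically, e.g. via boto3.client("lex-models").put_intent(...);
# in practice the model can also be built in the Lex console.
```

Required slots trigger follow-up questions automatically, which is how the bot gathers the region or date range when the user omits them.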

Amazon Cognito

Amazon Cognito is a solution to securely manage user sign-up, sign-in, and access controls within an application. For the client, Srijan used Amazon Cognito and AWS Identity and Access Management (IAM) for creating users, storing user data, creating user groups, and defining access controls based on authorization.
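Sign-up against a Cognito user pool looks roughly like the sketch below. The client object is passed in so it can be stubbed in tests; the app client ID and attribute set are placeholders, not the project's real configuration.

```python
# Hedged sketch of user registration with Amazon Cognito user pools.
# app_client_id is a placeholder; a real value comes from the user pool's
# app client settings.

def register_user(cognito_client, app_client_id, username, password, email):
    """Create a new user in the Cognito user pool backing the chatbot app."""
    return cognito_client.sign_up(
        ClientId=app_client_id,
        Username=username,
        Password=password,
        UserAttributes=[{"Name": "email", "Value": email}],
    )
```

In production, `cognito_client` would be `boto3.client("cognito-idp")`, and group membership plus IAM roles would then gate which data each user group can query through the bot.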

AWS Lambda

AWS Lambda is a serverless computing platform that runs code in response to event triggers. It also automatically manages all the resources required to run and scale your code.

For this project, the user intent and slot values gathered by Lex were the triggers for Lambda. Based on these, Lambda generated an appropriate answer. The responses could be:

Static: Simple introductory messages or questions that can be answered without querying the database.

Business-logic based: All questions around asset performance are answered by querying the database, extracting the answer, re-formatting it into a user-understandable format, and passing it to the user via Lex.

For the client, all asset performance and associated data was stored in PostgreSQL. The business logic defined atop Lambda governed how the raw data from the database was computed and interpreted. That is how Lambda generated answers relevant to the user’s question, rather than returning bare data points.

For example, for a particular intent titled “equipment performance,” the business logic is to define three KPI values — high performance, good enough, and switched off — with each value spanning a given range of a performance metric. To answer this query, Lambda will pull this metric for all equipment and then sort them into the KPIs as per the defined business logic. And that’s the answer that the user receives: X machines are high performance, Y machines are good enough, and Z machines are switched off.
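The KPI bucketing described above can be sketched as a small pure function. The threshold, field names, and bucket labels are illustrative assumptions; the real thresholds were defined with the client.

```python
# Hypothetical sketch of the "equipment performance" business logic.
# Assumes a 0-100 performance metric; the 80.0 threshold is an example.

def classify_equipment(machines, high_threshold=80.0):
    """Sort machines into the three KPI buckets by their performance metric."""
    buckets = {"high performance": 0, "good enough": 0, "switched off": 0}
    for machine in machines:
        metric = machine["performance"]
        if metric == 0:
            buckets["switched off"] += 1
        elif metric >= high_threshold:
            buckets["high performance"] += 1
        else:
            buckets["good enough"] += 1
    return buckets

def format_answer(buckets):
    """Render the buckets as the sentence the user receives from the bot."""
    return (f"{buckets['high performance']} machines are high performance, "
            f"{buckets['good enough']} are good enough, "
            f"{buckets['switched off']} are switched off.")
```

In the deployed bot, the list of machines would come from the PostgreSQL query, and `format_answer`'s output would be handed back to Lex as the fulfillment message.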

The business logic for all intents was defined in close consultation with the client depending on their operational needs.

Amazon Translate

Amazon Translate is a neural machine translation service powered by deep learning. Given that the client’s equipment and stakeholders were spread across different geographies, Translate was leveraged to make multilingual operations possible.

Translate worked with AWS Lambda to identify the geography where the utterance originated and determine the utterance's language. The utterance was then translated into English before being passed on to Amazon Lex.

Similarly, once the answer was generated by Lambda in English, it was passed through Amazon Translate to convert it back into the original language.
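The round-trip is two calls to the Amazon Translate `TranslateText` API. The wrapper names below are ours, and the client is injected so the sketch can run without AWS credentials; in production it would be `boto3.client("translate")`.

```python
# Sketch of the translation round-trip around Lex. The translate_text call
# shape matches the Amazon Translate API; the wrapper functions are ours.

def to_english(translate_client, text, source_lang):
    """Translate an incoming utterance to English before it reaches Lex."""
    resp = translate_client.translate_text(
        Text=text, SourceLanguageCode=source_lang, TargetLanguageCode="en")
    return resp["TranslatedText"]

def to_user_language(translate_client, text, target_lang):
    """Translate Lambda's English answer back to the user's language."""
    resp = translate_client.translate_text(
        Text=text, SourceLanguageCode="en", TargetLanguageCode=target_lang)
    return resp["TranslatedText"]
```

Keeping Lex's training data purely in English this way avoids maintaining a separate bot model per language.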

Amazon S3

Amazon S3 is a secure and scalable object storage service. Static equipment images and other graphs and charts generated to accompany chatbot answers were stored on Amazon S3. It was set up to communicate only with AWS Lambda and was not accessible by other applications or direct queries.

In cases where the user wanted to view a particular equipment model or other visual data, AWS Lambda generated a temporary URL for the relevant image in S3. This was then passed on to Lex, which displayed the image to the user.
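The standard way to hand out a temporary S3 link is a presigned URL, which expires after a set interval; the sketch below assumes that is the mechanism used. The bucket and key names are placeholders, and the client is injected so the example runs without AWS access.

```python
# Sketch of serving an equipment image via a short-lived S3 URL.
# In production s3_client would be boto3.client("s3"); bucket and key
# names here are placeholders.

def equipment_image_url(s3_client, bucket, key, expires=300):
    """Return a presigned URL for an equipment image, valid for `expires` seconds."""
    return s3_client.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,
    )
```

Because the URL expires quickly, the bucket itself can stay locked down to Lambda while users still see the image inline in the chat.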

All of this together formed a chatbot that put asset performance data at the fingertips of our client’s execs.

How about you? Are you working on similar chatbot projects for enterprises? We would love to hear in the comments whether you approach certain challenges differently, and about the tools, platforms, and solutions you use to build chatbots.


Published at DZone with permission of Gaurav Mishra. See the original article here.
