How to Build a Pizza-Delivering Bot With Mule, Slack, API.AI, and NLP in Minutes
Learn how to use natural language processing and completely free software to build a fully functional conversational bot that can deliver you pizza!
Is it possible to create a fully functional conversational bot with natural language processing (NLP) in minutes?
Good news — it is!
In this post, we will show you how to start from the ground up, giving you everything you need to create your own basic conversational bot using 100% free accounts and software (yes, all of this is powered for free).
The free accounts and software shown in this post have some limitations; this post is designed to help you understand the basics of:
- Natural language processing APIs from API.AI.
- Integration with Slack using the Real-Time Messaging API (RTM).
- Creating conversation intents and entities in API.AI.
- Creating a simple software integration in the backend using Anypoint Studio and Anypoint Runtime Manager.
This guide should take you no more than 30 minutes to complete — that is, if you have some previous experience with Anypoint Platform. If you don't, then it should take you less than an hour — which is nothing when you consider what we're about to accomplish.
Let's begin with what we need before we start:
- Anypoint Studio latest version (at the time of this post it was 6.2.4).
- Anypoint Platform trial account.
- API.AI account, and the Agent's Client Access Token.
- Slack bot and its access token. To create a bot and get the access token, follow Slack's guide, which is a simple walkthrough.
Once we have the Anypoint Platform and API.AI accounts created, Anypoint Studio downloaded and installed, and the Slack connector installed in Anypoint Studio, we can proceed with getting the corresponding authorization tokens we will need along our integration (we do this now to keep them close, so we don't interrupt the flow of this guide). Save these in a text file or in a temporary text editor to later copy and paste them in the places we will need them.
API.AI's Client Access Token is really easy to get, but first, we need to create an Agent to talk to.
An Agent is the bot or "person" we will be talking with through Slack. We need a way to authenticate with API.AI's API (OK, that's a lot of I's) and let it know we are a trusted client; that's why we need the Client Access Token.
- Once you are logged in to API.AI, we will proceed to create a sample Agent.
- To do this, just click on the Create Agent button on the main screen or click on your agent's name on the left sidebar and then click Create New Agent.
- Give your agent a name, a description, a language, and a time zone. In my case, I also selected some sample data to initialize this new agent with to make the process faster (we can edit this sample data later). For this guide, I chose the PizzaDelivery sample data.
- After the Agent is created, press the gear icon next to the Agent's name on the top left sidebar and copy the long alphanumeric string into our temp text file so we can use it later.
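To sanity-check the Client Access Token before wiring up Mule, you can exercise API.AI's v1 `/query` endpoint directly. Below is a minimal sketch in Python that only builds the request (it does not send it); the token and session ID are placeholders, and `v=20150910` is API.AI's v1 protocol-version parameter:

```python
import json
import urllib.request

# Assumption: replace this placeholder with your Agent's real token.
CLIENT_ACCESS_TOKEN = "YOUR_CLIENT_ACCESS_TOKEN"

def build_query_request(text, session_id):
    """Build (but do not send) a POST request to API.AI's /query endpoint."""
    body = json.dumps({
        "query": text,
        "sessionId": session_id,  # ties consecutive messages to one conversation
        "lang": "en",
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.api.ai/v1/query?v=20150910",
        data=body,
        headers={
            "Authorization": "Bearer " + CLIENT_ACCESS_TOKEN,
            "Content-Type": "application/json",
        },
    )

req = build_query_request("I want a pizza", "session-1234")
print(req.get_header("Authorization"))  # Bearer YOUR_CLIENT_ACCESS_TOKEN
```

Note the `Bearer ` prefix on the Authorization header; this is the same prefix the Mule project's configuration expects later.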
For the Slack part, we need to first create the bot in our team and then get the bot's access token.
If you don't have a Slack team or belong to one you can test bots with, then you can create a Slack team here. It's easy, and you can start using Slack in just a few steps. Then, follow this guide to create your bot.
If you belong to a Slack team already and want to create a bot there, please follow this guide to create one.
Once you have your Slack bot created, let's proceed to get the bot's access token, which will allow us to identify ourselves as the backend integration of that bot and enable us to receive and send messages in and out of Slack.
- Go to this link and log in with your Slack credentials, then select Bots from the list.
- Then, under Configurations, edit the bot configuration by selecting the pencil icon to the right.
- Copy the API token alphanumeric string into our temp text file so we can use it later.
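You can verify the copied token identifies your bot with Slack's `auth.test` Web API method, which echoes back the identity behind a token. A minimal sketch that only builds the request (the token value is a placeholder):

```python
import urllib.parse
import urllib.request

# Assumption: placeholder for the bot's API token you just copied.
SLACK_BOT_TOKEN = "xoxb-your-bot-token"

def build_auth_test_request(token):
    """Build (but do not send) a POST to Slack's auth.test method."""
    body = urllib.parse.urlencode({"token": token}).encode("utf-8")
    return urllib.request.Request("https://slack.com/api/auth.test", data=body)

req = build_auth_test_request(SLACK_BOT_TOKEN)
print(req.get_full_url())  # https://slack.com/api/auth.test
```

A successful call with a real token returns JSON containing your bot's user ID and team, which confirms the token is valid before you hand it to the Mule project.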
The Backbone Integration
Now that we have our bot created in Slack and our Agent created in API.AI, you may start to get the picture of each system's role. And what is the missing piece that connects them all? That's right, it's MuleSoft's Anypoint Platform.
I am going to provide you with a complete Mule project so we can focus on the topics we are talking about here, but you are free to investigate and modify the project as you wish.
The Project Structure
The project is divided into a main flow and "n" flows per the intents of our bot. An intent represents a mapping between what a user says and what action should be taken by your software. Read more about API.AI intents here.
That means that if I want my bot to understand, process, and answer about "employee information" and also questions about the "weather," then I should create one flow to process and search for employee information and another to get the weather status, and so on and so forth.
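In Python terms, the one-flow-per-intent layout amounts to a dispatch table from intent names to handlers. The intent names and handler bodies below are purely illustrative stand-ins for the per-intent Mule flows:

```python
# Hypothetical handlers standing in for the per-intent Mule flows.
def handle_employee_info(params):
    return "Looking up employee %s..." % params.get("name", "?")

def handle_weather(params):
    return "Fetching the weather for %s..." % params.get("city", "?")

def handle_unknown(params):
    return "I did not understand your question."

# One entry per intent, mirroring one Mule flow per intent.
INTENT_FLOWS = {
    "employee.info": handle_employee_info,
    "weather.query": handle_weather,
}

def dispatch(intent, params):
    """Route a recognized intent to its handler, or fall back."""
    return INTENT_FLOWS.get(intent, handle_unknown)(params)

print(dispatch("weather.query", {"city": "SFO"}))  # Fetching the weather for SFO...
```

Adding a new capability to the bot then means adding one handler (one flow) and one dispatch entry, without touching the others.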
The Main Flow
There are some things we need to do between receiving and replying to a message from a user on Slack. Please see the image below, and each block's explanation (the block numbers correspond to the list numbers after the image):
- Receive the message and capture what we need.
- Session management. API.AI needs to know who it is talking to. It does this via a "session number." We create and destroy these numbers, and a session lives until the bot has all the information it needs to fulfill the user's query. We check whether the user has a live session, and then:
- If the incoming message does not belong to a user with a session, we create a session for this user.
- If the incoming message does belong to a user with a session, we retrieve the session number using the user's channel ID as the key.
- Interact with API.AI's API:
- We prepare the structure of the query we have to send to API.AI's API for interpretation.
- We use the HTTP connector to send the payload to the API.
- We receive and capture all the parameters we need from API.AI's API response.
- Decide whether we should process the user's query, keep asking for more information, or just return a callback. For example, if I tell the bot "list the flights of SFO airport" and the bot has no idea what to do with that, then the bot should say something like "I did not understand your question." That callback flows through the "default" path of the choice because it did not match the following criteria:
- A valid intent I have a flow for.
- The action is complete, meaning I don't need to ask the user anything else in order to process the intent.
If both criteria are present and "true," the choice routes to the corresponding intent's flow. In this case, if I have everything I need to process the order, the choice routes to the fulfill order path, calls the flowReference processor, and executes the fulfill_purchase flow, which finalizes the user's session. (I don't need to continue the conversation; I just need to reply one last time with the final result of the intent's fulfillment.) These two criteria are represented by the following MEL expression in the choice:
#[flowVars.apiAiResponse.intent == "order.pizza" && flowVars.apiAiResponse.incomplete == false]
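The main flow's logic can be sketched end to end in plain Python. Everything here is illustrative: the session store is a dict keyed by channel ID (the Mule project would use flow variables or an object store), and `api_ai_response` mimics the two fields the MEL expression checks:

```python
import uuid

sessions = {}  # channel ID -> API.AI session number (illustrative store)

def get_or_create_session(channel_id):
    """Session-management step: reuse the channel's session or start a new one."""
    if channel_id not in sessions:
        sessions[channel_id] = str(uuid.uuid4())
    return sessions[channel_id]

def route(channel_id, api_ai_response):
    """Choice step: mirror the MEL expression
    #[...intent == "order.pizza" && ...incomplete == false]."""
    if (api_ai_response["intent"] == "order.pizza"
            and api_ai_response["incomplete"] is False):
        sessions.pop(channel_id, None)  # fulfill_purchase finalizes the session
        return "fulfill_order"
    if api_ai_response["incomplete"]:
        return "ask_for_more_info"  # API.AI still needs more details
    return "default"  # e.g. "I did not understand your question."

sid = get_or_create_session("C123")
assert get_or_create_session("C123") == sid  # same channel, same session

print(route("C123", {"intent": "order.pizza", "incomplete": False}))  # fulfill_order
```

Dropping the session once the order is fulfilled is what lets the next "I want a pizza" from the same channel start a fresh conversation.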
Now that we know how our project works, we can configure the tokens we've been so carefully saving: the API.AI Agent's Client Access Token and Slack Bot's access token.
- In Anypoint Studio, go to the left-side project navigator and double click on the file
- I have preset what you need to configure those two tokens. Paste each token in its place. Pay attention to the "Bearer" prefix that API.AI's token needs: paste the token after that "Bearer" string. It should look something like this:
api.ai.clientToken=Bearer ab7d87bSA89&BBD&A8bdBD&A8bdB
- Make sure you save the file.
That's all you need to configure! We are ready to test this project.
Test the Project
- On the left side of the Project Explorer, right-click the project's name and click on Run As > Mule Application.
- Wait for the project to start.
- Go to Slack and open a direct message conversation with your bot. (Basically, search your bot as if it were a person.)
- The bot's status should be "active," with a green circle indicating this. Say "Hello" to your bot! I mean, for real, type "Hello" and press enter. The bot should say "Hi" back to you.
- You should be ready to order your first pizza! Say "I want a pizza" to your bot and follow the conversation along, providing all the information needed to fulfill your pizza order until you receive something like "All set! A delicious small Margherita pizza is on its way to 456 Incredible Street. Stay hungry!"
Deploy the Project to Anypoint Runtime Manager
You might want your bot to be available 24/7, 365 days a year; come on, it's not going to complain. For this reason, we are going to deploy our project to Anypoint Runtime Manager from Anypoint Studio.
- If your project is running, stop it.
- On the left side of the Project Explorer, right-click the project's name and click on Anypoint Platform > Deploy to cloud.
- This will open a login window. Log in with your Anypoint Platform trial account credentials.
- After you log in, configure the deployment. Choose a name for your worker and a worker size (0.1 vCores is enough for this low traffic integration).
- After the deployment is configured, click Deploy Application at the bottom.
After ~5-7 minutes, your worker should be up and running and you will be able to re-test your bot in Slack. This time, our project is a sophisticated cross-cloud integration application running in Anypoint Runtime Manager.
This was created and tested with:
- Mule Runtime v3.8.3
- Anypoint Studio v6.2.4
- Slack Connector v3.0.0
Published at DZone with permission of Fernando Melone , DZone MVB. See the original article here.