
Conversational Interfaces Powered by DialogFlow (Api.ai)



We’re seeing a revolutionary shift in the way people interact with brands and devices: conversation. With the rise of wearable devices and the Internet of Things, businesses are following their audience to provide conversational experiences through chatbots and voice-enabled applications alike. Conversational analytics is quickly becoming a primary need for businesses that want to get to know their audience and provide a better user experience.

Tools Available

  1. DialogFlow (Api.ai)
  2. Wit.ai 
  3. Amazon Lex 
  4. IBM Watson 

...and more are on the list, too.

Today, we are going to discuss DialogFlow (formerly known as Api.ai). 


DialogFlow uses intents, entities, actions with parameters, contexts, and speech-to-text and text-to-speech capabilities, along with machine learning that works silently and trains your model. DialogFlow has built-in knowledge of topics like small talk, weather, and wisdom, which means you don't have to train the agent for these intents. DialogFlow returns its output as JSON data.
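For instance, a V1 query response can be parsed like any other JSON document. The response below is a trimmed, invented example for illustration; real responses contain more fields:

```python
import json

# A trimmed, made-up example of the JSON shape a DialogFlow (Api.ai)
# V1 query returns; real responses contain additional fields.
raw = """
{
  "result": {
    "resolvedQuery": "what is the weather in Paris",
    "action": "weather.get",
    "parameters": {"city": "Paris"},
    "metadata": {"intentName": "weather"},
    "fulfillment": {"speech": "Checking the weather in Paris..."}
  },
  "status": {"code": 200}
}
"""

data = json.loads(raw)
result = data["result"]
print(result["action"])              # weather.get
print(result["parameters"]["city"])  # Paris
print(result["fulfillment"]["speech"])
```

Your application typically reads the `action` and `parameters` fields to decide what to do next.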


Agents

Agents are best described as NLU (natural language understanding) modules. These can be included in your app, product, or service and transform natural user requests into actionable data.

This transformation occurs when a user input matches one of the intents inside your agent. Intents are the predefined or developer-defined components of agents that process a user’s request.

Agents can also be designed to manage a conversation flow in a specific way. This can be done with the help of contexts, intent priorities, slot filling, responsibilities, and fulfillment via webhook.


Choose a unique name for your agent and set the relevant settings.

  • Default language: Select the default language for your agent.
  • Default time zone: Select the default time zone for your agent.
  • Google project: Choose an existing Google Cloud project, or leave it set to Create a new Google project to have one created for you.
  • API version: Enable this option to use the beta V2 API.  
  • Client access token: Used for end-user interaction with the /query, /contexts, and /userEntities endpoints only.
  • Developer access token: Used for agent modification. Do not share!
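As a sketch of how the client access token is used, here is how a V1 /query request could be constructed in Python. The token value is a placeholder, and the request is only built here, not sent:

```python
import urllib.request

CLIENT_ACCESS_TOKEN = "YOUR_CLIENT_ACCESS_TOKEN"  # placeholder, not a real token

# The V1 query endpoint; the client access token goes in the
# Authorization header as a Bearer token.
req = urllib.request.Request(
    "https://api.dialogflow.com/v1/query"
    "?v=20150910&lang=en&sessionId=1234&query=hi",
    headers={"Authorization": "Bearer " + CLIENT_ACCESS_TOKEN},
)

print(req.full_url)
print(req.get_header("Authorization"))  # Bearer YOUR_CLIENT_ACCESS_TOKEN
```

To actually send the request you would pass `req` to `urllib.request.urlopen`, which requires a valid token.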


Intents

An intent represents a mapping between what a user says and what action should be taken by your software.

Intent interfaces have the following sections:

  • User says
  • Action
  • Response
  • Contexts
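Conceptually, an intent ties those four sections together. A hypothetical in-memory sketch, where the intent name, phrases, and action are invented for illustration:

```python
# A toy representation of an intent, mirroring the four sections above:
# User says, Action, Response, and Contexts.
weather_intent = {
    "name": "weather",
    "user_says": ["what is the weather", "will it rain today"],  # example phrases
    "action": "weather.get",                                     # trigger word for your app
    "responses": ["Let me check the forecast for you."],         # text replies
    "contexts": {"in": [], "out": ["weather-followup"]},         # input/output contexts
}

print(weather_intent["action"])
```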

User Says

The User says section contains the example phrases a user might enter to trigger the intent.


Each User says expression can be in one of two modes: Example Mode (indicated by the " icon) or Template Mode (indicated by the @ icon).

Examples are written in natural language and annotated so that parameter values can be extracted. You can read more about annotation below.

Templates contain direct references to entities instead of annotations, i.e. entity names are prefixed with the @ sign.

Annotation is the process (and also the result of that process) of linking a word or phrase to an entity.

Automatic Annotation

When you add examples to the User says section, they are annotated automatically. The system detects the correspondence between words (or phrases) and existing developer and system entities and highlights such words and phrases. It also automatically assigns a parameter name to each detected entity.
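The idea behind automatic annotation can be illustrated with a toy sketch. The `@city` entity and its values below are invented for this example; DialogFlow's real matching is far more sophisticated than a substring scan:

```python
# Toy illustration of automatic annotation: scan an example phrase for
# known entity values and record which parameter each match maps to.
entities = {"city": ["Paris", "London", "Tokyo"]}

def annotate(phrase):
    annotations = []
    for entity, values in entities.items():
        for value in values:
            if value in phrase:
                annotations.append({"text": value,
                                    "entity": "@" + entity,
                                    "parameter": entity})
    return annotations

print(annotate("book a flight to Paris"))
# [{'text': 'Paris', 'entity': '@city', 'parameter': 'city'}]
```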



Action

The action name is defined manually. It will be the trigger word for your app to perform a particular action.


Response

This is simply a response to what the user says. You can improve your agent's eloquence by adding several variations of the text response per intent. When the same intent is triggered more than once, the text response variations will not repeat until all options have been used. This helps make your agent's speech more human-like.
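That non-repeating behavior can be sketched as a shuffled "bag" of variations that refills only once it is empty:

```python
import random

# Sketch of non-repeating response selection: variations are shuffled
# into a bag and drawn until the bag is empty, so no variation repeats
# before all of them have been used.
class ResponsePicker:
    def __init__(self, variations):
        self.variations = list(variations)
        self._bag = []

    def pick(self):
        if not self._bag:                    # bag exhausted: refill and reshuffle
            self._bag = self.variations[:]
            random.shuffle(self._bag)
        return self._bag.pop()

picker = ResponsePicker(["Hi there!", "Hello!", "Hey!"])
first_round = {picker.pick() for _ in range(3)}
print(first_round == {"Hi there!", "Hello!", "Hey!"})  # True
```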



Contexts

Contexts are designed for passing information from previous conversations or external sources (e.g., user profile, device information). They can also be used to manage conversation flow.

Input contexts serve as a prerequisite for the intent to be matched, i.e. the intent will participate in matching only when all the contexts in the input context field are active.

Fallback Intents

Fallback intents are triggered if a user's input is not matched by any of the regular intents or enabled built-in small talk.

When you create a new agent, a default fallback intent is created automatically. You can modify or delete it if you wish.



Entities

Entities are powerful tools used for extracting parameter values from natural language inputs. Any important data you want to get from a user's request will have a corresponding entity.

There are three types of entities: system (defined by DialogFlow), developer (defined by a developer), and user (built for each individual end-user in every request). 
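A developer entity is essentially a set of reference values, each with synonyms. A toy sketch, with an invented fruit entity, of how a user's word resolves to a reference value:

```python
# Sketch of a developer entity: reference values mapped to synonyms,
# plus a lookup that normalizes a user's word to the reference value.
fruit_entity = {
    "apple": ["apple", "granny smith"],
    "banana": ["banana", "plantain"],
}

def resolve(word):
    for reference, synonyms in fruit_entity.items():
        if word.lower() in synonyms:
            return reference
    return None  # no entity value matched

print(resolve("Granny Smith"))  # apple
print(resolve("kiwi"))          # None
```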



Webhook

Setting up a webhook allows you to pass information from a matched intent into a web service and get a result from it.
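For illustration, a minimal webhook handler for the V1-style request/response shape might look like this. The `weather.get` action and `city` parameter are invented for the sketch; a real service would wrap this in an HTTPS web framework:

```python
# Minimal sketch of a webhook handler: read the matched action and
# parameters from the request JSON, return a fulfillment message in
# the V1 response shape (speech / displayText / source).
def handle_webhook(request_json):
    result = request_json.get("result", {})
    action = result.get("action")
    params = result.get("parameters", {})

    if action == "weather.get":
        speech = "It is sunny in " + params.get("city", "your city")
    else:
        speech = "Sorry, I can't help with that yet."

    return {"speech": speech, "displayText": speech, "source": "my-webhook"}

sample = {"result": {"action": "weather.get", "parameters": {"city": "Paris"}}}
print(handle_webhook(sample)["speech"])  # It is sunny in Paris
```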



Authentication can be done in two ways:

  • Basic authentication with login and password.
  • Authentication with additional authentication headers.

If the integrated service doesn’t require any authentication, leave the authentication fields blank.
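For reference, the Basic authentication header is just the base64-encoded `login:password` pair; the credentials below are placeholders:

```python
import base64

# Basic authentication: header value is "Basic " + base64("login:password").
login, password = "myuser", "s3cret"  # placeholder credentials
token = base64.b64encode(f"{login}:{password}".encode()).decode()
auth_header = {"Authorization": "Basic " + token}

print(auth_header["Authorization"])
```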

The service must use HTTPS and the URL must be publicly accessible.


