
Quick Intro of Actions on Google Home


As Google Home becomes available in more and more countries, understanding Actions on Google is becoming increasingly important.


Google Home will finally be available in Germany on August 8 and in France this week. I’m not aware of announcements for other countries yet, but I hope and assume that availability will extend to many more countries as soon as possible. For me, though, the day I got my AIY kit was the day I started getting interested in developing with Actions on Google.

[Image: Google’s AIY kit]

Different Types of Interfaces

"Conversational interfaces" is a very broad term. It covers all kind of chats whether voice is used up to pure voice interfaces like those used in Google Home.

Actions on Google supports text-based interfaces and, depending on the capabilities of the device, a limited set of visual feedback and touchable actions. I will cover those differences, and how to detect which capabilities a given device has, in later posts. On mobile, the text can be entered either by keyboard or by voice. With Google Home, it obviously can only be entered by speaking to the device.
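As a rough illustration, a fulfillment webhook can check the surface capabilities that arrive with each request. This is only a sketch: the field and capability names below follow the Actions on Google request format as I understand it, so treat them as assumptions to verify against the official documentation.

```python
# Sketch only: inspecting surface capabilities in an incoming
# Actions on Google webhook request (already parsed into a dict).
# The field and capability names are assumptions based on the
# documented request format, not verified against every API version.

def _has_capability(request_json, name):
    capabilities = request_json.get("surface", {}).get("capabilities", [])
    return any(cap.get("name") == name for cap in capabilities)

def has_screen(request_json):
    """True if the requesting device can show visual output."""
    return _has_capability(request_json, "actions.capability.SCREEN_OUTPUT")

def has_audio(request_json):
    """True if the requesting device can play audio."""
    return _has_capability(request_json, "actions.capability.AUDIO_OUTPUT")
```

A handler would then branch on these checks, for example returning a card with an image when `has_screen()` is true and a pure speech response otherwise.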

By the way, you can expect the Assistant to appear on other devices as well, be it IoT devices, cars, or anything else where a voice interface can be useful. As you have seen in the first picture of this post, Google’s AIY kit itself uses Actions on Google (or can be made to use it). How to achieve this is also the topic of an upcoming post.

Two SDKs for the Google Assistant

When it comes to the Google Assistant, there are two very different offerings by Google:

  • The Assistant SDK 
  • Actions on Google

Assistant SDK

With the Assistant SDK, you can enable devices to embed the Google Assistant. This means that it allows you to add the Google Assistant to a device made by you. It also allows you to change the way the Assistant is triggered on your device — for example, you can use a button press instead of the “OK, Google” phrase.

The SDK is based on gRPC, a protocol-buffer-based message exchange protocol that has bindings for a plethora of languages. As a sample (and for practical use as well), complete Python bindings for certain Linux-based architectures already exist.

If you are creating devices and want to integrate the Assistant into those, then the Assistant SDK is the SDK of your choice. The AIY kit, shown in the picture above, is running the Assistant SDK on a Raspberry Pi. I will get into this SDK in a follow-up post.
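To give a flavor of what embedding the Assistant looks like, here is a minimal sketch along the lines of the Python samples that ship with the SDK. The class and event names follow those samples as I recall them and may differ in current releases; `credentials` stands for OAuth2 credentials you have to obtain beforehand.

```python
# Minimal sketch of embedding the Assistant via the Python bindings
# of the Assistant SDK. Class and event names follow the SDK samples
# as I recall them; verify against the current SDK before relying on this.
from google.assistant.library import Assistant
from google.assistant.library.event import EventType

def run_assistant(credentials):
    # 'credentials' are OAuth2 credentials obtained beforehand
    # (e.g. via google.oauth2.credentials.Credentials).
    with Assistant(credentials) as assistant:
        for event in assistant.start():
            if event.type == EventType.ON_START_FINISHED:
                print("Assistant ready. Waiting for 'OK, Google'...")
                # A custom trigger, say a GPIO button handler, could
                # bypass the hotword entirely and call:
                # assistant.start_conversation()
            elif event.type == EventType.ON_CONVERSATION_TURN_STARTED:
                print("Listening...")
```

The commented `start_conversation()` call is what makes the button-instead-of-hotword trigger mentioned above possible.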

Actions on Google

With Actions on Google, you can create apps for the Google Assistant. The remainder of this post is all about this option.

Two Options to Use Actions on Google

When developing apps for the Google Assistant, there are two options:

  • Use the Actions SDK directly
  • Use a service on top of the Actions SDK

Google gives you a recommendation as to when to use which option in its Build an app in 30 minutes guide.

Using the Actions SDK

The Actions SDK allows you to directly access the recognized user text and deal with it in your backend. It is suited to very simple projects with clear commands, or to cases where you are sufficiently proficient in natural language processing to parse the text yourself.
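A minimal sketch of what that looks like, assuming a Flask webhook and the v1 conversation JSON layout (the field names are my assumptions and should be checked against the Actions SDK docs):

```python
# Sketch of an Actions SDK webhook that works directly on the raw
# recognized text. The JSON field names are assumptions based on the
# v1 conversation protocol; verify them before using this for real.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    # The bare Actions SDK hands you the user's utterance as-is...
    query = body["inputs"][0]["raw_inputs"][0]["query"]
    # ...and matching even simple commands is entirely up to you.
    if "light" in query.lower() and "on" in query.lower():
        speech = "Turning the lights on."
    else:
        speech = "Sorry, I only understand simple light commands."
    return jsonify({
        "expect_user_response": False,
        "final_response": {
            "speech_response": {"text_to_speech": speech}
        }
    })
```

Even this toy example shows the catch: the substring matching breaks as soon as the user says "switch the lamp on," which is exactly the gap the services below fill.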

Using Api.ai or Other Services on Top of the Actions SDK

Most often, using a service is the better option. It’s not that the Actions SDK itself is particularly complex. The problem lies more in detecting what the user intends with a response and in parsing the text to get the relevant data. This is where those services shine. You enter some sample user responses, and the service then understands not only these sentences but many more that resemble them while using different wording, a different word order, or a combination of both. And it extracts the data you need in an easily accessible format. Consider understanding dates, which is not even the most complex example: you have to understand “next week,” a specific date, abbreviations, omissions, and much more. That’s the real value of these services.
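To make that concrete, here is a sketch of a fulfillment handler that consumes a date the service has already parsed. The `result`/`parameters` layout follows api.ai’s v1 webhook format as I understand it, and the parameter name `date` is whatever you define in your intent, so treat both as assumptions:

```python
# Sketch of an api.ai fulfillment handler reading an already parsed
# date. The "result"/"parameters" layout follows api.ai's v1 webhook
# format; the parameter name "date" depends on the intent you define.

def handle_fulfillment(request_json):
    result = request_json.get("result", {})
    # api.ai has already normalized phrases like "next Tuesday" or
    # "the 15th" into an ISO date before your code sees the request.
    date = result.get("parameters", {}).get("date")  # e.g. "2017-08-15"
    if date:
        reply = "Booked for %s." % date
    else:
        reply = "Which day did you have in mind?"
    return {"speech": reply, "displayText": reply}
```

Note how the hard part, turning free-form speech into an ISO date, never touches your code at all.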

One such service is api.ai, which was bought by Google last fall. As such, it’s only natural that this service supports Actions on Google quite nicely. In addition, you can use api.ai for other platforms like Alexa, Cortana, Facebook Messenger, and many more. I will cover api.ai thoroughly in future posts.

You are not limited to api.ai, though. One contender is converse.ai, which I haven’t had the opportunity to test yet. The visual design of converse.ai’s conversation flow has some appeal, but whether it’s practical and overall as good as api.ai, I cannot tell. Hopefully, I will be able to evaluate it while continuing with my Actions on Google posts.

Let’s Put Things Into Perspective

Even though conversational interfaces seem to be all the rage lately, they are not really new.

Actually, they are quite old. Eliza was written by Joseph Weizenbaum in the sixties and created quite a stir back then. You can try it out for yourself on dozens of websites.

My first experience was the fictional interface shown in the film WarGames (1983):

[Image: A screenshot of a WarGames scene (1983)]

And of course, there was Clippy in the late nineties, the worst assistant ever:

[Image: Microsoft’s Clippy]

So if they are not new, why are they all the rage? Luckily, we have progressed considerably since then. Nowadays, we have all kinds of chatbots integrated into messengers and other communication tools, website assistants that pop up when we linger on a particular page for a while, and true voice-only interfaces like Amazon’s Alexa and Google Home.

And those are powered by a much better understanding of human language, of the user’s intent, and of how to find and combine the important entities in the user’s spoken text.

The Google Assistant works on voice-only devices (like Google Home) as well as on phones and other touchscreen devices, where it adds some visual elements.


Topics: iot, google assistant, tutorial

Published at DZone with permission of Wolfram Rittmeyer, DZone MVB. See the original article here.

