
Adding Speech Recognition Capabilities to Your NativeScript App


It's 2017 and speech recognition on phones and tablets finally no longer sucks. This post shows how to add speech-to-text capabilities to your NativeScript app.


Does Speech Recognition Still Suck?

It doesn't. Watch this 24-second video so you can literally take my word for it:


Wow, iOS Speech Recognition is Really Impressive!

I know, right?! The nice thing is that it works equally well on Android, and neither platform requires an external SDK - it's all built into the mobile operating systems nowadays.

I'm Convinced, Let's Replace All Text Input by Speech!

Sure, knock yourself out! Add the plugin like any other and read on:

$ tns plugin add nativescript-speech-recognition

Availability Check

With the plugin installed, let's make sure the device has speech recognition capabilities before trying to use it (certain older Android devices may not):

// import the plugin
import { SpeechRecognition } from "nativescript-speech-recognition";

// note: don't name this class "SpeechRecognition" - that would shadow the import above
class SpeechRecognitionDemo {
  // instantiate the plugin
  private speechRecognition = new SpeechRecognition();

  public checkAvailability(): void {
    this.speechRecognition.available().then(
      (available: boolean) => console.log(available ? "YES!" : "NO"),
      (err: string) => console.log(err)
    );
  }
}

Starting and Stopping Listening

Now that we've made sure the device supports speech recognition, we can start listening for voice input. To help the device recognize what the user says, we need to tell it which language it can expect. By default, we expect the device language.

We also pass in a callback that gets invoked whenever the device interprets one or more spoken words.

This example builds on the previous one and shows how to start and stop listening:

// import the plugin and its transcription type
import { SpeechRecognition, SpeechRecognitionTranscription } from "nativescript-speech-recognition";

// note: don't name this class "SpeechRecognition" - that would shadow the import above
class SpeechRecognitionDemo {
  // instantiate the plugin
  private speechRecognition = new SpeechRecognition();

  public checkAvailability(): void {
    this.speechRecognition.available().then(
      (available: boolean) => console.log(available ? "YES!" : "NO"),
      (err: string) => console.log(err)
    );
  }

  public startListening(): void {
    this.speechRecognition.startListening({
      // optional, uses the device locale by default
      locale: "en-US",
      // this callback will be invoked repeatedly during recognition
      onResult: (transcription: SpeechRecognitionTranscription) => {
        console.log(`User said: ${transcription.text}`);
        console.log(`User finished?: ${transcription.finished}`);
      },
    }).then(
      (started: boolean) => { console.log(`started listening: ${started}`); },
      (errorMessage: string) => { console.log(`Error: ${errorMessage}`); }
    );
  }

  public stopListening(): void {
    this.speechRecognition.stopListening().then(
      () => { console.log(`stopped listening`); },
      (errorMessage: string) => { console.log(`Stop error: ${errorMessage}`); }
    );
  }
}
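Because onResult fires repeatedly while the user is still talking, you'll often want to act only on the final transcription. Here's a minimal, framework-free sketch of such a handler - the transcription shape mirrors the plugin's, but the helper itself is my own illustration, not part of the plugin:

```typescript
// Shape of the result object passed to onResult (mirrors the plugin's type).
interface SpeechRecognitionTranscription {
  text: string;
  finished: boolean;
}

// Collects intermediate results and invokes the callback once, with the
// final transcription, when the platform marks recognition as finished.
function makeFinalResultHandler(onFinal: (text: string) => void) {
  let latest = "";
  return (transcription: SpeechRecognitionTranscription): void => {
    latest = transcription.text;
    if (transcription.finished) {
      onFinal(latest);
    }
  };
}

// Simulate the stream of partial results the plugin would deliver:
let result = "";
const handler = makeFinalResultHandler(text => { result = text; });
handler({ text: "hello", finished: false });
handler({ text: "hello world", finished: true });
console.log(result); // logs "hello world"
```

You'd pass the returned function as the `onResult` option instead of the inline arrow function above.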

iOS User Consent

On iOS, the startListening function will trigger two prompts: one requesting permission to send voice input to Apple for analysis, and another requesting permission to use the microphone.

The contents of these "consent popups" can be amended by adding fragments like these to app/App_Resources/iOS/Info.plist:

<!-- Speech recognition usage consent -->
<key>NSSpeechRecognitionUsageDescription</key>
<string>My custom recognition usage description. Overriding the default empty one in the plugin.</string>

<!-- Microphone usage consent -->
<key>NSMicrophoneUsageDescription</key>
<string>My custom microphone usage description. Overriding the default empty one in the plugin.</string>
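On Android there's no Info.plist equivalent; speech recognition just needs the microphone permission. My understanding is that the plugin takes care of requesting it at runtime, but if listening fails to start, it's worth checking that the standard Android entry below ends up in your merged AndroidManifest.xml:

```
<!-- Required for any Android speech recognition -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />
```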

Have Feedback?

As usual, compliments and marriage proposals can be added to the comments. Problems related to the plugin can go to the GitHub repository. Enjoy!


Topics:
mobile, mobile apps, app development, speech recognition, nativescript

Published at DZone with permission of Eddy Verbruggen, DZone MVB. See the original article here.

