
Smart Interviews: AI-Powered Recruitment


Learn how to create a robot that can conduct interviews for you and send back a report on how the candidate did using sentiment analysis and speech recognition.


As a recruiter, you will often find both active and passive candidates who fit a job description. Problems arise when the candidate has to travel to the office for an interview, mainly because of distance and transportation. If the candidate is passive and already employed, arranging an on-site interview is almost impossible, and even when you can reach them, you must provide some way for the interview to take place. We propose a solution architecture based on an interactive chatbot. Acting as the recruiter, the chatbot opens with a welcome message spoken using the Google Translate text-to-speech service, and the candidate replies by voice. Once the interview is finished, an email report containing the candidate's contact data, a sentiment analysis, a summary, and the full transcription is sent to the human recruiter. The system can be developed and deployed so that candidates can take the interview anytime, anywhere.

Architecture Description

The next figure provides a high-level functional overview.

Smart Interview Architecture

The idea is to automate the candidate interview process using artificial intelligence technologies. First, we develop a virtual assistant, shown in the figure above: a conversational agent capable of orchestrating the whole interview process. We then use different machine learning techniques to build and train models capable of executing each step of the process, described below:

  1. The virtual assistant presents the candidate with a web form for capturing name and contact information.

  2. The virtual assistant starts the chat, welcomes the candidate to the interview, and proceeds with interview questions via voice and text.

  3. The responses received from the candidate are analyzed to detect the predominant feeling for every response and to generate a bar chart for inclusion within the final interview report.

  4. The responses are also used to build a summary of the interview to be included in the final report.

  5. Upon finishing the interview, the virtual assistant generates the report and sends it via email to the human hiring manager.
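The five steps above can be sketched as a single orchestration function. This is only an illustrative outline: the helper names (`ask`, `analyzeSentiment`, `summarize`, `sendReport`) are hypothetical stand-ins for the services built in the sections that follow.

```javascript
// Hypothetical orchestration of the interview pipeline described above.
// Each service helper stands in for one of the components built below.
async function runInterview(candidate, questions, services) {
  // Step 2: ask each question and collect the candidate's answers.
  const answers = [];
  for (const question of questions) {
    answers.push(await services.ask(question));
  }
  // Step 3: sentiment per answer, for the bar chart in the report.
  const sentiments = await Promise.all(answers.map(services.analyzeSentiment));
  // Step 4: summarize the full transcript.
  const transcript = answers.join(' ');
  const summary = await services.summarize(transcript);
  // Step 5: assemble the report and email it to the hiring manager.
  const report = { candidate, transcript, sentiments, summary };
  await services.sendReport(report);
  return report;
}
```

Keeping the pipeline behind a single function like this makes it easy to swap any one service (for example, a different summarizer) without touching the rest.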

The next figure shows a technical description of our system. The following sections describe some creative ways to use and develop the components we've implemented.

Smart Interview Technical Description

We designed a front-end for the candidate and a formatted report presented to the recruiter as a prototype, as the next figure shows. The next sections correspond to each of the services we developed.

Smart Interview Prototype

Speech-to-Text

For speech-to-text, we decided to use the Web Speech API, which makes it easy to add speech recognition to a web page and offers good control and flexibility in Chrome 25 and later.
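A minimal sketch of how we might use the API is shown below. The browser wiring is commented out because `webkitSpeechRecognition` only exists in the browser; the transcript-assembly helper is our own, and `sendToBot` is a hypothetical function that forwards the text to the bot engine.

```javascript
// Pure helper: joins the final results of a SpeechRecognitionEvent-like
// object into one transcript string (mirrors the Web Speech API shape:
// event.results[i].isFinal and event.results[i][0].transcript).
function collectTranscript(event) {
  let transcript = '';
  for (let i = event.resultIndex; i < event.results.length; i++) {
    if (event.results[i].isFinal) {
      transcript += event.results[i][0].transcript;
    }
  }
  return transcript;
}

// Browser wiring (Chrome 25+); sendToBot is a hypothetical callback:
// const recognition = new webkitSpeechRecognition();
// recognition.lang = 'es-ES';
// recognition.continuous = true;
// recognition.onresult = (event) => sendToBot(collectTranscript(event));
// recognition.start();
```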

Bot Engine (NLP)

For the virtual assistant engine, we decided to use api.ai/DialogFlow to configure all of our intents, entities, and so on. Before beginning, you should work through the weather virtual agent tutorial. Once you are familiar with concepts like intents, entities, and webhooks, you can design your own bot engine, but only with text capabilities. How can we give the bot speech capabilities? The idea is to use a webhook for fulfillment and a cloud function to manipulate each intent's message. We explain this in the next section.
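It helps to know the payload shape the fulfillment webhook receives. Below is a minimal sketch of the DialogFlow (v1) request body that the cloud function in the next section reads the action name from; the field values are placeholder examples.

```javascript
// Minimal shape of a DialogFlow v1 fulfillment request. The webhook
// reads the matched intent's action name from body.result.action.
const sampleRequest = {
  result: {
    action: 'yourActionName1',  // the action set on the intent in the console
    resolvedQuery: 'hola',      // what the candidate said
    parameters: {},             // extracted entities, if any
  },
};

// Safely extracts the action name from a webhook request body.
function getAction(body) {
  return body && body.result ? body.result.action : undefined;
}
```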

Text-to-Speech

We can create a simple Node.js application that uses Google's TTS API to convert text to audio and builds a response that is sent to DialogFlow and on to the user's front-end. The response is sent as JSON.

/**
 * HTTP Cloud Function.
 * @param {Object} req Cloud Function request context.
 * @param {Object} res Cloud Function response context.
 */
exports.yourFunctionName = function yourFunctionName (req, res) {
  const googleTTS = require('google-tts-api');
  var intent = req.body.result.action; // action name matched by DialogFlow
  var resp;

  // Converts the text to speech and returns the audio URL to DialogFlow.
  function tts(text) {
    res.setHeader('Content-Type', 'application/json'); // requires the application/json MIME type
    res.setHeader('Access-Control-Allow-Origin', '*');
    googleTTS(text, 'es', 1) // language 'es'; speed: normal = 1 (default), slow = 0.24
      .then(function (url) {
        res.send(JSON.stringify({ "data": { "facebook": { "attachment": url.replace("https", "http") } }, "speech": text }));
      })
      .catch(function (err) {
        res.send(JSON.stringify({ "speech": err.stack }));
      });
  }

  if (intent === "yourActionName1") {
    resp = "your message goes here";
    tts(resp);
  }

  if (intent === "yourActionName2") {
    resp = "your other message goes here";
    tts(resp);
  }
  // … handle any additional intents the same way
}

Once you've implemented your cloud function, you need to deploy it in order to consume it from DialogFlow. We decided to deploy to the Google Cloud Platform using Firebase, as follows.

First, get the Firebase SDK by running the following command in a shell.

npm install -g firebase-tools

Execute the following command to authenticate into Firebase using your Google account linked to Google Cloud.

firebase login

Then, go to your project directory and execute the following command.

firebase init functions

Once initialization completes, you'll have a project directory containing a functions/ folder with an index.js and a package.json.

Copy the previous Node.js function into the file index.js and deploy your function using:

firebase deploy --only functions

If all goes well, a URL will be provided, similar to:

Function URL (yourFunctionName): https://us-central1-YOUR_PROJECT.cloudfunctions.net/yourFunctionName

More information can be found in the Cloud Functions for Firebase documentation. Sometimes, you may need to add these declarations to your index.js:

// The Cloud Functions for Firebase SDK to create Cloud Functions and setup triggers.
const functions = require('firebase-functions');

// The Firebase Admin SDK to access the Firebase Realtime Database.
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

It's important to note that you need a Google Cloud project configured in order to use this code.

Sentiment Analysis

For sentiment analysis, we decided to use Google's Natural Language API; Google provides a step-by-step guide on how to use it.
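The API returns a sentiment score in the range [-1.0, 1.0] for each document it analyzes. To turn per-answer scores into the bar chart in the report, one option is a small bucketing helper like the sketch below; the 0.25 thresholds are our own choice, not part of the API.

```javascript
// Buckets per-answer sentiment scores (Google's NLP API returns a score
// in [-1.0, 1.0]) into counts for the report's bar chart. The 0.25
// cutoffs are an assumption of this sketch, not an API constant.
function bucketSentiments(scores) {
  const counts = { positive: 0, neutral: 0, negative: 0 };
  for (const score of scores) {
    if (score > 0.25) counts.positive++;
    else if (score < -0.25) counts.negative++;
    else counts.neutral++;
  }
  return counts;
}
```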

Text Summarizer

We designed a summarizer in Python and exposed it as a service using Django. The summarizer uses the Python sumy library:

# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division, print_function, unicode_literals

from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.lsa import LsaSummarizer as Summarizer
from sumy.nlp.stemmers import Stemmer
from sumy.utils import get_stop_words

LANGUAGE = "spanish"
SENTENCES_COUNT = 4

def sumariza(entrada):
    entrada = entrada.replace(',', ' ')  # normalize the input text
    try:
        parser = PlaintextParser.from_string(entrada, Tokenizer(LANGUAGE))
        stemmer = Stemmer(LANGUAGE)
        summarizer = Summarizer(stemmer)
        summarizer.stop_words = get_stop_words(LANGUAGE)
        resumen = ''
        for sentence in summarizer(parser.document, SENTENCES_COUNT):
            resumen += str(sentence)
        return {'data': resumen}
    except Exception:
        return {'data': 'Error in the input data provided.'}

Report Generation

For report generation, we decided to use a JavaScript library for PDF generation called BytescoutPDF.js. It is a simple library that allows you to configure the report as you desire from the client side.

Email Generation

For sending the email, two server-side libraries are required: nodemailer and body-parser. The first configures the SMTP transport and sends the email; the second parses the POST request body, including the attachment data.

var express = require('express');
var nodemailer = require("nodemailer");
var bodyParser = require("body-parser");
var port = 3700;
var app = express();

// Parse JSON request bodies so req.body is populated in the route below.
app.use(bodyParser.json());

/*
    Here we configure our SMTP server details.
    SMTP is the mail protocol responsible for sending and receiving email.
*/
var smtpTransport = nodemailer.createTransport({
    service: "gmail",
    host: "smtp.gmail.com",
    auth: {
        user: "yourUserName@gmail.com", // consider loading credentials
        pass: "yourPasswordHere"        // from environment variables instead
    }
});
/*------------------SMTP Over-----------------------------*/
/*------------------Routing Started-----------------------*/
app.post('/send', function(req, res){
    var mailOptions = {
        to: req.body.to,
        subject: req.body.subject,
        text: req.body.text,
        attachments: req.body.attachments
    };
    console.log(mailOptions);
    smtpTransport.sendMail(mailOptions, function(error, response){
        if (error) {
            console.log(error);
            res.end("error");
        } else {
            console.log("Message sent: " + response.message);
            res.end("sent");
        }
    });
});
app.listen(port);

And that’s it! You just need to connect all components in the middleware in order to manage each task.
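As a final illustration, here is one way the middleware could assemble the payload for the /send route above from the interview artifacts. The recipient address and the PDF attachment path are hypothetical examples.

```javascript
// Builds the body expected by the /send route from the interview
// artifacts. The recipient and PDF path are placeholder examples.
function buildMailOptions(candidate, summary, pdfPath) {
  return {
    to: 'recruiter@example.com',
    subject: 'Interview report: ' + candidate.name,
    text: summary,
    attachments: [{ filename: 'report.pdf', path: pdfPath }],
  };
}
```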



Opinions expressed by DZone contributors are their own.
