Lie Detection: The Truth Will Set You Free... or to Jail?


How computers can use machine learning to determine whether human statements are lies or truths.


We all know people who believe that they can tell if you're lying just by watching and listening. And most of us believe that our mothers were amazingly good at catching us in a lie. Some researchers at the University of Michigan developed software that does a better job than humans when it comes to judging the veracity of witnesses on the stand. Their software system quantified a number of speech patterns as well as expressions and gestures (all of which are fairly easy to extract with existing software today) and used machine learning to correlate them with the "truth". The truth for the purposes of this study was considered to be the final judgment in the case.
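The pipeline described above (quantify behavioral features, then learn their correlation with trial outcomes) can be sketched roughly as follows. The feature names, the toy data generator, and the perceptron-style learner are all invented for illustration; the actual study used its own features and models:

```python
# Hypothetical sketch of the approach: extract behavioral features per
# witness (speech patterns, gestures) and train a classifier against the
# case's final judgment. Everything here is synthetic stand-in data.
import random

random.seed(0)

FEATURES = ["eye_closures", "hand_gestures", "speech_rate", "filler_words"]

def make_sample(deceptive):
    # Toy generator: deceptive witnesses show slightly elevated counts.
    base = 1.0 if deceptive else 0.0
    return [random.gauss(base, 0.5) for _ in FEATURES], int(deceptive)

data = [make_sample(i % 2 == 0) for i in range(200)]

# Minimal perceptron-style learner, standing in for the study's ML model.
weights = [0.0] * len(FEATURES)
bias = 0.0
for _ in range(20):
    for x, y in data:
        pred = 1 if sum(w * v for w, v in zip(weights, x)) + bias > 0 else 0
        err = y - pred
        weights = [w + 0.1 * err * v for w, v in zip(weights, x)]
        bias += 0.1 * err

# Training accuracy on the toy data; the study reported ~75% on held-out
# real courtroom footage, which no toy example can reproduce.
correct = sum(
    (1 if sum(w * v for w, v in zip(weights, x)) + bias > 0 else 0) == y
    for x, y in data
)
accuracy = correct / len(data)
```

The point of the sketch is the shape of the problem, not the model: once behavioral cues are reduced to numeric features, truthfulness becomes an ordinary supervised-learning task.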

It turns out that the real-world component of this data set was critical. Actual lying (at least the kind we think is important to detect) has a malicious or self-preservation component that is different from the kind of lying conjured up in typical psychology experiments.

Note: The psychology-experiment scenario usually involves some sort of abstract game in which one player is instructed to lie to another. But the experiment never quite captures the visceral personal risk that is apparent in the courtroom. I know this from personal experience, since I used to earn extra cash in college by volunteering for paid psychology experiments. In one such experiment, I had to tell the "truth" about the relative length of some lines projected on a screen in front of an audience of about 30 people. The audience was polled by a show of hands about which line was longest, and if we all got it right, we got a little extra money. After a few images and votes, other people in the audience started picking the wrong length. After a few more images, everyone (except me) started picking the same "wrong" length. At that point (supposedly because I was costing them the extra cash), the audience began challenging and castigating me, eventually escalating to standing threat displays. Obviously, I was the subject of this study, and all the others were actors. I read the paper that came out of this study years later; the researchers thought they had measured something about "truth telling" and "public pressure." But once I had figured out the game, I realized that I could earn half a day's pay if I withstood 15 minutes of verbal abuse. I'm sure I would've reasoned differently if telling the "truth" meant going to jail ... :-)

After training, the computer was able to identify truthfulness 75% of the time, whereas humans performed at around 50%, which is no better than a coin toss! Some of the parameters, such as closing eyes or hand gestures, showed significant differences when a witness was being deceptive.

The plan is to integrate these behavioral parameters with other physiological parameters that can be gathered noninvasively using an infrared video camera. A number of working systems can already automatically determine heart rate, respiration rate, and facial and body temperature fluctuations. Even a hint of a facial flush that is undetectable by people can be clearly seen and classified with infrared video, and very successful work has been done with standard video by simply stretching the color and intensity ranges of the output.
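As a rough illustration of the standard-video idea, the sketch below recovers a pulse rate from a synthetic per-frame skin-brightness signal; real systems operate on actual video frames and use far more robust signal processing. The numbers, the toy signal, and the naive frequency scan are all invented for this example:

```python
# Hedged sketch of pulse-from-video: average a skin region's green channel
# per frame, then find the dominant oscillation frequency. The "frames"
# here are simulated as a single mean-intensity number per frame.
import math

FPS = 30
HEART_HZ = 1.2  # ~72 bpm, embedded in the synthetic signal

# Simulate 10 seconds of per-frame mean green intensity: a large constant
# skin tone, a tiny pulse-driven fluctuation, and slow lighting drift.
signal = [
    120.0
    + 0.3 * math.sin(2 * math.pi * HEART_HZ * t / FPS)
    + 2.0 * math.sin(2 * math.pi * 0.1 * t / FPS)
    for t in range(10 * FPS)
]

# Remove the constant skin tone, then scan candidate frequencies with a
# naive DFT and pick the strongest component in the plausible
# heart-rate band (0.7 to 3.0 Hz).
mean = sum(signal) / len(signal)
centered = [s - mean for s in signal]

def power(freq_hz):
    re = sum(v * math.cos(2 * math.pi * freq_hz * i / FPS)
             for i, v in enumerate(centered))
    im = sum(v * math.sin(2 * math.pi * freq_hz * i / FPS)
             for i, v in enumerate(centered))
    return re * re + im * im

candidates = [0.7 + 0.05 * k for k in range(47)]  # 0.7 .. 3.0 Hz
best = max(candidates, key=power)
bpm = best * 60  # recovers roughly 72 bpm from the synthetic signal
```

Note how small the pulse component is (0.3 out of 120): this is why "stretching the color and intensity ranges" matters, and why the signal survives only because it oscillates at a frequency the lighting drift does not.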

And we all thought that simple facial recognition or license plate cams were a violation of privacy. The beat goes on.

It won't be long before commercial software packages are available for your laptop that continuously monitor the person on the other side of a Skype call or Google Hangout.


Opinions expressed by DZone contributors are their own.
