
Facial Recognition and Human Fallibility


Big Data is being channeled into facial recognition, which is seeing use in everything from airport security to policing. See where it's outracing humans.


The last year has seen a number of interesting technologies emerge that utilize facial recognition. For instance, there have been projects applying it to airport security and general policing. Research has even developed technology able to operate effectively in poor lighting conditions.

As with many new technologies that promise to improve on what we humans are capable of, there is a sense that we demand considerably higher standards of the technology than we expect of ourselves.

A good example of our own fallibility when it comes to facial recognition comes from a recent paper. The study asked participants to examine two unfamiliar faces and determine whether they belonged to the same person. A seemingly straightforward task, you might imagine.


Reflecting Life

To try and recreate common scenarios for such a task, the photos were placed in a photographic ID card. The results suggest that our ability to spot the fakes is actually pretty poor.

Indeed, people erred significantly toward believing that the two photos showed the same person, causing them to miss numerous instances where that wasn't the case. So, in essence, they were letting people through passport control using another person's passport.

This concept was further examined in a second experiment whereby participants were asked again to evaluate faces on a passport, but also to examine the data on the passport at the same time to see if they could spot errors. For instance, the gender of the name might not match the gender of the photo, or the date of birth may be significantly out.

As before, humans appeared to be really bad at this, with both the facial recognition task and the error-spotting task proving beyond many of the participants. Indeed, just 20% of the data-based errors were caught by participants when they also had to examine the photos at the same time. This was especially so when the photo was a match but the data was not.

It’s well known that we struggle with tasks when they have multiple demands, but this provides further evidence that machines need not be entirely perfect to improve significantly on what we humans are capable of.

Spotting Criminals

Further grist for the mill in this debate comes via a recent study that explored whether machine learning could be used to detect criminals by nothing more than their faces.

The researchers trained their algorithm on 1,856 different images of Chinese men, roughly half of whom were known criminals. The algorithm was trained on 90% of these images, with the remaining 10% then used to test how effective it was.

It emerged that the algorithm performed rather well, correctly distinguishing criminals from non-criminals 89.5% of the time.
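The evaluation protocol described above — train on 90% of labeled images, measure accuracy on the held-out 10% — can be sketched in a few lines. This is a minimal illustration only, using synthetic stand-in feature vectors and a simple logistic regression classifier; it is not the authors' actual model, features, or data.

```python
# Sketch of a 90/10 train/test evaluation, mirroring the study's protocol.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1,856 synthetic "face feature" vectors, half labeled 1 and half 0
# (standing in for the study's criminal / non-criminal labels).
n = 1856
X = rng.normal(size=(n, 16))
y = np.repeat([0, 1], n // 2)
X[y == 1] += 0.8  # shift one class so the toy data carries some signal

# Hold out 10% for testing, as in the paper's setup.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, random_state=0, stratify=y
)

clf = LogisticRegression().fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.3f}")
```

The reported 89.5% figure is exactly this kind of held-out accuracy: the fraction of the unseen 10% that the trained classifier labels correctly.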

“These highly consistent results are evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic,” the authors say.

Now here’s the thing. Bizarre as this sounds, there is actually evidence to suggest this is purely logical. Indeed, past studies have found that humans are capable of spotting crooks from their faces, too.

As you might imagine, however, the devil with these kinds of developments is very much in the details. I mentioned at the outset that police services are using facial recognition technology in vehicles to spot criminals as patrol cars roam the streets. It isn't much of a stretch to extend such technologies to flag people with 'criminal faces,' whether or not they've actually committed a crime (yet).

It raises as many questions as it answers, but it's certainly rather interesting. Let me know your thoughts on this in the comments.


Topics: facial recognition, machine learning, big data

Published at DZone with permission of

