Facial Verification: Robust or Hackable?
Using your face to unlock your devices and access your accounts seems like a great idea. That is until you find out how hackable it is.
Facial detection is everywhere. I'm pretty sure every camera on every modern phone will automatically find faces and outline them in the viewfinder for you. And, of course, social sites like Facebook are "helping" us by recognizing and identifying the faces of the friends we share. The viewfinder help is not particularly risky, even if it does occasionally classify a teddy bear or a Cabbage Patch doll as a human face. And while the Facebook (DeepFace) "help" can seem creepy and intrusive, it is usually not dangerous (unless some of your friends have FBI Most Wanted posters in their feed and the algorithm starts labeling you as a criminal).
But after detection and then recognition, the natural progression leads to verification. At first glance, it seems like a really great idea. In fact, I'm sure some of you reading this already have some kind of "trusted face" software installed on your phone. Now, before you get worried, this is probably not very risky. To fool the algorithm, a hacker would have to intercept the link between the camera and the data bus in order to insert a fake image. And to do that, the hacker would already have to have hacked into the phone with a level of access that would let them do whatever they wanted anyway: game over.
But what about services that want to use facial verification where the image is transmitted to a remote server (e.g., your bank or your doctor's office)? There it becomes much easier to spoof the input image. On the other hand, the server side has more computing power and can use more sophisticated algorithms (e.g., Joint Bayesian, transfer learning, DeepID, etc.) to confirm the veracity of the input. For instance, some of these algorithms can take advantage of subtle shifts. Subtle shifts in perspective should not greatly affect the algorithm's ability to verify a face, but if there were no shifts at all between several frames, the spoofed input could be detected as a still image and therefore rejected. Even more sophisticated algorithms being developed in the lab observe small facial movements: they look for motion of the lips, eyebrows, eyelids, etc., in combination with the expected small head tilts.
This is where some basic video game character rendering skills come into play. Researchers at the University of North Carolina have developed a virtual reality-based approach to modeling a face and all its subtle movements well enough to fool these algorithms. The paper "Virtual U: Defeating Face Liveness Detection by Building Virtual Models from Your Public Photos" describes techniques that use multiple still photos of the person's face to create a 3-D model, then stretch a good-quality, front-facing picture over that model as a texture map. The process looks like this (from one of their presentation slides):
Once you have the 3-D model and the key facial landmarks it becomes straightforward to introduce natural small stochastic movements of the whole head as well as individual landmarks. It's not much harder to have routines that generate smiles, frowns, grimaces, or puzzlement using the landmarks.
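To make the idea of "natural small stochastic movements" concrete, here is a toy sketch of how one might animate a set of 2-D facial landmarks. This is my own illustration, not code from the Virtual U paper: a shared random-walk offset stands in for whole-head drift, and independent per-landmark jitter stands in for lips, eyebrows, and eyelids moving on their own.

```python
import numpy as np

rng = np.random.default_rng(42)

def animate_landmarks(landmarks, n_frames=30, head_sigma=0.5, local_sigma=0.2):
    """Generate n_frames of subtly moving landmark positions.

    landmarks: (N, 2) array of x, y positions.
    head_sigma: step size of the random walk shared by all landmarks
                (models small whole-head tilts and drift).
    local_sigma: per-landmark, per-frame jitter (models individual
                 features such as eyelids moving independently).
    Returns an (n_frames, N, 2) array of animated positions."""
    frames = np.empty((n_frames,) + landmarks.shape)
    head_offset = np.zeros(2)
    for t in range(n_frames):
        head_offset += rng.normal(0, head_sigma, 2)        # whole-head drift
        jitter = rng.normal(0, local_sigma, landmarks.shape)  # local motion
        frames[t] = landmarks + head_offset + jitter
    return frames

base = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # toy 3-point "face"
anim = animate_landmarks(base)
```

Scripted expressions (smiles, frowns) would replace the random jitter with deliberate trajectories for the relevant landmarks, but even this crude randomness is enough to defeat a detector that only checks whether anything moves at all.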
One thing the group discovered was that the better the source picture, the more successful they were at spoofing the verification algorithm. In fact, with one high-quality photograph taken indoors with good lighting, they could fool most of the algorithms they tested nearly all the time. But when they used lower-quality pictures, such as the selfies usually available on social sites, the success rate dropped into the 80% range. (For more details, see the presentation slides.)
Sadly, the people most vulnerable to this kind of hack are the people with the most and best quality pictures and videos online. In addition, those people are the most likely targets of such hacks. So, most vulnerable individuals mixed with most interesting targets creates a perfect storm. It seems very unlikely that facial verification will be a secure access method for anyone with multiple pictures on the Internet.
Wait, that's just about everybody, isn't it?