Some faces are so iconic that we could probably recognize them with amazingly few pixels. In fact, Salvador Dalí used this to advantage in one of his paintings.
If you're not familiar with the painting, just squint and you should recognize the face. If you can't, there's a hint down by the woman's feet. (For fun, here are some more pixelated faces: site 1, site 2.)
Except for the poor souls who suffer from prosopagnosia (face blindness), humans can generally discriminate between many thousands of different faces and can easily identify many hundreds of them. Computer algorithms, like people, are also very good at facial verification: "What is the likelihood that this is an image of person X?" In TV police and detective stories, people are routinely asked to perform this task in the morgue, in a police lineup, or just looking at mugshots. But how big a pool of faces is the witness using for the comparison? If you looked at 1 million mugshots, could you then be presented with random people and determine whether each was one of the mugshots or not?
There's a new machine learning challenge called the MegaFace Challenge that uses a one-million-face corpus. The idea is to push the accuracy of current facial recognition systems by testing them at much larger scale. One of the most popular recent benchmarks, Labeled Faces in the Wild (LFW), created in 2007, involves only about 13,000 faces. Many current facial recognition systems achieve better than 90% accuracy on this data set.
Ira Kemelmacher-Shlizerman, an assistant professor of computer science at the University of Washington in Seattle, is the principal investigator behind MegaFace. The main concept behind the project is that "algorithms should be evaluated at large scale," says Kemelmacher-Shlizerman, "and we make a number of discoveries that are only possible when evaluating at scale. ... The big disadvantage is that [the facial recognition field] is saturated, i.e. there are many, many algorithms that perform above 95 percent on LFW. ... This gives the impression that face recognition is solved and working perfectly."
However, when presented with the MegaFace million-face data set, accuracy drops dramatically: the best algorithms fall into the 70% range, and the worst drop as low as 30%. Another reason to move up to this larger data set is that it rules out much of the hand tuning that goes into the existing LFW benchmark. The high success rates of many facial recognition systems often reflect a lot of hand tweaking of proprietary algorithms. Clearly, that sort of tweaking will not scale.
The MegaFace Challenge tests both ends of the facial recognition task. One part is verification, where the algorithm must correctly determine whether two pictures show the same person or two different people. The other part is identification: given one image of a person, identify that person within a pool of 1 million facial images (note: the pool does not contain the exact input image, only another image of the same person). A paper giving a full analysis of the results was presented on June 30, 2016, at the IEEE Conference on Computer Vision and Pattern Recognition.
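To make the distinction between the two tasks concrete, here is a minimal sketch (not the actual MegaFace protocol, and the function names and threshold are illustrative). It assumes each face image has already been mapped to a feature vector (an "embedding") by some recognition model; verification then becomes a similarity test between two embeddings, and identification a nearest-neighbor search over a gallery of embeddings.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embeddings, in roughly [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(emb1, emb2, threshold=0.8):
    """Verification: do these two images show the same person?
    The threshold here is an illustrative placeholder; real systems
    tune it on held-out data."""
    return cosine_similarity(emb1, emb2) >= threshold

def identify(probe, gallery):
    """Identification: return the index of the gallery embedding
    most similar to the probe (the rank-1 match)."""
    scores = [cosine_similarity(probe, g) for g in gallery]
    return int(np.argmax(scores))
```

In a MegaFace-style evaluation the gallery would hold a million "distractor" embeddings plus one other image of the probe's identity; the hard part, as the results above show, is keeping the rank-1 match correct as the distractor count grows.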
Aside from the issues you might expect to interfere with recognition (e.g., lighting, head orientation, expressions), the paper includes some insights into comparing faces of the same individual across small and even moderately large age gaps. Also, all of the algorithms had more trouble with the faces of children, most likely because children made up only a small percentage of the training data for the algorithms tested.
I've spent a large part of my career working with speech recognition technology, and until recently, recognizing the speech of children has not worked well. In fact, in the early days (the late 1960s and 1970s), speech recognition only worked well for middle-aged, Caucasian, academic, Midwestern (TV newscaster) accents. Most of the researchers fit this profile, and most of the time they personally created their own corpus: a decidedly biased sample that has subsequently been corrected with the large-scale collection of utterances by groups such as Google, Nuance, AT&T, and others.
Because a large number of these facial recognition systems are commercial and proprietary, it is difficult to merge and refine the underlying algorithms. Even the training data these groups use is usually not shared. Kemelmacher-Shlizerman stated, "It's a huge problem because open research and competition cannot be done if researchers cannot train their algorithm on similar data as some companies. There is no opportunity to come up with better techniques."
But there is some hope: the researchers at the University of Washington have already created a loose federation of more than 300 research groups. As this project grows, we can expect ongoing results from the MegaFace Challenge. Stay tuned!
Someday soon, facial recognition trackers will be as ubiquitous as automobile license plate readers are today. (Maybe I should buy some Groucho Marx glasses for my travels?)