Making AI Facial Recognition Less Racist
See how a study proposes a better way of accurately detecting faces without introducing inherent biases.
AI has a well-documented history of performing poorly, and unevenly, at facial recognition across demographic groups. The scale of the challenge was highlighted by recent work from MIT and Stanford University, which found that three commercially available facial-analysis programs displayed considerable biases along both gender and skin-type lines.
For instance, the programs were nearly always accurate in determining the gender of light-skinned men but had an error rate of over 34 percent when it came to darker-skinned women.
The findings cast additional doubt on how AI systems are trained and how accurate their suggestions actually are. For instance, the developers of one system that was analyzed claimed accuracy rates of 97 percent, but when the training data was examined, 77 percent of the faces were male, and 83 percent of them were white.
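A quick back-of-the-envelope calculation shows why a headline accuracy figure can hide exactly this kind of failure. The subgroup shares and accuracies below are illustrative numbers chosen for the sketch, not figures from the study:

```python
# Illustrative (made-up) subgroup shares and accuracies showing how a
# skewed test set can report a high overall accuracy while performing
# much worse on an under-represented group.
groups = {
    "majority group": (0.83, 0.99),  # (share of dataset, accuracy)
    "minority group": (0.17, 0.80),
}

# Overall accuracy is the share-weighted average of subgroup accuracies,
# so the majority group dominates the headline number.
overall = sum(share * acc for share, acc in groups.values())
print(f"overall accuracy: {overall:.3f}")
```

With an 83/17 split, the aggregate stays above 95 percent even though nearly one in five minority-group faces is misclassified, which is why auditing per-subgroup error rates matters.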
A Better Job
A new study from researchers at MIT CSAIL proposes a better way of accurately detecting faces without introducing inherent biases. The study proposes resampling data autonomously to make it more balanced. The algorithm at the heart of the system can learn both from specific tasks, such as facial recognition, and the very structure of the underlying data. The team believes this enables it to detect any hidden biases and rule them out.
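The resampling idea can be sketched without the full model. In the paper, a learned latent representation is used to estimate how common each example's attributes are, and rare examples are then sampled more often during training. The sketch below is a simplified, hypothetical stand-in: it takes precomputed latent codes, estimates per-dimension density with histograms, and weights each example inversely to that density (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def debias_sampling_weights(latents, n_bins=10, alpha=0.01):
    """Weight each example inversely to its estimated density in latent
    space, so under-represented attribute combinations are sampled more
    often during training."""
    n, d = latents.shape
    density = np.ones(n)
    # Treat latent dimensions as independent and multiply per-dimension
    # histogram densities; alpha smooths empty bins.
    for j in range(d):
        hist, edges = np.histogram(latents[:, j], bins=n_bins, density=True)
        idx = np.clip(np.digitize(latents[:, j], edges[1:-1]), 0, n_bins - 1)
        density *= hist[idx] + alpha
    weights = 1.0 / (density + alpha)
    return weights / weights.sum()

# Toy example: one latent dimension where one cluster is 9x over-represented.
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 0.1, (900, 1)),   # majority cluster
                    rng.normal(3, 0.1, (100, 1))])  # minority cluster
p = debias_sampling_weights(z)
# Each minority-cluster example ends up with a much larger sampling
# probability than each majority-cluster example.
print(p[900:].mean() / p[:900].mean())
```

In the actual system the latent codes come from a variational autoencoder trained jointly with the classifier, so the density estimate, and therefore the resampling, adapts as training proceeds rather than relying on hand-labeled attributes.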
When the system was tested against current state-of-the-art systems, it reduced categorical bias by around 60 percent while maintaining overall precision. This was despite being trained on largely the same dataset as the earlier MIT and Stanford study.
What makes the system notable is that it achieved this entirely autonomously, whereas many of the methods in use today require at least some human input to ensure biases are reduced.
"Facial classification, in particular, is a technology that's often seen as 'solved,' even as it's become clear that the datasets being used often aren't properly vetted," the researchers explain. "Rectifying these issues is especially important as we start to see these kinds of algorithms being used in security, law enforcement, and other domains."
Alexander Amini, one of the paper's authors, says that the team's system would be particularly relevant for larger datasets that are too big to vet manually, and that it extends to other computer vision applications beyond facial detection.
Published at DZone with permission of Adi Gaskell, DZone MVB. See the original article here.