
Making AI Facial Recognition Less Racist


See how a study proposes a better way of accurately detecting faces without introducing inherent biases.



AI has famously been rather poor at recognizing faces in a non-racist way. The scale of the challenge was highlighted by recent work from MIT and Stanford University, which found that three commercially available facial-analysis programs displayed considerable gender and skin-type biases.


For instance, the programs were nearly always accurate in determining the gender of light-skinned men but had an error rate of over 34 percent for darker-skinned women.

The findings cast additional doubt on how AI systems are trained and how accurate their suggestions actually are. For instance, the developers of one system that was analyzed claimed accuracy rates of 97 percent, but when the training data was examined, 77 percent of the faces were male, and 83 percent of them were white.

A Better Job

A new study from researchers at MIT CSAIL proposes a better way of accurately detecting faces without introducing inherent biases. Their approach resamples training data autonomously to make it more balanced. The algorithm at the heart of the system learns both from the specific task at hand, such as facial recognition, and from the underlying structure of the data itself. The team believes this enables it to detect hidden biases and counteract them.
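The core idea, weighting rare examples more heavily so a resampled training set is more balanced, can be illustrated with a much simpler sketch than the paper's actual method (which learns latent features with a variational autoencoder). The `debias_weights` helper, the string-valued attribute bins, and the toy 80/20 split below are illustrative assumptions, not the authors' code.

```python
import random
from collections import Counter

def debias_weights(latent_bins, smoothing=1e-3):
    """Weight each sample inversely to how common its latent bin is,
    so a weighted resample draws rare bins about as often as common ones."""
    counts = Counter(latent_bins)
    n = len(latent_bins)
    weights = [1.0 / (counts[b] / n + smoothing) for b in latent_bins]
    total = sum(weights)
    return [w / total for w in weights]  # normalize to sum to 1

# Toy dataset: a skewed latent attribute (e.g., a skin-tone bin)
bins = ["light"] * 80 + ["dark"] * 20
w = debias_weights(bins)

# Rare-bin samples get larger weights, so the weighted resample
# below is far more balanced than the original 80/20 split.
random.seed(0)
resampled = random.choices(bins, weights=w, k=1000)
```

In the actual paper the bins are not hand-labeled attributes: the latent features are learned jointly with the classifier, which is what lets the system debias datasets too large to annotate by hand.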

When the system was tested against current state-of-the-art systems, it managed to reduce categorical bias by around 60 percent, while maintaining the overall precision of the system. This was despite the system being trained on largely the same dataset as that of the research from last year.

The system is interesting, as it was able to do this completely autonomously, whereas many of the methods used today require at least a small amount of human input to ensure biases are reduced.

"Facial classification, in particular, is a technology that's often seen as 'solved,' even as it's become clear that the datasets being used often aren't properly vetted," the researchers explain "Rectifying these issues is especially important as we start to see these kinds of algorithms being used in security, law enforcement, and other domains."

Lead author Amini says that the team's system is particularly relevant for large datasets that are too big to vet manually, and that the approach extends to computer vision applications beyond facial detection.



Published at DZone with permission of

