When Algorithmic Bias Turns Deadly

Self-driving car manufacturers have yet another safety hurdle to overcome.

Google self-driving car. Photo credit: Flickr/smoothgroover22

Machine-learning models are notoriously susceptible to algorithmic bias, particularly when it comes to people of color. Just a few years back, risk-assessment software used by the US criminal justice system was shown to disproportionately flag black defendants as likely to reoffend. Then there was the time Google's image-recognition system mislabeled photos of African Americans as gorillas.

But the problem may be even bigger than originally thought. A new report suggests that the object-detection models used by self-driving cars are significantly worse at identifying dark-skinned pedestrians than pedestrians with lighter complexions. The culprit? Algorithmic bias stemming from the underrepresentation of people of color in training data sets.
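
Concretely, the kind of gap the report describes surfaces when a detector's recall is measured separately for each skin-tone group instead of in aggregate. Below is a minimal, purely illustrative Python sketch of that disaggregated check; the function, group labels, and numbers are invented for this example and are not taken from the study's code or data.

from collections import defaultdict

def recall_by_group(ground_truth, detected_ids):
    """ground_truth: iterable of (pedestrian_id, group) pairs.
    detected_ids: set of pedestrian_ids the model actually detected."""
    found, total = defaultdict(int), defaultdict(int)
    for pid, group in ground_truth:
        total[group] += 1
        found[group] += pid in detected_ids  # True counts as 1
    # Per-group recall: the fraction of labeled pedestrians the model found
    return {g: found[g] / total[g] for g in total}

# Invented example data, not figures from the report:
labels = [(1, "lighter"), (2, "lighter"), (3, "lighter"),
          (4, "darker"), (5, "darker"), (6, "darker")]
detections = {1, 2, 3, 4, 5}  # the model missed pedestrian 6
print(recall_by_group(labels, detections))
# {'lighter': 1.0, 'darker': 0.6666666666666666}

If recall for one group is consistently lower, and the training set turns out to contain far fewer examples of that group, that is exactly the underrepresentation problem the researchers point to.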

While the study is certainly no smoking gun for opponents of this controversial technology (it has yet to be peer reviewed, and it used an approximated model and data set rather than ones currently in use by self-driving vehicle manufacturers), it still brings to light an important possibility.

“In an ideal world, academics would be testing the actual models and training sets used by autonomous car manufacturers. But given those are never made available (a problem in itself), papers like these offer strong insights into very real risks,” tweeted Kate Crawford, a co-director of the AI Now Research Institute.

For more info on the study as well as a discussion of possible solutions, check out this piece in Vox.

Topics:
self-driving cars, bias in AI, machine learning models, data sets

