When Algorithmic Bias Turns Deadly
Self-driving car manufacturers have yet another safety hurdle to overcome.
Photo credit: Flickr/smoothgroover22
Machine-learning models are notoriously susceptible to algorithmic bias, particularly when it comes to people of color. Just a few years back, risk-assessment software used by the US criminal justice system was shown to disproportionately flag black defendants as likely to reoffend. Then there was the time Google’s image-recognition system mislabeled photos of African Americans as gorillas.
But now the stakes could be even higher than originally thought. A new report suggests that the object-detection models used by self-driving cars are significantly worse at identifying dark-skinned pedestrians than pedestrians with lighter complexions. The culprit? Algorithmic bias stemming from the underrepresentation of people of color in training data sets.
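This kind of disparity is straightforward to measure once pedestrian annotations carry skin-tone labels (the study grouped pedestrians by skin type): you simply compare how often the detector finds pedestrians in each group. Here is a minimal, hypothetical sketch in Python; the group labels, boxes, and threshold are illustrative assumptions, not the study's actual code or data.

```python
# Hypothetical per-group recall check for a pedestrian detector.
# Group labels, boxes, and the IoU threshold below are illustrative.
from collections import defaultdict

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def recall_by_group(ground_truth, detections, iou_thresh=0.5):
    """ground_truth: list of (box, group) pairs; detections: list of boxes.
    Returns {group: fraction of annotated pedestrians the detector found}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for box, group in ground_truth:
        totals[group] += 1
        if any(iou(box, d) >= iou_thresh for d in detections):
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Toy example: two annotated pedestrians, the detector finds only one.
truth = [((10, 10, 50, 100), "lighter"), ((200, 20, 240, 110), "darker")]
preds = [(12, 11, 49, 98)]  # overlaps the first pedestrian only
print(recall_by_group(truth, preds))  # {'lighter': 1.0, 'darker': 0.0}
```

A persistent gap between the groups in output like this, across a large evaluation set, is exactly the sort of disparity the report describes.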
While the study is certainly no smoking gun for opponents of this controversial technology (it has yet to be peer reviewed, and it tested a stand-in model and data set rather than ones actually used by self-driving vehicle manufacturers), it still brings an important possibility to light.
“In an ideal world, academics would be testing the actual models and training sets used by autonomous car manufacturers. But given those are never made available (a problem in itself), papers like these offer strong insights into very real risks,” tweeted Kate Crawford, a co-director of the AI Now Institute.
For more on the study, as well as a discussion of possible solutions, check out this piece in Vox.