
Is Predictive Policing as Biased as We Fear?


Fears persist that AI algorithms hard-code the racial biases that already exist in policing today. Researchers worked with the LAPD to examine this.


AI is increasingly being used to guide and inform policing, whether it's to predict recidivism rates or to advise police forces on the most effective ways to deploy their resources. Fears persist, however, that these algorithms hard-code the various racial biases that already exist in policing today.

It's a fear that a recent study from UCLA and Louisiana State University suggests is overblown. The study, which is believed to be the first to use real-time field data from a deployment of predictive policing in Los Angeles, found no such bias in the resulting arrests.

"Predictive policing is still a fairly new field. There have been several field trials of predictive policing where the crime rate reduction was measured, but there have been no empirical field trials to date looking at whether these algorithms, when deployed, target certain racial groups more than others and lead to biased stops or arrests," the authors say.

Predicting Bias

The researchers worked with the LAPD to conduct a randomized control trial in which a human analyst would instruct one set of officers to patrol particular beats on some days, while an algorithm would do likewise on others. Whether the human or the machine did the deciding was randomly allocated each day. The arrest data were then analyzed to see if any differences emerged, both in the ethnic groups of the people arrested by officers on patrol and in the locations of the arrests, to check whether ethnic minority neighborhoods were being targeted more or less.
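The assignment described above amounts to a coin flip per day. Here is a minimal sketch of that kind of daily randomization in Python; all names are hypothetical and the study's actual protocol is not reproduced.

import random

# Each day, randomly decide whether the human analyst or the predictive
# algorithm chooses the patrol beats. Hypothetical illustration only.
def assign_condition(rng=random):
    return rng.choice(["human_analyst", "algorithm"])

# A month of hypothetical daily assignments, keyed by day number.
daily_assignments = {day: assign_condition() for day in range(1, 31)}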

"When we looked at the data, the differences in arrest rates by ethnic group between predictive policing and standard patrol practices were not statistically significant," the authors say.

The team trawled through the data both at the district level and within the LAPD officers' individual patrol areas, and no statistically significant difference in arrest rates emerged at either level. Arrest rates were higher across the board in the areas selected by the algorithm, however, suggesting that the algorithm was better at allocating officers to areas with higher crime rates.

"The higher crime rate, and proportionally higher arrest rate, is what you would expect since the algorithm is designed to identify areas with high crime rates, " the researchers explain.

Suffice it to say, we are still at an early stage in the deployment of predictive policing algorithms, so we should be wary of reading too much into these findings. Indeed, the authors themselves concede that more work needs to be done and that lessons must be learned from each deployment of the technology. This study is a welcome addition to that canon, however.


Topics:
ai, bias, ethics, predictive analytics, algorithm
