AI and Its Discriminating Algorithms
Learn how AI may be discriminating against certain groups of people, and why you should understand these algorithms before relying on them.
Artificial Intelligence has taken the industry by storm, transforming the face of the tech world with its touch. As it paves its way into diverse industries, it shapes the latest trends and stirs up complexities that put immense pressure on marketers, developers, and creative artists.
However, some shocking reports about AI algorithms have surfaced in the industry, sparking conflicting opinions and judgments among the tech giants: AI algorithms have reportedly been observed producing racist and otherwise biased discrimination.
AI-driven algorithms influence our lives in countless ways, from technology that warns you of trouble lurking nearby while you drive, to voice recognition that attends to your every need without you having to lift a finger. Thanks to its vast reach and considerable benefits, AI helps designers create incredible work and helps developers produce devices that can act like a human brain. Today, AI bots serve as a major source of customer support and even compose legitimate articles for sites like Wikipedia.
When we consider the wide array of advantages and contributions Artificial Intelligence renders to the world and its people, it is hard to believe that it can also produce such serious discrimination. Yet on several major occasions, AI-driven bots have been seen to act in discriminatory ways. Learn about them in detail below.
Hiring of Applicants
Facial recognition software has become a topic of concern for spreading gender and racial discrimination. According to research from an MIT study, such systems perform less accurately when recognizing the faces of people with darker skin tones. Meanwhile, Amazon discarded its AI recruitment system upon discovering that it was not rating candidates in a gender-neutral manner, rather than acknowledging their expertise alone. As these issues created a buzz in the tech industry, many leading experts put forward their own observations and research on the subject. Experts like Cathy O'Neil and Virginia Eubanks launched their bestselling books, Weapons of Math Destruction and Automating Inequality respectively, which stirred the industry with striking findings.
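To see how this happens, here is a toy sketch (not Amazon's actual system, which has never been published) of how a naive model trained on biased historical hiring data reproduces that bias. The resumes, words, and scoring rule below are all invented for illustration: because a word like "women's" appears only among past rejections, any resume containing it scores lower, regardless of qualifications.

```python
# Illustrative only: a naive word-frequency "hiring score" trained on
# biased historical decisions. All data here is fabricated.
from collections import Counter

past_hires = [
    "software engineer java leadership",
    "java developer captain chess club",
]
past_rejections = [
    "software engineer women's chess club captain",
    "java developer women's coding society",
]

def word_score(resume, hired, rejected):
    """Score a resume: frequency of its words among past hires
    minus their frequency among past rejections."""
    hire_counts = Counter(w for r in hired for w in r.split())
    reject_counts = Counter(w for r in rejected for w in r.split())
    return sum(hire_counts[w] - reject_counts[w] for w in resume.split())

# Two equally qualified resumes differing by a single word:
a = word_score("java developer chess club captain",
               past_hires, past_rejections)
b = word_score("java developer women's chess club captain",
               past_hires, past_rejections)
print(a, b)  # the resume containing "women's" scores strictly lower
```

The model never sees a "gender" field, yet it penalizes a proxy word purely because the historical decisions it learned from were biased.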
PredPol, a system that identifies places with higher crime rates, is used in several US states and was put into action to reduce human bias. However, in 2016, the Human Rights Data Analysis Group found that the software could lead police to unfairly target certain neighborhoods. When researchers ran drug offenses through PredPol, it repeatedly sent police to neighborhoods with large racial-minority populations instead of leading them to the places with the highest true crime rates.
Adding further support to the point, a 2012 study by the Institute of Electrical and Electronics Engineers (IEEE) reported that, as part of police surveillance operations, cameras relying on facial recognition identified suspects with only 5-10% accuracy, and were least accurate when identifying African Americans. This research clearly shows significant discrimination against innocent Black citizens.
Robo-Debt Recovery Program
The recent case of Centrelink's debt-recovery software threw light on serious accusations of AI contributing to human bias. For more than 18 months, people were incorrectly targeted: over 20,000 welfare recipients were charged with debts and later found to owe little or no money at all. Serious flaws in the system triggered this enormous issue.
Poorly Written Algorithms Spread Human Bias
A couple of recent incidents have highlighted how poorly written AI algorithms can be. Facebook may not discriminate among its 1.3 billion members when they sign up; however, in February 2018, Facebook allowed marketers to upload targeted ads with criteria determining which kinds of people could view them, including distinctions of race, sexual orientation, religion, and ethnicity.
Moreover, on another occasion, the story was reinforced by Microsoft's Tay, an AI chatbot programmed to learn from the people who tweeted at it by imitating human conversations. Things went wrong when Tay turned into "a foul-mouthed Hitler-loving bot" thanks to a flood of malicious trolls.
Furthermore, back in 2015, a study indicated that a Google Images search for "CEO" showed women in only 11% of results, whereas women actually held 27% of chief executive positions in the US. Shortly afterwards, another study showed that ads for high-paying jobs were served predominantly to male users. Anupam Datta presented this study at Carnegie Mellon University in Pittsburgh. According to Datta, Google's algorithms, learning from users' behavior, determine men to be the best-suited applicants for high-paying jobs and leading positions.
The Failure of 2016 Beauty Contest Judged by AI-Bots
In March 2016, a mega beauty contest was organized in which over 600,000 men and women from India and Africa participated. The participants were asked to send selfies, and three robots were to judge the event. The bots were programmed to compare the participants' facial features with those of actors and models. Although more than 50,000 entries were collected, when the results were announced it was observed that nearly all of the 44 winners were white. Examining the situation, researchers at Youth Lab stated that the bots rejected images of dark-skinned people, treating darker skin tones as poor lighting: they had not been fed algorithms able to judge darker skin tones.
Delve deeper into understanding the algorithms before you plan to rely blindly on machine learning systems. They can prove to be truly beneficial, and at the same time become a cause of spreading human bias.
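One concrete way to act on this advice is to audit a system's decisions for group-level disparities before trusting it. The sketch below is a minimal illustration, not a complete fairness audit: it computes per-group selection rates and applies the "four-fifths rule" heuristic used in US employment law, under which a selection rate below 80% of the highest group's rate warrants closer scrutiny. The group labels and decisions are invented for the example.

```python
# A minimal disparate-impact audit: compare selection rates across groups.
# All decision data here is fabricated for illustration.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Group A: 8 of 10 selected; Group B: 4 of 10 selected.
decisions = ([("A", True)] * 8 + [("A", False)] * 2
             + [("B", True)] * 4 + [("B", False)] * 6)

ratio = disparate_impact(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.4 / 0.8 = 0.50
if ratio < 0.8:
    print("warning: potential adverse impact; audit the model")
```

A check like this says nothing about *why* the disparity exists, but it flags systems, like the hiring, policing, and contest-judging examples above, that deserve scrutiny before deployment.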