What AI Can Teach Us About Stereotypes

This research underlines the potential for artificial intelligence to provide us with greater insight into what biases exist in society.

By Adi Gaskell · May 16, 2018 · Analysis

One of the main concerns with AI technologies today is the fear that they will propagate the various biases we already have in society. A recent Stanford study turned things around, however, and highlighted how AI can also turn the mirror onto society and shed light on the biases that exist within it.

The study used word embeddings to map relationships and associations between words and, through them, to measure changes in gender and ethnic stereotypes in the United States over the last century. The algorithms were fed text from a huge canon of books, newspapers, and other sources, and the patterns found there were compared with official census demographic data and with societal changes, such as the women's movement.

"Word embeddings can be used as a microscope to study historical changes in stereotypes in our society," the authors say. "Our prior research has shown that embeddings effectively capture existing stereotypes and that those biases can be systematically removed. But we think that, instead of removing those stereotypes, we can also use embeddings as a historical lens for quantitative, linguistic and sociological analyses of biases."

Dissecting Society

The researchers used embeddings to single out specific occupations and adjectives that tended to be biased toward women or toward particular ethnic groups in each decade from 1900 to the present day. These embeddings were trained on newspaper articles, and the team also drew on the work of fellow Stanford researchers who had trained embeddings on large text datasets, such as the American books scanned into Google Books.

The biases located by the embeddings were then compared to the demographic changes identified in each official census undertaken during the period.
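The kind of measurement described above can be sketched in a few lines: score an adjective by how much closer its vector sits to one group of words than to another. This is a minimal illustration, not the authors' actual pipeline; the tiny 3-dimensional vectors below are invented for the example, whereas a real analysis would load embeddings trained on each decade's corpus.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def bias_score(adjective_vec, group_a_vecs, group_b_vecs):
    """Positive score: the adjective sits closer to group A's centroid
    than to group B's; negative means the reverse."""
    centroid_a = np.mean(group_a_vecs, axis=0)
    centroid_b = np.mean(group_b_vecs, axis=0)
    return cosine(adjective_vec, centroid_a) - cosine(adjective_vec, centroid_b)

# Toy 3-d vectors, invented for illustration only.
vectors = {
    "he":      np.array([0.9, 0.1, 0.0]),
    "him":     np.array([0.8, 0.2, 0.1]),
    "she":     np.array([0.1, 0.9, 0.0]),
    "her":     np.array([0.2, 0.8, 0.1]),
    "logical": np.array([0.7, 0.3, 0.2]),  # deliberately nearer the male cluster
}

male = [vectors["he"], vectors["him"]]
female = [vectors["she"], vectors["her"]]

score = bias_score(vectors["logical"], male, female)
print(f"gender bias of 'logical': {score:+.3f}")  # positive => male-leaning here
```

Repeating this for every adjective, decade by decade, yields the kind of trend lines the study reports: if the score for "logical" shrinks toward zero across decades, the gendered association is narrowing.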

The analysis found a clear shift in how gender was portrayed throughout the 20th century, with things generally changing for the better during that time.

For instance, adjectives such as "intelligent" and "logical" would more often be associated with men in the first half of the 20th century, but this gap narrowed considerably (although it still remains) as we came closer to the present day.

There was also a shift in attitudes towards Asians and Asian Americans. In the early part of the 20th century, adjectives like "barbaric" and "cruel" were commonly used to describe people with Asian surnames, but by the end of the century the picture had changed markedly. By the 1990s, the most common adjectives were "passive" and "sensitive."

"The starkness of the change in stereotypes stood out to me," the authors say. "When you study history, you learn about propaganda campaigns and these outdated views of foreign groups. But how much the literature produced at the time reflected those stereotypes was hard to appreciate."

The work underlines the potential for AI to provide us with greater insight into the biases that exist in society, although it is still some way from being able to detect the biases inherent in its own workings. Hopefully, that will be a subject for future research.


Published at DZone with permission of Adi Gaskell, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
