Using AI to Spot Hate Speech

Researchers have developed a system that can easily be integrated into social media to automatically detect potential incidents of antisocial behavior.

· Big Data Zone ·
A few years ago, researchers at the University of Sussex experimented with the use of AI and social media data to detect hate crime. The university joined a consortium with London’s police force, the Demos think tank, and software companies CASM Consulting and Palantir on the project.

“There is a vast amount of data on social media sites and in police crime databases. Much of this data comes in the form of natural language, and without the help of text analysis technology it’s difficult to translate this into useful insights,” the team said at the time.

It’s an approach that was adopted by a team from the University of Eastern Finland, who in a recent study developed a machine learning model to detect antisocial behavior in a piece of text.

Spotting Antisocial Behavior

The approach uses natural language processing techniques to detect, and thereby help combat, antisocial behavior in written communication.

The researchers developed a system that can easily be integrated into social media sites and other online communities, allowing staff to automatically detect potential incidents of antisocial behavior with a high degree of accuracy.
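The paper doesn't describe the integration mechanics, but a platform hook of this kind is conceptually simple: each new post passes through a classifier, and suspect posts are routed to a human review queue rather than being removed outright. The sketch below is a hypothetical illustration; `ModerationQueue`, `make_post_hook`, and the toy keyword classifier are inventions for this example, not part of the researchers' system.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ModerationQueue:
    """Collects posts a classifier has flagged for human review."""
    flagged: List[str] = field(default_factory=list)

    def submit(self, post: str) -> None:
        self.flagged.append(post)


def make_post_hook(classify: Callable[[str], bool],
                   queue: ModerationQueue) -> Callable[[str], str]:
    """Wrap a platform's publish path: classify each post, hold suspects."""
    def on_new_post(post: str) -> str:
        if classify(post):
            queue.submit(post)        # route to staff review, not auto-removal
            return "held-for-review"
        return "published"
    return on_new_post


# Toy stand-in classifier; a real deployment would call the trained model.
queue = ModerationQueue()
hook = make_post_hook(lambda text: "hate" in text.lower(), queue)
print(hook("I hate this group"))   # held-for-review
print(hook("What a lovely day"))   # published
```

Keeping the classifier behind a callable makes it swappable: the platform code never needs to know whether a keyword filter or a trained model is doing the flagging.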

Detecting antisocial behavior is challenging, as what counts as antisocial behavior often varies significantly between environments. The team monitored large quantities of written text to determine the linguistic features that typically characterize such language, including its emotional features.
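To make the idea of linguistic and emotional features concrete, here is a minimal sketch of lexicon-based feature extraction. The tiny word lists and the scoring threshold are illustrative assumptions only; the researchers' actual emotion resource is far larger and their model is trained rather than hand-weighted.

```python
import re
from collections import Counter

# Hypothetical mini-lexicons; real systems use large, curated emotion resources.
HOSTILE_TERMS = {"hate", "stupid", "idiot", "disgusting"}
ANGER_TERMS = {"furious", "rage", "angry"}


def extract_features(text: str) -> dict:
    """Derive simple linguistic/emotional features from a message."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    n = max(len(tokens), 1)  # avoid division by zero on empty input
    return {
        "hostile_ratio": sum(counts[t] for t in HOSTILE_TERMS) / n,
        "anger_ratio": sum(counts[t] for t in ANGER_TERMS) / n,
        # Shouting (long all-caps words) is a crude emotional-intensity cue.
        "shouting": sum(1 for w in text.split() if w.isupper() and len(w) > 2) / n,
    }


def flag_antisocial(text: str, threshold: float = 0.1) -> bool:
    """Flag a message for human review when hostile cues dominate."""
    feats = extract_features(text)
    score = feats["hostile_ratio"] + feats["anger_ratio"] + 0.5 * feats["shouting"]
    return score >= threshold


print(flag_antisocial("You are a stupid idiot and I hate you"))  # True
print(flag_antisocial("Have a lovely day everyone"))             # False
```

In practice these hand-picked ratios would be replaced by features learned from an annotated corpus, but the pipeline shape (tokenize, score against an emotion resource, threshold or classify) is the same.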

The emotion expressed in antisocial texts is central to the project. The team developed a novel resource for the analysis of emotion, along with a new corpus of antisocial behavior texts, which they believe will allow for deeper insight into the way antisocial behavior is expressed in writing.

Suffice it to say, such technologies will be mainly useful to owners of large online communities that cannot police their content manually, yet nonetheless wish to ensure their platforms aren’t used to propagate hate speech. As the volume of content online rises, it seems inevitable that such automated tools will become commonplace.


big data, artificial intelligence, data analytics, hate speech

Opinions expressed by DZone contributors are their own.
