
Study: Twitter Still Inundated With Junk Accounts Despite Recent Purging Efforts


A machine-learning approach could help identify thousands more fraudulent accounts and prevent millions of abusive tweets.




It’s no secret that social media platforms can be fairly easily co-opted for illegal – or at least questionably legal – purposes. With coding knowledge and the right API, pretty much anyone can create an automated bot account to spread the disinformation of their choosing.

But while Twitter has reportedly taken an aggressive approach to shutting down its most severe offenders, its methods leave a lot to be desired, at least according to a University of Iowa research team.

Using their own machine-learning approach, computer science professor Zubair Shafiq and graduate student Shehroze Farooqi believe they have shown conclusively “that many of these abusive apps used for all sorts of nefarious activity remain undetected by Twitter's fraud-detection algorithms, sometimes for months.” And because of this lag time in identification and subsequent deletion, they estimate that tens of millions of damaging tweets are slipping through the cracks.

"[Twitter has] said they’re now taking this problem seriously and implementing a lot of countermeasures,” says Shafiq. “The takeaway is that these countermeasures didn’t have a substantial impact on these applications that are responsible for millions and millions of abusive tweets."

Shafiq and Farooqi detail their machine-learning methodology in the report for their study: “In the offline phase, we train a supervised machine learning classifier that analyzes the first-k tweets of an application to detect abusive applications. More specifically, we extract a variety of user-based and tweet-based features to distinguish between benign and abusive applications. Using a labeled repository of tweets for benign and abusive applications, we then train a supervised machine learning classifier to detect abusive applications. In the online phase, we use the trained supervised machine learning model to detect abusive applications in the wild by analyzing their first-k tweets from Twitter’s streaming API.”
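To make the offline/online split concrete, here is a minimal Python sketch of that kind of pipeline. It is not the authors' implementation: the feature set, the choice of classifier, and the synthetic training data below are all illustrative assumptions standing in for the labeled tweet repository and features described in the paper.

```python
# A minimal sketch of a two-phase "first-k tweets" classifier, not the study's
# exact code. Features, classifier, and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

K = 5  # number of first-k tweets inspected per application (assumed value)

def extract_features(first_k_tweets):
    """Build one feature vector per application from its first-k tweets.

    Hypothetical tweet-based/user-based features: mean tweet length,
    fraction of tweets containing URLs, mean account age in days, and
    mean follower-to-friend ratio of the posting accounts.
    """
    return [
        np.mean([len(t["text"]) for t in first_k_tweets]),
        np.mean([1.0 if t["urls"] else 0.0 for t in first_k_tweets]),
        np.mean([t["account_age_days"] for t in first_k_tweets]),
        np.mean([t["followers"] / max(t["friends"], 1) for t in first_k_tweets]),
    ]

# --- Offline phase: train on a labeled repository of applications ---
# Synthetic stand-in: 200 applications x 4 features, label 1 = abusive.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# --- Online phase: score a newly observed application's first-k tweets ---
def score_application(first_k_tweets):
    """Return 1 if the application's first-k tweets look abusive."""
    return int(clf.predict([extract_features(first_k_tweets)])[0])
```

In the real system, the online phase would feed tweets collected from Twitter's streaming API into `score_application` as soon as an application has posted its first k tweets, so abusive apps can be flagged early rather than months later.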

Although Shafiq's team has been sharing its results with Twitter, including a track record of identifying 93 percent of the accounts Twitter itself ultimately shut down after analyzing just a few of their tweets, the company has yet to respond to the researchers about their methodology.

"Research based solely on publicly available information about accounts and tweets on Twitter often cannot paint an accurate or complete picture of the steps we take to enforce our developer policies," said a Twitter spokesperson.

Although Shafiq's team and other researchers attest to the soundness of the study's findings, Twitter continues to keep its cards close to its chest, much as it does with its daily active user numbers.

(via Wired)


Topics:
machine learning, twitter api, security, twitter, fake accounts

