A new system is coming after Twitter bots

The amount of fakery on Twitter is pretty well known by now, with an industry worth several million dollars helping to boost the egos, and follower lists, of people around the world.  Indeed, some research went as far as to suggest that 50% of the followers of big brand accounts were fake.  There have been various attempts to measure how many fake followers an account has, with Status People being perhaps the best known.

Researchers from Indiana University believe they’ve built a tool that can do a better job of determining whether a Twitter account is real or not.  The tool, which they’ve called BotOrNot, examines over 1,000 features drawn from an account’s content, its friendship network and even temporal information, and uses this data to calculate whether the account is real or not.

The tool has been funded by the National Science Foundation and the US military, which believe it will yield valuable insights into the way information flows through complex networks, and in particular how deception can occur in the digital world.

“We have applied a statistical learning framework to analyze Twitter data, but the ‘secret sauce’ is in the set of more than one thousand predictive features able to discriminate between human users and social bots, based on content and timing of their tweets, and the structure of their networks,” said Alessandro Flammini, an associate professor of informatics and principal investigator on the project. “The demo that we’ve made available illustrates some of these features and how they contribute to the overall ‘bot or not’ score of a Twitter account.”

The researchers used these features, together with a set of known bot accounts created by Texas A&M University, to train the tool to distinguish bots from real users.  They believe it can now do so with 95% accuracy.
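For readers curious what this kind of supervised approach looks like in practice, here is a minimal sketch in Python. It is not the BotOrNot pipeline itself; the handful of made-up features and the random-forest classifier are illustrative assumptions, standing in for the 1,000+ content, timing and network features the researchers describe, and the synthetic data stands in for labelled bot and human accounts.

```python
# Illustrative sketch only -- NOT the BotOrNot implementation.
# A few hypothetical features stand in for the ~1,000 content, timing
# and network features described in the article.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def fake_account_features(n, is_bot):
    """Synthetic feature vectors: tweets/day, follower/friend ratio,
    mean seconds between tweets, fraction of tweets containing links."""
    if is_bot:
        tweets_per_day = rng.normal(80, 20, n)
        follower_ratio = rng.normal(0.2, 0.1, n)
        mean_gap_secs = rng.normal(600, 100, n)
        link_fraction = rng.normal(0.8, 0.1, n)
    else:
        tweets_per_day = rng.normal(10, 5, n)
        follower_ratio = rng.normal(1.0, 0.4, n)
        mean_gap_secs = rng.normal(8000, 3000, n)
        link_fraction = rng.normal(0.3, 0.15, n)
    return np.column_stack([tweets_per_day, follower_ratio,
                            mean_gap_secs, link_fraction])

# Labelled training data: 1 = bot, 0 = human (a synthetic stand-in for
# the known bot accounts and known human accounts used in training).
X = np.vstack([fake_account_features(500, True),
               fake_account_features(500, False)])
y = np.array([1] * 500 + [0] * 500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
# The class probability plays the same role as a 'bot score':
print("bot probability for first test account:",
      clf.predict_proba(X_test[:1])[0, 1])
```

The probability output in the last line is analogous to the overall ‘bot or not’ score the demo reports for a Twitter account.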

“Part of the motivation of our research is that we don’t really know how bad the problem is in quantitative terms,” said Fil Menczer, the informatics and computer science professor who directs IU’s Center for Complex Networks and Systems Research, where the new work is being conducted as part of the information diffusion research project called Truthy. “Are there thousands of social bots? Millions? We know there are lots of bots out there, and many are totally benign. But we also found examples of nasty bots used to mislead, exploit and manipulate discourse with rumors, spam, malware, misinformation, political astroturf and slander.”

The application was developed primarily to test for the subversive role such bots can play in civic life, whether it’s distorting election chatter or causing panic during an emergency.  The more that is known about the spread of misinformation, the better equipped people will be to counter it.

Suffice to say, however, that many Twitter users will want to test their own accounts with the tool.  As an experiment of my own, I ran some of the prominent Twitter users from the recently published Triberr 100 Most Influential Bloggers list through it.  Thus far, not one of the top 10 has returned a ‘bot score’ of less than 50%.  Make of that what you will.

Original post