I’ve made a number of posts over the summer about attempts being made to both understand and detect the way misinformation spreads through social networks. Many of these attempts have focused upon identifying influential nodes in a network, and indeed understanding whether those nodes are real or not, as these nodes are key to the spread of information.
It was interesting therefore to see a slightly different approach taken by Facebook recently with the announcement of their satire tag. Facebook has become rather renowned for people taking stories from sites such as The Onion at face value and getting rather hot under the collar about whatever it is the site is spinning. Hence the testing of a [satire] tag to allow users to mark up particular pieces of content accordingly.
Now of course, you could say that is simply pandering to the daft amongst the Facebook population, and most people are well aware of what is satire and what isn’t, but it does raise an interesting question about how users themselves can help stop the spread of misinformation online.
After all, I wrote recently about a new venture called Grasswire, which is hoping to enlist the crowd to help verify news items. The site, which focuses specifically on breaking news, allows users to vote on topics in a style similar to that found on sites such as Reddit. If users see something that is disputable, they can vote the content down while also posting a URL to a source that refutes it. A similar process, albeit in reverse, can be used to confirm a particular story.
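That voting-plus-evidence mechanism could be sketched roughly as follows. This is purely an illustrative model, assuming a simple net-score tally; the class and field names are hypothetical and do not reflect Grasswire's actual implementation:

```python
# Hypothetical sketch of a Grasswire-style crowd-verification tally.
# Names and structure are illustrative assumptions, not Grasswire's code.

class NewsItem:
    def __init__(self, headline):
        self.headline = headline
        self.confirmations = []   # URLs supporting the story
        self.refutations = []     # URLs disputing the story

    def vote(self, supports, source_url):
        """Record an up- or down-vote, each backed by a source URL."""
        if supports:
            self.confirmations.append(source_url)
        else:
            self.refutations.append(source_url)

    def score(self):
        """Net score, Reddit-style: confirmations minus refutations."""
        return len(self.confirmations) - len(self.refutations)


item = NewsItem("Bridge closed after flooding")
item.vote(True, "https://example.com/local-news-report")
item.vote(False, "https://example.com/debunking-post")
item.vote(True, "https://example.com/official-statement")
print(item.score())  # net score: 1
```

The key difference from a plain Reddit vote is that every vote carries a source URL, so the tally doubles as a list of evidence for and against the story.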
Maybe that would be a slightly better use of the billion or so members Facebook apparently has. After all, the Boston Globe recently complained about the way Facebook would allow the spread of misinformation via the related articles feature on the site.
“If you are spreading false information, you have a serious problem on your hands. They shouldn’t be recommending stories until they have got it figured out,” said Emily Bell, director of Columbia Journalism School’s Tow Center for Digital Journalism, in an interview with the Boston Globe.
At the moment, however, it seems Facebook is not interested in offering such a service. In response to the Boston Globe piece, the company said that it makes no judgement about the accuracy of content shared in status updates, merely surfacing what is popular.
If the average Facebook user is being fooled this readily by satirical content, however, then surely it warrants some mechanism whereby users can flag dishonest or incorrect content back into the algorithm too?