Carnegie Mellon Researchers: Half of Twitter Accounts Discussing COVID-19 Are Disinformation Bots

Karen Hao, writing for MIT Technology Review:

Kathleen M. Carley and her team at Carnegie Mellon University’s Center for Informed Democracy & Social Cybersecurity have been tracking bots and influence campaigns for a long time. Across US and foreign elections, natural disasters, and other politicized events, the level of bot involvement is normally between 10 and 20%, she says.

But in a new study, the researchers have found that bots may account for between 45 and 60% of Twitter accounts discussing covid-19. […] Through the analysis, they identified more than 100 types of inaccurate covid-19 stories and found that not only were bots gaining traction and accumulating followers, but they accounted for 82% of the top 50 and 62% of the top 1,000 influential retweeters. […]

Unfortunately, there are no easy solutions to this problem. Banning or removing accounts won’t work, as more can be spun up for every one that is deleted. Banning accounts that spread inaccurate facts also won’t solve anything.

I don’t understand this conclusion at all. If a team at Carnegie Mellon can do this research, so too could a team at Twitter itself. Or Twitter could just use outside teams like the one at Carnegie Mellon.

What we know is that bots are harmful — they spread misinformation with disastrous real-world effects. And we know that both bot accounts and disinformation in the content of posts can be identified at scale, algorithmically. On a social network, anti-disinformation software wouldn’t have to eradicate all disinformation to be radically effective — it only needs to start with the posts that are reaching the most people and work down the popularity graph from there.
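That “start with the posts reaching the most people” approach is just greedy triage. A minimal sketch of the idea, with made-up `Post` fields and a hypothetical `review_queue` helper (not any actual Twitter system):

```python
# Illustrative only: given posts flagged by some disinformation
# classifier, surface the highest-reach ones for action first.

from dataclasses import dataclass

@dataclass
class Post:
    id: str
    reach: int      # rough audience size, e.g. followers x retweets
    flagged: bool   # output of a hypothetical disinformation classifier

def review_queue(posts, budget):
    """Return up to `budget` flagged posts, largest audience first."""
    flagged = [p for p in posts if p.flagged]
    flagged.sort(key=lambda p: p.reach, reverse=True)
    return flagged[:budget]

posts = [
    Post("a", reach=1_000_000, flagged=True),
    Post("b", reach=500, flagged=True),
    Post("c", reach=50_000, flagged=False),
    Post("d", reach=80_000, flagged=True),
]
print([p.id for p in review_queue(posts, budget=2)])  # ['a', 'd']
```

Even with a limited review budget, acting on the top of that queue removes most of the audience exposure — which is the whole point of working down the popularity graph rather than chasing every account.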

The argument that Twitter and Facebook can’t beat disinformation by banning it is like arguing that email providers can’t beat spam. Spam hasn’t been eradicated but it has been effectively diminished. There’s absolutely no reason Twitter and Facebook can’t defeat social media disinformation to the same degree we’ve defeated spam email. They haven’t done so because they don’t want to, presumably because they consider the “engagement” generated by these bots worth the social destruction they cause.

Update: Maybe it’s not “engagement” but “active users”. Or both. What matters is that so long as looking the other way at bot activity inflates the metrics used to value Twitter and Facebook, Twitter and Facebook have perverse incentives not to combat bot activity to the extent that they could. The email spam analogy holds — conversely, email providers have zero incentive to allow spam into your mailbox because no one values the worth of an email provider by the number of messages in its users’ inboxes. (Also, you don’t find anyone yelling about spam filtering being a suppression of “free speech”.)

Saturday, 23 May 2020