Jeff Horwitz and Deepa Seetharaman, reporting for The Wall Street Journal (Apple News+ link):
“Our algorithms exploit the human brain’s attraction to
divisiveness,” read a slide from a 2018 presentation. “If left
unchecked,” it warned, Facebook would feed users “more and more
divisive content in an effort to gain user attention & increase
time on the platform.” […]
But in the end, Facebook’s interest was fleeting. Mr. Zuckerberg
and other senior executives largely shelved the basic research,
according to previously unreported internal documents and people
familiar with the effort, and weakened or blocked efforts to apply
its conclusions to Facebook products.
Divisive, polarizing content is to Facebook as nicotine is to cigarette makers: a component of the product that their own internal research shows is harmful, but which they choose to increase rather than decrease, because its addictiveness is so profitable.
A 2016 presentation that names as author a Facebook researcher and
sociologist, Monica Lee, found extremist content thriving in more
than one-third of large German political groups on the platform.
Swamped with racist, conspiracy-minded and pro-Russian content,
the groups were disproportionately influenced by a subset of
hyperactive users, the presentation notes. Most of them were
private or secret.
The high number of extremist groups was concerning, the
presentation says. Worse was Facebook’s realization that its
algorithms were responsible for their growth. The 2016
presentation states that “64% of all extremist group joins are due
to our recommendation tools” and that most of the activity came
from the platform’s “Groups You Should Join” and “Discover”
algorithms: “Our recommendation systems grow the problem.”
Those recommendation algorithms are the heart of the matter. In the old days, on, say, Usenet, there were plenty of groups for extremists. There were private email lists for extremists. But there was no recommendation algorithm promoting those groups.
The engineers and data scientists on Facebook’s Integrity Teams — chief among them, scientists who worked on newsfeed, the stream of posts and photos that greets users when they visit Facebook — arrived at the polarization problem indirectly, according to
people familiar with the teams. Asked to combat fake news, spam,
clickbait and inauthentic users, the employees looked for ways to
diminish the reach of such ills. One early discovery: Bad behavior
came disproportionately from a small pool of hyperpartisan users.
A second finding in the U.S. saw a larger infrastructure of
accounts and publishers on the far right than on the far left.
Outside observers were documenting the same phenomenon. The gap
meant even seemingly apolitical actions such as reducing the
spread of clickbait headlines — along the lines of “You Won’t
Believe What Happened Next” — affected conservative speech more
than liberal content in aggregate.
That was a tough sell to Mr. Kaplan, said people who heard him
discuss Common Ground and Integrity proposals. […] Every
significant new integrity-ranking initiative had to seek the
approval of not just engineering managers but also representatives
of the public policy, legal, marketing and public-relations
departments.
So Facebook’s “Integrity Teams” can’t enforce integrity if it upsets the side of the U.S. political fence that is, quite obviously, more lacking in integrity.