Working the Refs Worked: ‘How Facebook Wrote Its Rules to Accommodate Trump’

Elizabeth Dwoskin, Craig Timberg, and Tony Romm, reporting for The Washington Post in a remarkable story that takes a while to get to the juicy parts:

The document, which is previously unreported and obtained by The Post, weighed four options. They included removing the post for hate speech violations, making a one-time exception for it, creating a broad exemption for political discourse and even weakening the company’s community guidelines for everyone, allowing comments such as “No blacks allowed” and “Get the gays out of San Francisco.”

Facebook spokesman Tucker Bounds said the latter option was never seriously considered.

The document also listed possible “PR Risks” for each. For example, lowering the standards overall would raise questions such as, “Would Facebook have provided a platform for Hitler?” Bickert wrote. A carveout for political speech across the board, on the other hand, risked opening the floodgates for even more hateful “copycat” comments.

Ultimately, Zuckerberg was talked out of his desire to remove the post in part by Kaplan, according to the people. Instead, the executives created an allowance that newsworthy political discourse would be taken into account when making decisions about whether posts violated community guidelines.

I don’t get the “on the other hand” here regarding whether Facebook’s rules would have provided a platform for Adolf Hitler. A blanket “carveout” for “political speech” and “newsworthy political discourse” certainly would have meant that Adolf Hitler would have been able to use Facebook as a platform in the 1930s. That sounds histrionic to modern ears, but Hitler wasn’t universally seen as Hitler, the unspeakably evil villain, until it was too late. Infamously, as late as August 1939 — 1939! — The New York Times Magazine saw fit to run a profile under the headline “Herr Hitler at Home in the Clouds” (sub-head: “High up on his favorite mountain he finds time for politics, solitude and frequent official parties”).

An anything-goes exception for political speech from world leaders is the exact same hand as serving as a platform for Hitler. It’s goddamn Nazis who are crawling out of the woodwork now.

Two months before Trump’s “looting, shooting” post, the Brazilian president [Jair Bolsonaro] posted about the country’s indigenous population, saying, “Indians are undoubtedly changing. They are increasingly becoming human beings just like us.”

Thiel, the security engineer, and other employees argued internally that it violated the company’s internal guidelines against “dehumanizing speech.” They were referring to Zuckerberg’s own words while testifying before Congress in October in which he said dehumanizing speech “is the first step toward inciting” violence. In internal correspondence, Thiel was told that it didn’t qualify as racism — and may have even been a positive reference to integration.

Thiel quit in disgust.

If that post is not dehumanizing, what is? If that post is acceptable on Facebook, what isn’t?

Facebook’s security engineers in December 2016 presented findings from a broad internal investigation, known as Project P, to senior leadership on how false and misleading news reports spread so virally during the election. When Facebook’s security team highlighted dozens of pages that had peddled false news reports, senior leaders in Washington, including Kaplan, opposed shutting them down immediately, arguing that doing so would disproportionately impact conservatives, according to people familiar with the company’s thinking. Ultimately, the company shut down far fewer pages than were originally proposed while it began developing a policy to handle these issues.

A year later, Facebook considered how to overhaul its scrolling news feed, the homepage screen most users see when they open the site. As part of the change to help limit misinformation, it changed its news feed algorithm to focus more on posts by friends and family versus publishers.

In meetings about the change, Kaplan questioned whether the revamped algorithm would hurt right-leaning publishers more than others, according to three people familiar with the company’s thinking who spoke on the condition of anonymity for fear of retribution. When the data showed it would — conservative-leaning outlets were pushing more content that violated its policies, the company had found — he successfully pushed for changes to make the new algorithm what he considered more evenhanded in its impact, the people said.

Complaints about bias are legitimate only if the underlying allegations of unfairness are credible. Let’s say there’s a basketball game where the referees whistle 10 fouls against one team and only 1 against the other. Are the refs biased? We don’t know from those facts alone. What matters is how many fouls each team actually committed. If one team actually did commit 10 fouls and the other 1, then the refs are fair and the results are fair. If both teams actually committed, say, 5 fouls apiece, but the refs called 10 fouls against one team and only 1 against the other, then the refs are either crooked or just plain bad and the results are, indeed, unjust.

Joel Kaplan wanted Facebook to call the same number of fouls against both sides no matter how many fouls were actually committed. And that’s exactly how Facebook has run its platform. If you don’t punish cheaters and liars, you’re rewarding them. Far from being biased against Republicans in the U.S. and right-wing nationalists and authoritarians around the globe, Facebook has been biased for them.