Facebook has written a massive, byzantine, and secret document of rules, packed with spreadsheets and PowerPoint slides, to help it censor the news (er, tackle misinformation) posted to its facility.
Even the New York Times gets it, at least to an extent.
The closely held rules are extensive, and they make the company a far more powerful arbiter of global speech than has been publicly recognized or acknowledged by the company itself[.]
It’s also internally inconsistent.
The [NYT] discovered a range of gaps, biases and outright errors—including instances where Facebook allowed extremism to spread in some countries while censoring mainstream speech in others.
Are these deliberate? It’s hard to believe the smartest kids—Mark Zuckerberg, Sheryl Sandberg, their management team—wouldn’t be doing this deliberately. But it’s also hard to discern the logic of the inconsistencies in their rulebook.
outsource[ing …] content moderation to other companies that tend to hire unskilled workers…. The 7,500-plus moderators “have mere seconds to recall countless rules and apply them to the hundreds of posts that dash across their screens each day. When is a reference to ‘jihad,’ for example, forbidden? When is a ‘crying laughter’ emoji a warning sign?”
Sara Su, a senior engineer on Facebook’s News Feed:
It’s not our place to correct people’s speech, but we do want to enforce our community standards on our platform. When you’re in our community, we want to make sure that we’re balancing freedom of expression and safety.
Facebook’s definition of “balance.” Facebook’s definition of “freedom of expression.” Facebook’s definition of “safety.” And so Facebook, appropriately, does not try to correct speech. Instead, it openly bans speech of which it—Zuckerberg and Sandberg—personally disapprove. And so it bars some individuals altogether, it blocks some Presidential tweets, and it blocks administration immigration advertisements.
And this, from Monika Bickert, Facebook’s global policy management honcho:
We have billions of posts every day, we’re identifying more and more potential violations using our technical systems. At that scale, even if you’re 99% accurate, you’re going to have a lot of mistakes.
This is utterly disingenuous; it shows that Facebook isn’t even trying. Not in a world where car makers and other manufacturers have, for years, demanded and achieved six-sigma accuracy. Can’t reach six-sigma accuracy in speech censorship? Not yet, perhaps. But a serious effort would achieve better than 99%. Or–work with me on this; it’s a concept still under development–maybe Facebook should stop censoring altogether.
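The gap the six-sigma comparison invokes is easy to quantify: six sigma corresponds to roughly 3.4 defects per million opportunities, while 99% accuracy means 10,000 defects per million. A back-of-envelope sketch (the one-billion-posts-per-day figure is an assumed round number for illustration, not Facebook’s actual volume):

```python
# Back-of-envelope comparison of moderation error rates.
# DAILY_POSTS is an assumed round number, not Facebook's real figure.
DAILY_POSTS = 1_000_000_000

# "99% accurate" means a 1% error rate.
ninety_nine_pct_errors = DAILY_POSTS * 0.01

# Six sigma is conventionally ~3.4 defects per million opportunities.
six_sigma_errors = DAILY_POSTS * 3.4 / 1_000_000

print(f"99% accurate: {ninety_nine_pct_errors:,.0f} mistakes per day")
print(f"Six sigma:    {six_sigma_errors:,.0f} mistakes per day")
```

At this assumed volume, 99% accuracy yields ten million mistakes a day; six-sigma accuracy yields about 3,400. That is the roughly 3,000-fold difference Bickert’s “even if you’re 99% accurate” waves away.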
Or: Facebook is already achieving that greater accuracy—it does, after all, succeed in censoring speech from the right side of center. It hides its evident bias, though, behind an internally inconsistent, multi-thousand-page rule book. Maybe that’s the logic of the inconsistencies.
And maybe that’s why they wanted to keep their rulebook secret.