Facebook’s Secret Rule Book

Facebook has written a massive, byzantine, and secret rule book, packed with spreadsheets and PowerPoint slides, to help it censor the news (excuse me, tackle misinformation) posted to its facility.

Even the New York Times gets it, at least to an extent.

The closely held rules are extensive, and they make the company a far more powerful arbiter of global speech than has been publicly recognized or acknowledged by the company itself[.]

The rule book is also internally inconsistent.

The [NYT] discovered a range of gaps, biases and outright errors—including instances where Facebook allowed extremism to spread in some countries while censoring mainstream speech in others.

Are these deliberate?  It’s hard to believe the smartest kids, Mark Zuckerberg, Sheryl Sandberg, and their management team, wouldn’t be doing this deliberately.  But it’s also hard to discern any logic in the inconsistencies of their rule book.

Inconsistencies like

outsourc[ing …] content moderation to other companies that tend to hire unskilled workers…. The 7,500-plus moderators “have mere seconds to recall countless rules and apply them to the hundreds of posts that dash across their screens each day. When is a reference to ‘jihad,’ for example, forbidden? When is a ‘crying laughter’ emoji a warning sign?”

Sara Su, a senior engineer on Facebook’s News Feed:

It’s not our place to correct people’s speech, but we do want to enforce our community standards on our platform. When you’re in our community, we want to make sure that we’re balancing freedom of expression and safety.

Facebook’s definition of “balance.”  Facebook’s definition of “freedom of expression.”  Facebook’s definition of “safety.”  And so Facebook, appropriately, does not try to correct speech.  Instead, it openly bans speech of which it—Zuckerberg and Sandberg—personally disapproves.  And so it bars some individuals altogether, blocks some Presidential tweets, and blocks administration immigration advertisements.

And this, from Monika Bickert, Facebook’s global policy management honcho:

We have billions of posts every day, we’re identifying more and more potential violations using our technical systems.  At that scale, even if you’re 99% accurate, you’re going to have a lot of mistakes.

This is utterly disingenuous; it shows that Facebook isn’t even trying.  Not in a world where car makers and other manufacturers have, for years, demanded and achieved six-sigma accuracy.  Can’t reach six-sigma accuracy in speech censorship?  Not yet, perhaps.  But a serious effort would achieve better than 99%.  Or–work with me on this; it’s a concept still under development–maybe Facebook should stop censoring altogether.
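To put scale on that comparison, here’s a rough back-of-the-envelope sketch in Python.  The one-billion-decisions-per-day figure is a hypothetical round number (Bickert says only “billions of posts every day”), and classic six sigma allows about 3.4 defects per million opportunities.

# Rough sketch: 99% accuracy vs. six-sigma accuracy at Facebook's scale.
# DECISIONS_PER_DAY is a hypothetical round number for illustration.
DECISIONS_PER_DAY = 1_000_000_000

ninety_nine_percent_error = 0.01      # 1 mistake per 100 decisions
six_sigma_error = 3.4 / 1_000_000     # ~3.4 defects per million opportunities

print(f"At 99% accuracy: {DECISIONS_PER_DAY * ninety_nine_percent_error:,.0f} mistakes/day")
print(f"At six sigma:    {DECISIONS_PER_DAY * six_sigma_error:,.0f} mistakes/day")
# At 99% accuracy: 10,000,000 mistakes/day
# At six sigma:    3,400 mistakes/day

Ten million daily mistakes versus a few thousand: that’s the gap between Bickert’s “99% accurate” and what manufacturers routinely demand.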

Or: Facebook already is achieving that greater accuracy—it does, after all, succeed in censoring speech from the right side of center.  It hides its evident bias, though, behind an internally inconsistent, multi-thousand-page rule book.  Maybe that’s the logic to the inconsistencies.

And maybe that’s why they wanted to keep their rulebook secret.

2 thoughts on “Facebook’s Secret Rule Book”

  1. The points you make are all good and important. I would like to add another point that can be found hidden in this article, though.
    Facebook employs more than 7,500 people to perform this… “work”. 7,500 human people, implicitly hired not in the United States but outsourced to some country with no minimum wage or a lesser one, of course. That’s just one company, albeit one with a government-supported near-monopoly.

    Something to remember the next time the Left says, “it’s just the automation! Those jobs are never coming back!”

    • You make a valid point. And I’ll add one that I’ve made before specific to Facebook’s automation: Facebook’s algorithms didn’t coalesce out of the æther. Facebook’s human programmers wrote those algorithms. Facebook’s IT humans wrote the specs defining what those algorithms would find objectionable. Facebook’s human bosses–Zuckerberg and Sandberg–approved those specs.
      Coming back to the point, Facebook’s human testers wrote the tests used to vet the realized algorithms. Facebook’s IT humans wrote the specs defining what those tests would look at and what defined success and failure. And Facebook’s human bosses–Zuckerberg and Sandberg–approved those specs.
      Automation, sure. But automation designed, written, and approved by humans, every step of the way, before they saw the light of day.
      Eric Hines
