Jacob Gershman has a piece in The Wall Street Journal’s Law Blog about the increasing use of software algorithms to assess newsworthiness and the implications of that use for legal assessments of the tradeoffs between individual privacy and what’s fit to print. In it, Gershman quotes Georgetown University Associate Professor of Legal Research and Writing Erin Carroll.
Given the dominance of platforms like Facebook, the related influence of algorithms on how news is made, and specifically how algorithms are beginning to supplant editorial discretion and the editorial process, courts need to rethink their rationales for deference to the press. In the realm of privacy law, courts have long trusted the Fourth Estate to vet the newsworthiness of a subject before publishing, so that the courts themselves did not have to. Today, that trust is becoming misplaced.
Carroll is right that courts need to “rethink their rationales for deference to the press,” but for reasons wholly independent of the existence of news algorithms. On that point, Carroll labors under a couple of misapprehensions. For one, editors (and publishers, come to that) certainly are outsourcing some of the assessment work behind editorial decisions, but they cannot outsource their own responsibility for judging newsworthiness.
For another, and relatedly, there’s no reason to believe the courts’ trust “is becoming misplaced” simply because news algorithms are in use.
In both instances, Carroll misses the key factor: it is entirely the editors’ and publishers’ decision to use news algorithms, and entirely their decision to use any part of those algorithms’ outputs. The editors and publishers remain entirely and solely responsible for the material they publish, whether that material originates with interns, news algorithms, or journalists.
Full stop.
Update: Missing word is no longer missing.