The Supreme Court is taking up a case centered on Internet platform liability, or lack of it, for things posted on those platforms by users. Wall Street Journal editors asked a couple of questions on the matter.
But are internet sites liable for the algorithms they use to sort and present content?
Liable for the algorithms in the legal sense? That’s an open question, and the Supremes are likely to answer it in Gonzalez v Google, albeit perhaps too narrowly to be useful.
However, the Internet sites most assuredly are responsible for the algorithms and what the algorithms sort and present. Those algorithms, after all, were written by the Internet sites’ human employees.
On the other hand,
Do social-media sites have immunity for fact-checks they append to disputed posts? What if search engines use language models to directly answer user queries, with text synthesized from the web?
Absolutely, they do not have immunity. A fact-check is the social-media site’s own after-the-fact commentary on the legitimacy of what a user has posted, and by adding that commentary the site creates its own liability. Keep in mind, too, that those search engines, language models, and text-synthesizing algorithms are all written by human employees of those sites. Since that software does only what its human programmers code it to do, and those programmers code what the site employs them to code, the use of such after-the-fact software only deepens the social-media sites’ lack of immunity.