Bots and Comments

Over the weekend, The Wall Street Journal engaged in a bit of censorship of a comment of mine, which led to the following exchange with them.

On Sunday, in responding to their piece on trade and tariffs, I tried to post the comment below to the WSJ's Comments section, but they blocked it: there were, they claimed, one or more offending words in it [the non-italicized sentences are cut/paste quotes from the article].

[T]he decision’s timing risks deepening the already bitter trade fight by starting another tit-for-tat round of tariffs.
The tariffs are bound to complicate—if not derail—talks with top Chinese officials, which are currently scheduled in Washington for Sept. 27 and Sept. 28, say people familiar with the plans.
Another interpretation, carefully ignored by the authors, is that in any conflict, it's necessary to keep pressure on the opposing side while negotiations occur. The battlefield shapes the talks, and the talks shape the battlefield; the battlefield encompasses both the talks and the conflict.

I emailed the WSJ's comment facility, per their blocking message, asking what the offending word or words were and why they were not identified in the blocking message.

I got a same-day response to my email; kudos to the WSJ.

“Thank you for contacting us. Our filter blocked your comment for the word ‘tit’; we have approved your post and apologize for any inconvenience.”

I asked the obvious question: why is “tit” allowed in the article itself if it’s not allowed in the comments?  Their answer:

Our filter is automatically set up to block certain words that may be used in a less than pleasant manner in the comments sections. We will review this word, however.

This is an example of the failure, here including outright hypocrisy, of using AI bots in place of actual judgment. I won't comment on the snowflakiness of "less than pleasant manner"; that speaks well enough for itself.
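The WSJ hasn't described how its filter actually works, but the behavior is consistent with a plain blocklist applied with no awareness of phrases, the classic false-positive failure of naive word filters. A minimal sketch of the problem and one common mitigation (the blocklist and allowlist here are my own illustrative assumptions, not the WSJ's actual configuration):

```python
import re

# Illustrative assumptions only; the WSJ's real blocklist and logic are unknown.
BLOCKLIST = {"tit"}
ALLOWED_PHRASES = {"tit-for-tat", "tit for tat"}  # idioms that should pass

def blocked(comment: str) -> bool:
    """Return True if the comment trips the word filter.

    A bare blocklist lookup flags 'tit-for-tat' because it contains a
    blocked word; exempting known idioms first avoids that false positive.
    """
    text = comment.lower()
    for phrase in ALLOWED_PHRASES:
        text = text.replace(phrase, " ")
    return any(re.search(rf"\b{re.escape(word)}\b", text) for word in BLOCKLIST)
```

With `ALLOWED_PHRASES` empty, the very phrase the article itself used would be flagged, which is exactly the behavior described above; a filter with no phrase exceptions and no human review cannot distinguish an idiom in a trade story from an actual slur.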
