Woke-ism

A Wall Street Journal article centered on the resignation of Canada’s Liberal Prime Minister Justin Trudeau (an article still being updated, as it was published very shortly after the announcement) closed with this:

Liberal lawmaker Wayne Long said the Liberal Party under Trudeau has swung too much to the left on the political spectrum, much like the Democrats in the US.
“I don’t want to use the word wokeism, but we’ve doubled down on things where we’ve come out as a moral authority,” said Long, who isn’t seeking re-election this year. “People are tired of it. It doesn’t mean that they’re right and we’re wrong, but reality bites, and reality sometimes sucks.”

The Liberals still are unable to recognize that their policies are plain wrong and destructive. They’re only willing to say they were unable to sell their stuff to the public. Much like the Progressive-Democratic Party at home.

Self-Driving Cars

This Luddite remains strongly opposed to letting robots drive me around. However, the software that runs one version of a robot car, the package “guiding” Tesla’s latest iteration of its Full Self-Driving car, version 13.2, is a vast improvement over past efforts, according to BARRON’S.

Absent from the testing, though, at least as publicly reported, is how well 13.2 handles the random (and frequent) traffic violations by other drivers that would endanger the FSD car’s occupants, pedestrians, or other vehicles. Such violations include the relatively minor, such as speeding; the more dangerous wobbly bicycles and inattentive bicyclists; pedestrians darting, even at the last moment, in front of the FSD car in a last-ditch effort to cross the road; cross traffic running the red light or stop sign; oncoming traffic deciding to make a left turn at the last moment. The list is extensive.

Other risks arise mostly in residential neighborhoods: the toddler darting into the street at the last moment from in front of a parked car, and the small pet making the same last-moment dart from under that parked car.

Many of those situations are difficult enough for a human driver to handle; often they’re too difficult, and the collision occurs.

Any robot-driven car needs to be able to handle those random situations at least as well as any experienced human driver.

Then there’s the classic moral paradox, usually cast as a railroad exercise: which track should be switched to, given the certainty of some measure of death regardless of the choice? Those choices occur on roads with cars and trucks, too, and they’re often badly handled by the human drivers involved. What can we expect from robot software?

To repeat: I remain strongly opposed to letting robots drive me around. The software involved is improving, but is it improving enough? What constitutes sufficient improvement? At the least, satisfactory handling of the situations above.