This Luddite remains strongly opposed to letting robots drive me around. However, the software running one version of a robot car, the package "guiding" Tesla's latest iteration of its Full Self-Driving car, version 13.2, is a vast improvement over past efforts, according to BARRON'S.
Absent from the testing, though, at least as publicly reported, is how well 13.2 handles the random (and frequent) traffic violations by other drivers that would endanger the FSD car's occupants, pedestrians, or other vehicles. Such violations range from the relatively minor, such as speeding, to the more dangerous: wobbly bicycles and inattentive bicyclists; pedestrians darting in front of the FSD car, even at the last moment, in a last-ditch effort to cross the road; cross traffic running a red light or stop sign; oncoming traffic deciding to make a left turn at the last moment. The list is extensive.
Other risks arise mostly in residential neighborhoods: the toddler darting into the street at the last moment from in front of a parked car, or the small pet making the same last-moment dart from under that parked car.
Many of those situations are difficult enough for a human driver to handle; often they are too difficult, and a collision occurs.
Any robot-driven car needs to be able to handle those random situations at least as well as any experienced human driver.
Then there’s the classic moral paradox, the trolley problem, usually cast as a railroad exercise about which track to switch to, given the certainty of some measure of death regardless of the choice. Those choices occur on roads with cars and trucks as well, and the human drivers involved often handle them badly. What can we expect from robot software?
To repeat: I remain strongly opposed to letting robots drive me around. The software involved is improving, but is it improving enough? What constitutes sufficient improvement? At the least, satisfactory handling of the situations above.