Wrong Mindset

As self-driving cars, misleadingly termed AI-driven cars, become more common, or at least less uncommon, there is growing concern about the ethics of the AIs involved. That concern is valid, but it aims at the wrong target.

What will the roadway scruples of AI look like?

How should the car weigh avoiding a collision against the consequences of the avoidance maneuver itself? Relatedly, should it dodge the large animal but not bother about the small one? And when any avoidance maneuver means hitting something, or damaging the maneuvering car, which piece of property is the more legitimate target: the ditch, or the tree, to avoid colliding with another car?

There are easier questions, too, and leaving their answers to the AI is simply laziness.

Is taking the fastest route the core goal that should guide autonomous vehicles, or are other factors just as relevant? Focusing on reaching the destination quickly would let self-driving ride-share vehicles make more trips and more profit, but it might create more danger. Giving priority to safety alone could slow and snarl traffic. And what about choosing routes that let passengers enjoy the journey?

After all,

The trade-off between safety and speed is “the one thing that really affects 99% of the moral questions around autonomous vehicles,” says Shai Shalev-Shwartz, chief technology officer at Mobileye…. Shifting parameters between these two poles, he says, can result in a range of AI driving, from too reckless to too cautious, to something that seems “natural” and humanlike.
The software can also be calibrated to allow different driving styles, he says. So, for example, an autonomous sports car might drive more aggressively to enhance a sense of performance, an autonomous minivan might put the biggest emphasis on safety and an off-road vehicle might default to taking a scenic route.
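
To make that trade-off concrete, here is a minimal sketch of the safety-versus-speed dial as a single weight in a route-scoring function, in Python. Every name, scale, and number in it is an illustrative assumption, not Mobileye’s actual planner.

    from dataclasses import dataclass

    @dataclass
    class Route:
        name: str
        est_minutes: float    # predicted travel time
        risk_score: float     # 0.0 (benign) to 1.0 (hazardous), from some risk model
        scenery_score: float  # 0.0 (dull) to 1.0 (scenic)

    def route_cost(route: Route, caution: float, scenic_pref: float = 0.0) -> float:
        """Lower is better. `caution` in [0, 1] slides between the two poles:
        near 0 weighs speed almost exclusively (too reckless), near 1 weighs
        safety almost exclusively (too cautious); mid values feel natural."""
        speed_term = (1.0 - caution) * route.est_minutes
        safety_term = caution * 100.0 * route.risk_score   # scale risk into minutes
        scenery_term = -scenic_pref * 10.0 * route.scenery_score
        return speed_term + safety_term + scenery_term

    def pick_route(routes: list[Route], caution: float, scenic_pref: float = 0.0) -> Route:
        """Choose the cheapest candidate under the current parameter setting."""
        return min(routes, key=lambda r: route_cost(r, caution, scenic_pref))

Sliding caution between its poles reproduces the range Shalev-Shwartz describes, from too reckless to too cautious; scenic_pref stands in for the third factor, the passengers’ enjoyment of the journey.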

There’s no need to leave those factors buried solely in the software. It’s just not that hard to break them out into separate routines, with the human driver selecting one when he gets in and boots up his car. Planning a trip is what humans do now, whether across town, to another city, or on the short run to the grocery store, even if that last one is planned only half-consciously. That choice needs to stay in the hands of the human.
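
Continuing the hypothetical sketch above, breaking those factors out into separate, driver-selectable routines amounts to little more than naming a few parameter sets; the profile names and weights below are assumptions chosen to mirror the examples in the quote.

    # Hypothetical driving-style profiles; the human picks one at boot,
    # rather than leaving the weights buried in the software.
    PROFILES = {
        "sport":   {"caution": 0.3, "scenic_pref": 0.0},  # drives more aggressively
        "minivan": {"caution": 0.9, "scenic_pref": 0.0},  # biggest emphasis on safety
        "offroad": {"caution": 0.6, "scenic_pref": 1.0},  # defaults to a scenic route
    }

    def boot_profile(choice: str) -> dict:
        """Called once when the driver gets in and boots up the car:
        the human selects the routine, the software merely executes it."""
        if choice not in PROFILES:
            raise ValueError(f"unknown profile {choice!r}; pick one of {sorted(PROFILES)}")
        return PROFILES[choice]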

Even if we decide to turn driving over to the AI system running the autonomous vehicle, only the driving itself should be turned over, with the human retaining oversight, real-time overruling capability, and responsibility throughout the drive. It’s hard enough for a human to make the value judgment call across the broad variety of collision scenarios, especially those outlined at the start of this post. It’s impossible, at least for the foreseeable future, for humans to write code that wires those judgments into the software running a car.

Autonomous vehicles cannot be AI-operated alone. The human must remain responsible for the decisions he makes in validating the car’s real-time choices or overruling them. Or for flipping a switch and taking over the task of driving because the software has gotten in over its code.
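
What that division of labor might look like in code: a minimal sketch of a control arbiter in which the AI proposes each maneuver and the human can veto it, or take over outright, at any tick. The interfaces are hypothetical placeholders, not any vendor’s API.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Mode(Enum):
        AI_DRIVING = auto()
        HUMAN_DRIVING = auto()

    @dataclass
    class HumanInput:
        takeover_switch: bool = False  # driver flips the switch and takes the wheel
        veto: bool = False             # driver overrules the current AI maneuver
        command: str = ""              # steering/throttle when the human is driving

    def safe_fallback() -> str:
        """What the AI does when its maneuver is vetoed: nothing clever."""
        return "decelerate_and_hold_lane"

    def control_tick(mode: Mode, ai_command: str, human: HumanInput) -> tuple[Mode, str]:
        """One cycle of the arbiter: the AI proposes, the human disposes."""
        if human.takeover_switch:
            return Mode.HUMAN_DRIVING, human.command  # full manual takeover
        if mode is Mode.HUMAN_DRIVING:
            return mode, human.command                # once taken over, stay human
        if human.veto:
            return Mode.AI_DRIVING, safe_fallback()   # real-time overruling
        return Mode.AI_DRIVING, ai_command            # validated; the AI proceeds

The ordering is the point of the design: human input always outranks the AI’s proposal, so responsibility stays with the person in the seat.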
