The Wall Street Journal’s L. Gordon Crovitz is writing about the really quite nearby future of driving. There is an upside to this:
Tom Vanderbilt, author of the book Traffic, writes: “After a few minutes the idea of a computer-driven car seemed much less terrifying than the panorama of indecision, BlackBerry-fumbling, rule-flouting, and other vagaries of the humans around us—including the weaving driver who struggled to film us as he passed.”
There’s a downside, too, beyond the (still very important, in my not-at-all-humble view) sheer joy of driving my car and the need for a human to be in control of his machine rather than the other way around:
[T]he legal and regulatory system will need to accept that driverless cars sound risky only compared with cars driven by error-prone humans. Among looming questions certain to be relished by plaintiff lawyers: If people aren’t driving, who will be liable for accidents? Car makers? Manufacturers of GPS hardware? Software companies?
The answer to the liability question, though, is straightforward and easily enforced.
The larger problem I have with such a system is this: we need to vastly improve the security of our computer and communications systems before such a thing becomes viable and truly safe. In an age when we can’t even protect our power and water grids, much less our Space and Defense systems or our GPS network, from foreign or domestic hacking, why would we want to expose our transportation grid to similar hacks and shutdowns?