Take, for instance, a self-driving car. One of our assumptions is that letting computers drive will allow a lot more cars on the road, since computers are better drivers than humans (a claim I don’t want to dispute). But imagine we do fit 30% more cars on the road, and then imagine a traffic disruption. There will surely be far fewer disruptions, because computers are better drivers than humans. But when they do occur, they will cause massively more congestion than today, because the system will have been optimised that much further.
A driverless car will be best implemented when it communicates with its peers in a networked way that mimics the old CB radio network: “Get off at Exit 351 and take US 31 north; I-65 is a parking lot.” But there’s fragility, of course: not all cars will have humans out of the loop, not everyone will have a car that communicates in the same way, there will be network outages, etc. That’s why it will take peer-to-peer communication on open technologies to make that work.
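One way to picture that CB-style chatter is as a flooded advisory that cars rebroadcast to whoever is in radio range, with a hop limit so the warning stays local to the incident. This is a minimal sketch, not any real vehicle protocol; the `Advisory` and `Car` names and the hop-count scheme are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Advisory:
    """A CB-style traffic advisory, flooded peer to peer."""
    msg_id: int     # dedup key so each car relays a message at most once
    text: str       # e.g. "I-65 is a parking lot; take US 31 north"
    hops_left: int  # TTL so advisories stay local to the incident

class Car:
    def __init__(self, name):
        self.name = name
        self.peers = []   # cars currently in radio range
        self.seen = set() # msg_ids already handled

    def receive(self, adv: Advisory):
        # Drop duplicates and expired messages; otherwise relay onward.
        if adv.msg_id in self.seen or adv.hops_left == 0:
            return
        self.seen.add(adv.msg_id)
        relayed = Advisory(adv.msg_id, adv.text, adv.hops_left - 1)
        for peer in self.peers:
            peer.receive(relayed)

# Three cars in a line: `a` spots the jam and the word spreads down the chain.
a, b, c = Car("a"), Car("b"), Car("c")
a.peers, b.peers, c.peers = [b], [a, c], [b]
a.receive(Advisory(1, "I-65 is a parking lot; take US 31 north", hops_left=3))
print(sorted(car.name for car in (a, b, c) if 1 in car.seen))  # ['a', 'b', 'c']
```

The dedup set is what keeps the flood from echoing forever between neighbours, which is the same trick open gossip protocols use.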
See, my technological bias is showing. But I will also admit my own bias against driverless cars: I’d rather drive, and if not, I’d rather take mass transit to have it be worthwhile.
I thought a bit about this when I was writing the original post. This is very true. If we allow cars to function like nodes on a network (the internet model) and route around damage to the network, we can probably mitigate a lot of the potential problems of driverless cars that operate in a vacuum, informationally sealed from the surrounding vehicles. Basically we start treating cars like packets.
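Treating cars like packets means each car runs something like shortest-path routing over the road graph and recomputes when an edge goes down. A minimal sketch with Dijkstra’s algorithm, assuming a hypothetical dict-of-dicts road graph with travel minutes as edge weights:

```python
import heapq

def shortest_route(graph, src, dst):
    """Dijkstra over a dict-of-dicts road graph; returns (minutes, path) or None."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + minutes, nxt, path + [nxt]))
    return None

# Hypothetical road graph: edge weights are travel minutes.
roads = {
    "home":  {"I-65": 5, "US-31": 8},
    "I-65":  {"work": 20},
    "US-31": {"work": 25},
}
print(shortest_route(roads, "home", "work"))  # (25, ['home', 'I-65', 'work'])

# An incident closes I-65; drop the edge and re-route, packet-style.
del roads["home"]["I-65"]
print(shortest_route(roads, "home", "work"))  # (33, ['home', 'US-31', 'work'])
```

The re-route only works because a second path exists; that spare capacity is exactly what the next point calls into question.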
I see a couple of problems. Even assuming that all cars on the road are driverless (there are no pesky humans behind the wheel ever), we’re always designing our transportation system for peak loads. One of the points of driverless cars is to distribute peak load more evenly, and therefore optimise drive time by taking different routes that make sense depending on the situation. However, if the system is optimised so that peak load and a problem coincide, there might not be anywhere to go to route around damage. That is to say, if the network is fully optimised, at peak loads, there’s a fine line between completely fine and completely chaotic. And building new roads makes laying fibre look like playing with LEGO.
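That fine line has a textbook shape. In the simple M/M/1 queueing model, mean time in the system is W = 1/(μ − λ), which explodes as arrivals approach capacity. This is just an illustrative back-of-the-envelope calculation; the road-capacity numbers are made up.

```python
# M/M/1 queue: mean time in system W = 1 / (mu - lam), valid only for lam < mu.
def mean_delay(arrival_rate, service_rate):
    assert arrival_rate < service_rate, "over capacity: the queue grows without bound"
    return 1.0 / (service_rate - arrival_rate)

service = 100.0  # hypothetical: cars per minute a road segment can absorb
for load in (0.70, 0.90, 0.99):
    w_minutes = mean_delay(load * service, service)
    print(f"utilisation {load:.0%}: mean delay {w_minutes * 60:.0f} s")
# utilisation 70%: mean delay 2 s
# utilisation 90%: mean delay 6 s
# utilisation 99%: mean delay 60 s
```

Going from 70% to 99% utilisation multiplies the delay thirtyfold, which is why an optimiser that packs the network tight at peak leaves no slack to route around anything.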
The other thing I’ve thought about is how focused we are on designing for and controlling a certain set of parameters. For instance, we might optimise for peak load, for shortest travel time, etc., and in order to do that we might have to network cars and let them talk to each other. And we’ve seen what that looks like with PCs. What happens when you take previously un-networked things and connect them to a network? It’s a hard thing to do. If we do network driverless cars, it’s likely that a lot of the damage to the network will suddenly take on a much more sinister aspect: hacking. We’d do well to consider the example of casinos: they design for and control their gains and losses so that the house always wins. Except when losses come from directions they simply can’t control for. Fire, earthquake, employees suing their bosses, catastrophic losses of life, etc. I worry that we might design for and attempt to control the variables of traffic without considering the very real possibility of damage that comes from a direction we weren’t expecting.
Maybe we’ll find something even more exotic when we’re done making the perfect car. What I actually hope is that we rediscover the train. We may find that once we have the perfect driverless car, gas is too expensive to actually run a car. But that would mean we’d have to reconfigure our entire way of life. And that’s a lot less likely to happen.