Self-driving cars are not as close as you think – unless you hire a chauffeur.
In the mid-1980s Marvin Minsky and Roger Schank predicted the coming AI winter. Like a housing bubble or the irrational exuberance of a stock market run, the enthusiasm for AI had ramped up unrealistic expectations and then came crashing down. But why? It’s important to understand because self-driving cars and the associated technologies are exhibiting many of the same characteristics.
In the late 1970s and early 1980s AI technologies made gobsmacking progress. Academics waged debates akin to holy wars about the “correct” way to do AI. Should one design the software to teach itself and to learn, or should one write the rules? Handwriting recognition yielded to the former, while artificially intelligent systems for diagnosing illnesses worked better as sets of rules.
And machines became very good at handwriting recognition and at diagnostics. But not good enough. Accuracies of 99% were touted, and that sounds great. But on a single page of text there are roughly 3,000 characters. If 1 of every 100 letters is wrong, then there are 30 misspellings per page.
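The arithmetic above scales in an unforgiving way. A toy calculation (the 3,000-characters-per-page figure is the rough one used here) shows why even large accuracy gains still leave visible errors:

```python
# Toy arithmetic: how a "great-sounding" accuracy still yields
# visible errors on every page of text.
CHARS_PER_PAGE = 3000  # rough figure from the text above


def errors_per_page(accuracy: float, chars: int = CHARS_PER_PAGE) -> float:
    """Expected number of misrecognized characters on one page."""
    return (1.0 - accuracy) * chars


print(errors_per_page(0.99))    # 99% accurate  -> 30 errors per page
print(errors_per_page(0.999))   # 99.9%         -> still 3 errors per page
print(errors_per_page(0.9999))  # 99.99%        -> one error every ~3 pages
```

Each extra "nine" of accuracy costs far more effort than the last, which is exactly the "last 1%" wall described below.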
The Armageddon that brought on the AI winter was when military funding dried up. Or, perhaps more accurately, when the military gave up. One can almost sense the excitement they must have felt at the idea of “smart bombs” and “smart missiles”. But once again, close wasn’t enough.
And the blistering AI progress cooled to a sub-zero winter. Getting that last 1% was harder than anticipated. Perhaps the whole approach was a blind alley? Maybe it isn’t possible to get 100% letter recognition without the context of the word or even the sentence or paragraph. That might mean the entire approach was wrong.
But who could tell? Maybe one small tweak to a single line of code would crack the nut. Or not. On reflection it’s obvious that the brain doesn’t just decipher individual letters in isolation when we read a sentence.
And so we turned for wisdom to the linguists, biologists, psychologists, and all the other non-programmers who understood how humans did the trick.
In any case, the military stopped paying for lack of progress — AI winter fell. The AI community settled down for a long winter’s hibernation.
In a related field, we’ve found that as automatons and robots become more life-like, they become creepier. We made good progress until we hit that particular fact. If robots are cute little dogs or plastic wheeled machines, we find them interesting, amusing or fun. But if they are rubber-faced humanoids talking to us we find them, well, creepy. And the effect is so pronounced and difficult to get past that researchers gave it a name – the “uncanny valley”. If anyone had thought about it they’d have realized we’ve known about the effect for quite some time. Dolls that look just a little too human but not quite human enough are the stuff of horror movies. Mannequins lurking in the dark, as if they are about to reach out and grab you, are another monster movie regular.
It turns out that the valley between looking nearly human and looking convincingly human without being creepy is vast. Making an automaton indistinguishable from a human is several major scientific breakthroughs away. Androids aren’t going to infiltrate our ranks and fool us any time soon.
We should look to these previous cases for guidance on where self-driving automobiles are going. What did these experiences teach us?
Rapid progress augured well for reaching an end goal – artificially intelligent machines for war or peace, androids serving our whims and charming us with whimsy. But that progress slowly ground to a crawl.
As we approached the various goals, it became clear the goal itself was unclear because we didn’t really understand it in human terms. How does the brain accurately and quickly read handwritten notes or different typed fonts? Is that an appropriate approach to use with a computer, or is there an entirely different approach that is more suitable? What aesthetic triggers in our brain make something look uncanny? Is it viscerally indicative of something physically or mentally diseased? Does that perception of illness or madness provoke revulsion? How do we get past it? How would we ever know when we witnessed a bomb being smart? Would it refuse to sacrifice itself and therefore not explode?
What is common to these areas is that, while technical progress was rapid and we raced toward our goals, ultimately we didn’t understand what the goal was. Handwriting recognition is a simple matter of letter recognition until it isn’t. Then it’s a bigger problem, or a completely different problem.
When we find ourselves thwarted, we realize that the exercise actually raises more questions and brings out fundamental questions about what it is to be human. The questions to be answered then belong to biologists, psychologists, philosophers, artists, cosmeticians, writers and other non-technical folks.
Here we are racing along to the self-driving car. There are many wonderful new automotive technologies and more come out each year. So many of them are the building blocks required for self-driving cars, and they are arriving so quickly, that it seems to follow that this bottom-up approach will result in self-driving cars – eventually.
Or will it?
A Passive and Reactive Past
Until computing, sensors, actuators and software development reached a certain level, the idea of a self-driving car was science fiction. The advances of the past were reactive – seat belts and shoulder harnesses, air bags, automatic headlights and windshield wipers, anti-lock brakes, cruise control and so on. In a sense, these are fundamental building blocks toward a self-driving car. Calling these technologies passive or reactive is not slander. It’s hard to guess how many accidents were avoided because headlights were on automatically or because anti-lock disc brakes prevented the automobile from careening out of control like an older car with lock ‘em up drum brakes. Even when accidents weren’t avoided, it is impossible to guess at how many deaths and injuries were prevented because of a seat belt, shoulder harness and head bolster. When a car is hit head on and the engine slides under the car and the crumple zones vector force away from the driver, the likelihood of death or injury is greatly reduced. It’s improbable that we’ll find many new technologies that compete with those older “passive” ones in terms of the lives they’ve saved and will continue to save.
The safety of a self-driving car must be evident before people will sit in one and trust it. If these reactive items didn’t exist already, they’d have to be invented for a self-driving car to be practical. So all these safety technologies going back to the 1950s (and before) are necessary building blocks toward making people believe a self-driving car is safe.
In more recent times other reactive technologies have come to the fore. Cars automatically brake if they sense a barrier approaching too quickly – be it a car, a tree or a wall. Back-up sensors blare if you get too close to an object. Lane-wander avoidance mechanisms gently tug at the steering wheel if you meander out of your lane. Back-up cameras and sideview mirror cameras now give the driver a fuller view of the surroundings.
Each year we get more of these safety features, and each of them is a necessary component of a self-driving car. These new reactive features make vehicles far safer, and their value in that regard alone is hard to overstate.
A Proactive Future
Already we have self-driving cars that motor around tracks and follow directions. They parallel park themselves when instructed to do so. Many of these are available or becoming available today while others are the promises of tomorrow. A car that can follow a GPS signal on a map to get you to work and can even dead reckon when that signal is lost isn’t that far-fetched. Obviously, all the safety features we have now and many more are required before that becomes real. But those fundamentals have been developed.
It seems likely that in the very near future we’ll have cars that exhibit “flocking” behavior. Cars will be able to drive in heavy traffic like a murmuration of starlings or a school of anchovies. The movement is so coordinated and synchronized that some communications via sound or scent or group-mind must be at work, mustn’t it?
Interestingly the flocking and schooling behavior is based on three simple rules and isn’t that telepathic after all. Automobiles programmed to follow those rules wouldn’t need to communicate with one another in order to follow the rules. In fact, new smart murmuration vehicles could hit the market and be switched on during rush hour traffic in the midst of a number of “dumb” cars that we have on the road today.
There are only three rules or principles to follow: alignment, cohesion, and separation.
When birds flock together closely, they look to their neighbors and adjust their own alignment to closely match. Cohesion means steering toward the center of the nearby group – a bird doesn’t lag behind or drift out, but gets as close to the group as it can and no closer. Separation is the counterweight to cohesion: the bird or anchovy, while wanting to stay with the group, also wants enough distance from each of its neighbors to leave time to react to changes.
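The three rules are simple enough to sketch directly. Below is a minimal, boids-style update step in Python; the radii, weights, and the `step` function itself are illustrative inventions, not tuned values or anyone's production code:

```python
import math

# Illustrative parameters – invented for this sketch, not tuned.
NEIGHBOR_RADIUS = 10.0   # how far a boid "sees" its neighbors
SEPARATION_RADIUS = 2.0  # personal-space distance
W_ALIGN, W_COHERE, W_SEPARATE = 0.05, 0.01, 0.1


def step(boids):
    """One flocking update. Each boid is ((x, y), (vx, vy))."""
    new_boids = []
    for (pos, vel) in boids:
        neighbors = [(p, v) for (p, v) in boids
                     if p != pos and math.dist(p, pos) < NEIGHBOR_RADIUS]
        ax = ay = 0.0
        if neighbors:
            # Alignment: steer toward the neighbors' average velocity.
            avg_vx = sum(v[0] for _, v in neighbors) / len(neighbors)
            avg_vy = sum(v[1] for _, v in neighbors) / len(neighbors)
            ax += W_ALIGN * (avg_vx - vel[0])
            ay += W_ALIGN * (avg_vy - vel[1])
            # Cohesion: steer toward the neighbors' center of mass.
            cx = sum(p[0] for p, _ in neighbors) / len(neighbors)
            cy = sum(p[1] for p, _ in neighbors) / len(neighbors)
            ax += W_COHERE * (cx - pos[0])
            ay += W_COHERE * (cy - pos[1])
            # Separation: steer away from any neighbor that is too close.
            for p, _ in neighbors:
                if math.dist(p, pos) < SEPARATION_RADIUS:
                    ax += W_SEPARATE * (pos[0] - p[0])
                    ay += W_SEPARATE * (pos[1] - p[1])
        new_vel = (vel[0] + ax, vel[1] + ay)
        new_boids.append(((pos[0] + new_vel[0], pos[1] + new_vel[1]), new_vel))
    return new_boids
```

Notice that each boid looks only at its nearby neighbors – no telepathy, no central coordinator – which is why such vehicles could in principle mix with today's "dumb" cars.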
In a sense, it isn’t too difficult to recognize how we already flow in a school of automobiles with roughly similar rules. Our alignment is such that we stay in our lanes and switch from one into another. Our cohesion means we stay close to the car in front of us to minimize the total distance taken by the entire group but we maintain enough separation to prevent accidents and to help prevent constant stop and start braking. Our alignment might also take into consideration keeping cars out of our blind spot and our separation keeps us prepared to brake or move lanes if someone moves into our lane.
It won’t be long before we have cars that do much of that for us. We already have cruise control – cars will hold a set speed. In a car of the future we might set a maximum instead, letting the car choose the actual speed based on freeway or street conditions. And now we have cheap radar, laser and video sensors that can determine where every car around us is, through a full 360 degrees. With throttle control, accurate sensors, powerful CPUs and electronic steering (already on many models, whether drivers are aware of it or not), all the elements for flocking behavior are in place.
Like the passive and active improvements of the past, flocking behavior is going to be an important step toward self-driving automobiles.
But flocking alone isn’t self-driving any more than cruise control is. It still requires a driver at the wheel ready to take over when necessary.
- “That’s my exit so I have to get over here.”
- “That car half a mile up ran out of gas in my lane, I should get over.”
- “That person looks hurt, I should probably pull over there and see if I can help.”
Flocking will be very helpful for safety and comfort. Driving in rush hour traffic requires one to hold the wheel and keep an eye out, but there isn’t any gas-brake-gas-brake action. Just put the car into flock mode, much like cruise control, and let it control all that and even switch lanes in an emergency or honk the horn when there isn’t any other recourse but warning someone they are about to sideswipe you.
In short, it goes a long way to reducing the stress.
Are We There Yet?
And so we rush forward to the self-driving car. We are putting all the building blocks in place – GPS driven maps, a multitude of passive and active safety features, self-parking, flocking, sensors in a wide range of modes from infrared to sonar to reflected laser and so on. Certainly this must mean that the self-driving car is right around the corner, doesn’t it?
Here we go again. The building blocks and technologies of an endeavor are pouring out so fast, and we have answers to so many of the problems at hand, that it appears that as we stack more and more of these blocks on top of one another we’ll simply get there.
But history teaches us that when we’ve built that pyramid up far enough we’ll get to a point of existential crisis. Will it even be possible to get to where we want to go with the approaches we are taking?
End of the Road
While certain thorny technical problems lie ahead, we’re on the same road that leads back to the liberal arts college. The psychologists and philosophers and lawyers and the like are the ones who will be handed the problems created by the possibility of this new technology. Even if tomorrow we had a car that could leave your house, take you on surface streets to the freeway, drive you through bumper-to-bumper rush hour traffic and get you safely to work, it wouldn’t be enough. And we’re nowhere near that.
The problems here are more insidious. If you are sitting in your self-driving car, perhaps reading a book or watching TV, and a child darts out in the road, what’s your car going to do? If it can’t brake, should it swerve? If it swerves into on-coming traffic, does it gauge whether you’re likely to survive the accident? What if the other person isn’t likely to survive the head-on collision? What are the probabilities that they might see you and swerve out of your way? If the car ‘decides’ that swerving into on-coming traffic is too problematic, does it now swerve the other way into the guard rail and light post or tree, potentially killing you? Or should it keep you safe and run over the child? Putting aside the moral decision making, that’s a lot of decisions to weigh – will the other car kill me, will I kill him, will he swerve? All those probabilities and only probabilities.
What if it’s a dog instead? Can the car accurately make the distinction reliably enough to make that decision? If you can avoid the dog but result in side swiping another vehicle in the next lane, is that acceptable?
What if it is a blowing tumble weed?
Those are technical questions as much as ethical and legal questions.
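The weighing described above can be reduced to a toy sketch. Every number, name, and weight below is invented purely for illustration – which is precisely the problem the questions above raise: someone has to pick the probabilities and decide whose harm counts for how much.

```python
# A toy expected-harm comparison. All probabilities and weights are
# made up for illustration; a real car would have only estimates.

def expected_harm(p_fatal_to_me: float, p_fatal_to_others: float,
                  weight_me: float = 1.0, weight_others: float = 1.0) -> float:
    """Combine fatality probabilities into a single 'harm' score."""
    return weight_me * p_fatal_to_me + weight_others * p_fatal_to_others


options = {
    "brake_straight":  expected_harm(0.05, 0.60),  # may not stop in time
    "swerve_oncoming": expected_harm(0.40, 0.30),  # head-on risk for both cars
    "swerve_into_rail": expected_harm(0.25, 0.00), # all the risk falls on you
}
choice = min(options, key=options.get)
print(choice)  # with these made-up numbers: swerve_into_rail
```

The unsettling part isn't the arithmetic – it's that the `weight_me` and `weight_others` knobs exist at all, and someone other than you will set them.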
Now the automaker along with their actuaries, lawyers, insurers and ethical philosophers have to sit down and decide what set of rules should be set in place, how much money will be required to cover death and dismemberment, and what the car will cost in order to cover it all.
But should the automaker decide that you are the one who should die if a child darts into the road? Or should they be the ones to decide, to your horror and endless nightmares, that you will watch as the car runs the child over?
And what happens when the sophisticated rules engine and neural network decides, one day, to accelerate someone’s car into the side of a building? Then it happens again. And again. In a highly complex system with so many intelligent systems and subsystems interacting, it may be that the automaker will not be able to find the cause of the bug. Are the artificial neural networks misidentifying an object? Is the object identified correctly but conveyed inexactly, resulting in bad rule firings or crashed subsystems?
After a few years, state and federal lawmakers will put new laws in place and every year they’ll change because of society and technology. Philosophers will make varying ethical cases for the ranking of priorities, the Heritage Foundation, ACLU, NAACP and every other special interest will push their agenda – and your life, health, insurance and car costs will hang in the balance.
It isn’t just life or limb decisions either. What if you and your wife are on the way to the hospital because she’s having a baby? You tell the car, “step on it!” But it dutifully obeys the traffic laws. Or perhaps it does as you say. And if these cars do as they’re told, you’ll have the guy who is perpetually late for work telling his car to “step on it!” every day. Even if it doesn’t cause an accident, it might well create traffic problems and reactions to the aggressive behavior the car exhibits because its owner has informed it that this is an emergency.
Can these cars be hacked? Can owners reprogram them to their own liking? Can they be sabotaged by pimple-faced hackers or held for ransom by ransomware developers?
Do these new automated cars have manual override? What happens if you are in a country lane made of gravel or one with ruts or even out in the middle of a field with no road at all. Are you just out of luck or are you going to drive by mouse and hope the car doesn’t get stuck? Can it recognize a slippery road, a muddy bog, loose gravel, or a pile of nails? You and I can fairly easily. But we can also read the letters of a hand-written note with better than 99% accuracy.
If they have manual override, how many people will remember to drive in a few years? Do you want someone driving who came of age after the self-driving car became real?
The Uncanny Alley
Now we enter the Uncanny Alley: that wide gap between where we think we are and how rapidly we believe we are progressing, and, ultimately, that place where the nexus of our humanity and our technology raises questions not anticipated by our designs.