Self-driving cars are not as close as you think – unless you hire a chauffeur.

Sleepy

In the mid-1980s Marvin Minsky and Roger Schank predicted the coming AI winter. Like a housing bubble or the irrational exuberance of a stock market run, the enthusiasm for AI had ramped up unrealistic expectations and then came crashing down. But why? It’s important to understand that history because self-driving cars are traveling down that same road.

In the late 1970s and early 1980s, AI technologies made gobsmacking progress. Academics waged debates akin to holy wars about the “correct” way to do AI. Should the software teach itself and learn, or should humans write the rules? Handwriting recognition yielded to the former, while artificially intelligent systems for diagnosing illnesses worked better as sets of rules.

And machines became good at reading handwriting and at diagnostics. But not good enough. Accuracies of 99% were touted, and that sounds great. But a single page of text holds roughly 3,000 characters. If 1 of every 100 letters is wrong, that’s 30 misspellings per page.

The Armageddon that brought on the AI winter came when military funding dried up. Or, perhaps more accurately, when the military gave up. One can almost sense the excitement they must have felt at the idea of “smart bombs” and “smart missiles”. But once again, close wasn’t enough.

And the blistering AI progress cooled to a sub-zero winter. Getting that last 1% was harder than anyone thought. Perhaps the whole approach was a blind alley? Maybe it isn’t possible to get 100% letter recognition without the context of the word or sentence or paragraph. That might mean the entire approach was wrong.

But who could tell? Maybe one small tweak to a single line of code would crack the nut. Or not. On reflection it’s obvious that the brain doesn’t decipher individual letters in isolation when we read a sentence.

And so we turned for wisdom to the linguists, biologists, psychologists, and all the non-programmers who understood how humans do the trick.

In any case, the military stopped paying for lack of progress; 99% just wasn’t good enough. AI winter fell, and the AI community settled down for a long winter’s hibernation.

Creepy

In a related field, we’ve found that as automatons and robots become more life-like, they become creepier. We made good progress until we hit that particular fact. If robots are cute little dogs or plastic wheeled robots, we find them interesting, amusing or fun. But if they are rubber-faced humanoids talking to us, we find them, well, creepy. The effect is so pronounced and so difficult to get past that researchers gave it a name – the “uncanny valley”. Anyone who thought about it would realize we’ve known about the effect for quite some time. Dolls that look just a little too human but not quite human enough are the stuff of horror movies. Mannequins lurking in the dark, as if about to reach out and grab you, are another monster movie regular.

It turns out that the valley between looking close to human and looking human without looking creepy is vast. Making an automaton indistinguishable from a human is a few major scientific breakthroughs away. Androids aren’t going to infiltrate our ranks and fool us any time soon.

Lessons

We should look to these previous cases for some guidance on where self-driving automobiles are going. What did these other experiences teach us?

Rapid progress augured well for reaching an end goal – artificially intelligent machines for war or peace, androids serving our whims and charming us with whimsy. But that progress slowly ground to a crawl.

As we approached the various goals, it became clear the goal itself was unclear, because we didn’t really understand it in human terms. How does the brain accurately and quickly read handwritten notes or different type fonts? What aesthetic triggers in our brains make something look uncanny? Do we have a visceral reaction to a not-quite-normal-looking face because we read it as indicative of physical or mental illness? Does that not-so-human face provoke revulsion because evolution taught us that reaction is safe? How would we ever know when we witnessed a bomb being smart? Would it refuse to sacrifice itself and therefore not explode?

What is common to these areas is that, while technical progress was rapid, we ultimately didn’t understand the means to the goal. Handwriting recognition is a simple matter of letter recognition until it isn’t. Then it’s a bigger problem, or a completely different problem. Making latex faces works until it scares the bejesus out of us.

When we find ourselves thwarted, we realize that the exercise actually raises more questions about what it is to be human. Those questions must be answered by biologists, psychologists, philosophers, artists, cosmeticians, and writers.

Speedy

Here we are racing toward the self-driving car. New automotive technologies arrive each year, and many represent the building blocks required for self-driving cars. Those building blocks are coming so fast that this bottom-up approach will soon put self-driving cars everywhere.

Or will it?

A Passive and Reactive Past

Until computing, sensors, actuators and software development reached a certain level, the idea of a self-driving car was science fiction. The advances of the past were passive – seat belts and shoulder harnesses, air bags, automatic headlights and windshield wipers, anti-lock brakes, cruise control and so on. In a sense, these are fundamental building blocks toward a self-driving car. Calling these technologies passive is not slander. It’s hard to guess how many accidents were avoided because headlights were on automatically or because anti-lock disc brakes prevented the automobile from careening out of control like an older car with lock ‘em up drum brakes. Even when accidents weren’t avoided, it is impossible to guess how many deaths and injuries were prevented by a seat belt, shoulder harness and head restraint. When a car is hit head on, and the engine slides under the car, and the crumple zones vector force away from the driver, the likelihood of death or injury is reduced. It’s improbable that we’ll find many new technologies that compete with those older “passive” ones in terms of lives saved.

The safety of a self-driving car must be evident before people will sit in one and trust it. If these items didn’t exist already, they’d have to be invented in order for a self-driving car to work. All these safety technologies going back to the 1950s (and before) are necessary building blocks of a safe, self-driving car, but they are all passive or reactive.

In more recent times, reactive technologies have come to the fore. Cars automatically brake if they sense a barrier. Backup sensors blare if you get too close to an object. Sensors and actuators gently tug at the steering wheel if you meander out of your lane. Backup cameras and sideview mirror cameras now give the driver a fuller view of the surroundings. These are not just passive as in previous generations; they actually sense and react to the environment. This too is a prerequisite to a safe, self-driving car.

Each year we get more of these safety features and each of them is a component of a self-driving car. These new reactive features make vehicles far safer and their value in that regard is inestimable.

But passive and reactive technologies are not the proactive technologies necessary for a self-driving car.

A Proactive Future

Already we have self-driving cars that motor around tracks and follow directions. Some are on the roads doing comical things like rear-ending police cars. They parallel park themselves when instructed to do so. Many of these capabilities are available or becoming available today, while others are the promises of tomorrow. A car that can follow a GPS signal on a map to get you to work, and can even dead reckon when that signal is lost, isn’t that far-fetched. Obviously, all the safety features we have now and many more are required before that becomes real. But those fundamentals have been developed.

Murmuration

In the near future we’ll have cars that exhibit “flocking” behavior. Cars will drive in heavy traffic like a murmuration of starlings or a school of anchovies. Those flocks of birds and schools of fish move in coordinated, synchronized fashion. Some communication via sound or scent or group-mind must be at work, mustn’t it?

Interestingly, the flocking and schooling behavior is based on three simple rules and isn’t that telepathic after all. Automobiles programmed to follow those rules wouldn’t need to communicate with one another in order to follow them. In fact, new smart murmuration vehicles could hit the market and be switched on during rush hour amid the “dumb” cars we have on the road today.

There are only three rules or principles to follow. Briefly, they are:

  • Alignment
  • Cohesion
  • Separation

When birds flock together closely, each looks to its neighbors and changes its own alignment to match theirs. Cohesion means getting as close to the group as possible but no closer: a bird doesn’t lag behind and drift out; it steers toward the center of the nearby group. Separation is the counterweight to cohesion: the bird or anchovy, while desiring cohesion, also wants enough distance from each of its neighbors to ensure enough time to react to changes.
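Here is a minimal sketch of those three rules in Python, in the spirit of the classic “boids” simulations. The neighborhood radius, minimum distance and rule weights are illustrative assumptions, not values from any real vehicle system; a production controller would be far more involved.

    import numpy as np

    def flock_step(positions, velocities, radius=5.0, min_dist=1.5,
                   w_align=0.05, w_cohere=0.01, w_separate=0.1):
        # One update step of the three-rule flocking model.
        # positions and velocities are (N, 2) arrays of agent states.
        new_velocities = velocities.copy()
        for i in range(len(positions)):
            offsets = positions - positions[i]
            dists = np.linalg.norm(offsets, axis=1)
            neighbors = (dists > 0) & (dists < radius)
            if not neighbors.any():
                continue
            # Alignment: steer toward the average heading of nearby agents.
            align = velocities[neighbors].mean(axis=0) - velocities[i]
            # Cohesion: steer toward the center of the nearby group.
            cohere = positions[neighbors].mean(axis=0) - positions[i]
            # Separation: steer away from any neighbor that is too close.
            crowded = neighbors & (dists < min_dist)
            separate = -offsets[crowded].sum(axis=0) if crowded.any() else 0.0
            new_velocities[i] += (w_align * align + w_cohere * cohere
                                  + w_separate * separate)
        return positions + new_velocities, new_velocities

    # Ten agents with random positions and headings settle into a flock.
    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 20, size=(10, 2))
    vel = rng.uniform(-1, 1, size=(10, 2))
    for _ in range(100):
        pos, vel = flock_step(pos, vel)

Notice that each agent uses only what its own “sensors” report about nearby neighbors; nothing in the update requires car-to-car communication.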

It isn’t difficult to recognize how we already flow in a school of automobiles under roughly similar rules. Our alignment is such that we stay in our lanes and switch from one into another. Our cohesion means we stay close to the car in front of us to minimize the total distance taken by the entire group, while we maintain enough separation to prevent accidents and to help avoid constant stop-and-start braking. Our alignment might also mean keeping cars out of our blind spot, and our separation keeps us prepared to brake or change lanes if someone moves into ours.

It won’t be long before we have cars that do much of that for us. We already have cruise control – cars will go at a set speed – and now we have cheap radar, laser and video sensors that can determine where every surrounding car is located, a full 360 degrees around. With throttle control, accurate sensors, powerful CPUs and electronic steering (already on many models of cars, whether folks are aware of it or not), all the elements for flocking behavior are in place.

Like the passive and reactive improvements of the past, flocking behavior is going to be an important step toward self-driving automobiles. But it is still reactive. It can be put in place today if it is desirable.

But flocking alone isn’t self-driving any more than cruise control is. It still requires a driver at the wheel ready to take over when necessary.

  • “That’s my exit so I have to get over here.”
  • “That car half a mile up there ran out of gas in my lane; I should get over.”
  • “That person looks hurt, I should probably pull over there and see if I can help.”

Flocking is helpful for safety and comfort. Driving in rush hour traffic still requires one to hold the wheel and keep an eye out, but the flocking can handle the gas-brake-gas-brake action. Just put the car into flock mode, much like cruise control, and let it control the speed, even switch lanes in an emergency, or honk the horn when there isn’t any other recourse. It’ll make life safer and less stressful in rush hour.

Yet it is still reactive.

Are We There Yet?

And so we rush forward to the self-driving car. We are putting all the building blocks in place – GPS-driven maps, a multitude of passive and reactive safety features, self-parking, flocking, sensors in a wide range of modes from infrared to sonar to reflected laser and so on. Certainly this must mean that the self-driving car is right around the corner, mustn’t it?

Here we go again. The building blocks and technologies of an endeavor are pouring out so fast, and we have answers to so many of the problems at hand, that it appears we need only stack more and more of these blocks on top of one another and we’ll simply get there.

But history teaches us that when we’ve built that pyramid up far enough we’ll get to a point of existential crisis. Will it even be possible to get to where we want to go with the approaches we are taking?

End of the Road

While certain thorny technical problems lie ahead, we’re on the same road that leads back to that liberal arts college. The psychologists and philosophers and lawyers are the ones who will be handed the problems created by the possibility of this new technology. Even if we had a car that could leave your garage, take you on surface streets to the freeway, drive you through bumper-to-bumper rush hour traffic, get you to work and park in your spot, it wouldn’t be enough. And we’re nowhere near that point technically.

The problems are nuanced and insidious. If you are sitting in your self-driving car, perhaps reading a book or watching TV, and a child darts out into the road, what is your car to do? If it can’t brake, should it swerve? If it swerves into oncoming traffic, does it gauge whether you’re likely to survive the accident? What if the other driver isn’t likely to survive the head-on collision? What are the probabilities that they might see you and swerve out of your way? If the car “decides” that swerving into oncoming traffic is too problematic, does it swerve the other way into the guard rail, light post or tree, potentially killing you? Or should it keep you safe and run over the child? Putting aside the moral decision making, that’s a lot to weigh – will the other car kill me, will I kill him, will he swerve? All those probabilities, and only probabilities.
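To make that concrete, here is a sketch of what weighing “only probabilities” looks like as bare arithmetic. Every option name, probability and harm weight below is invented purely for illustration – no automaker publishes such numbers, and deciding who assigns them is exactly the unresolved question.

    # Hypothetical expected-harm comparison. All names and numbers are
    # invented for illustration; nothing here reflects a real system.
    options = {
        # option: list of (probability, harm) outcome pairs
        "brake_straight":  [(0.7, 10.0), (0.3, 0.0)],
        "swerve_oncoming": [(0.4, 8.0), (0.6, 0.5)],
        "swerve_roadside": [(0.5, 6.0), (0.5, 2.0)],
    }

    def expected_harm(outcomes):
        # Weight each outcome's harm by its estimated probability.
        return sum(p * harm for p, harm in outcomes)

    for name, outcomes in options.items():
        print(f"{name}: expected harm {expected_harm(outcomes):.2f}")

    # The machine simply picks the minimum – but who chose the weights?
    print(min(options, key=lambda name: expected_harm(options[name])))

The arithmetic is trivial; the weights are not. That is the part the geeks and engineers cannot supply on their own.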

What if it’s a dog instead? Can the car make the distinction reliably enough to act on it? If you can avoid the dog only by sideswiping another vehicle in the next lane, is that acceptable? What if it is a blowing tumbleweed?

As the occupant of the car, do I get to decide? What if another passenger disagrees with my priorities? After all, in a self-driving car we are all passengers. Is this a democracy or an autocracy?

The automaker’s actuaries, lawyers, insurers and ethical philosophers will have to sit down and decide what set of rules should be put in place, how much money is required to cover death and dismemberment, and what the car will have to cost in order to cover it all. The geeks and engineers aren’t qualified for, or inclined toward, such decision making.

But should the automaker decide that you are the one who should die if a child darts into the road? Or should they be the ones to decide, to your horror and endless nightmares, that you must watch as the car runs the child over in order to protect you?

And what happens when the sophisticated rules engine and neural network decides, one day, to accelerate someone’s car into the side of a building? Then it happens again. And again. In a highly complex system with so many intelligent systems and subsystems interacting, it may be that the automaker can’t find or determine the cause of the bug. Are the artificial neural networks misidentifying an object? Is the object identified correctly but conveyed in a way that fires the wrong rule or crashes a subsystem?

After a few years, state and federal lawmakers will create new laws, and every year they’ll change those laws in response to social and technical changes. Philosophers will make varying ethical cases for the ranking of priorities; the Heritage Foundation, the ACLU, the NAACP and other special interests will push their agendas – and your life, health, insurance and car costs will hang in the balance.

It isn’t just life-or-limb decisions, either. What if you and your wife are on the way to the hospital because she’s having a baby? You tell the car to “step on it!” but it dutifully obeys the traffic laws. Or perhaps it does as you say. And if these cars do as they’re told, you’ll have the guy who is perpetually late for work telling his car to “step on it!” every day. Even if that doesn’t cause an accident, it might well create traffic problems because the owner informed the car that this is an emergency.

Can owners program these cars their own way? Can they be sabotaged by pimple-faced hackers or held hostage by ransomware developers?

Do these new automated cars have a manual override? What happens if you are on a country lane made of gravel, or one with ruts, or even out in the middle of a field with no road at all? Are you out of luck, or are you going to drive by mouse and hope the car doesn’t get stuck? Can it recognize a slippery road, a muddy bog, loose gravel, or a pile of nails? You and I can. But we can also read the letters of a handwritten note with better than 99% accuracy.

If they have manual override, how many people will remember how to drive in a few years? Do you want someone driving who came of age after the self-driving car became real? Does it then require different types or classes of driver’s licenses?

The Uncanny Alley

Now we enter the Uncanny Alley: that wide gap between where we think we are, how rapidly we believe we are progressing, and, ultimately, that place where the nexus of our humanity and our technology raises questions not anticipated by our designs.

For Sam