The imminent autonomous vehicle panic
When you lose control of your car, an ancient piece of lizard brain, buried deep beneath layers of logic, comes alive. I was recently traversing a snowy pass on the way out of Queenstown, New Zealand, when I experienced this first-hand. I wasn't moving fast, but a patch of ice caused the car to switch from its usual ‘going in a straight line’ mode to an unexpectedly inventive ‘swerving frictionlessly into oncoming traffic’ mode. The wheels gripped dry ground within a few seconds, but in that time a surge of adrenalin and fear seeped into my stomach and legs and arms. My fingers dug trenches in the leather of the steering wheel.
I wasn't in any real danger during those few seconds. There were no oncoming cars, and the cars in front of and behind me were far away. But the cocktail of fear and panic in the pit of my guts begged to differ.
We’re each embedded with a valuable intuition: fear rises in proportion to our powerlessness. Ask someone strapped to an economy-class piece of foam, doing battle with gravity and wind in an attempt to land safely on tarmac, and they’ll agree, with wide eyes, despite the demonstrably low risk of air travel.
Control has a big impact on how we perceive risk. It’s something I've seen first-hand with wind energy projects built in rural Australian communities. I strongly suspect the introduction of autonomous vehicles will inspire a reaction that closely mirrors this mis-perception of risk.
I wonder how nervous autonomous vehicles will make us. Google’s promo video is bright, saturated and cheerful, but some UK research suggests the wider population harbours some nervousness about the concept:
“Almost half of all respondents (48%) said they will not consider purchasing a driverless vehicle in the future, with a third (33%) saying they will not even be a passenger in such a vehicle.
“Drivers, on the whole, are sceptical about the technology; there is a perception that computers are less trustworthy than humans. For instance, 80% of motorists believe driverless cars should include controls such as a steering wheel to allow passengers to take control if necessary. By contrast, just 6% do not see the need for this.”
This doesn't seem to have much to do with marketing, or the company behind the technology. In casual conversation, I've heard a similar mix of views. Some people actively yearn for a robot car, but most seem shaken by the suggestion. “Nah, mate,” said the youthful bartender at the Bearded Tit down the road from me, “I wouldn't like it at all. Just don’t know what it’s going to do, you know.”
There are plenty of novel ethical quandaries spawned by allowing a robot to propel a deadly metal box filled with meaty humans down a highway. On the other hand, humans are responsible for the vast majority of motor vehicle deaths and injuries. In 2010 in Australia, there were 1,248 fatalities associated with motor vehicles — 75% to 90% of these were caused by human error.
We deny the reality of risks that we perceive we can control, despite our influence sometimes making these things riskier. Conversely, when handing control to a computer makes something safer, we’re likely to judge it as riskier.
“In addition to basic safety concerns, 80 per cent thought that such vehicles should have a steering wheel included, in case the car’s occupants need to take control.”
As with many technologies before it, the fear of autonomous vehicles will manifest not as an openly declared reaction to powerlessness, but through a variety of proxies, and discourse and media reporting will follow a surprisingly predictable pattern. Let’s dig into how we’ll react to the imminent autonomous vehicle techno-panic.
A focus on single events
Alan steps out of his apartment block, and heads down the street. He flicks through the headlines on his phone. Every site has led with the same story:
ANOTHER TRAGEDY: MOTHER OF 2 KILLED IN ROBOT CARNAGE
DRIVERLESS DISASTER IN SYDNEY AS ANOTHER LIFE TAKEN
THE WHEELS OF DEATH CANNOT BE STOPPED
Media headlines that focus heavily on single tragedies are already emerging around this technology. Google has begun posting monthly reports on the safety of its test vehicles, including footage of a car being rear-ended due to human error. The resulting headlines illustrate this phenomenon in embryonic form.
It’s within these blurred boundaries of statistical nuance that these headlines find their stride — fear sells, and the creation and maintenance of a panic around driverless technology will certainly be lucrative for organisations that thrive on the attention that fear generates.
We can see this around electric car technology — three minor fires in electric vehicles were widely reported, and Tesla’s stock fluctuated wildly in response. At the same time, 184,500 petrol car fires went unreported.
This ties into what’s called the ‘availability heuristic’ — a mental shortcut we use to judge the importance of events: if something is easily recalled, or prominently discussed, we assume it’s important. Terrorist attacks, for instance, garner a large amount of news coverage, and we consequently over-estimate the risk of these events. In the months after 9/11, road deaths in America increased by an estimated 1,595 people, due partly to people avoiding flights after the highly-publicised disaster.
Anecdotal evidence
You’re sitting around the dinner table, and your friends brought some of their friends. You haven’t met them before, but they arrived in a car with a steering wheel, which is weird. You ask about it, out of curiosity.
“I noticed you've got an old-model steering-wheel car”
“Yep, I refuse to get into those robot meat-grinders”
“Can I ask why?”
“Look, my colleague’s brother says he got into one, and it just went totally haywire, and started swerving around, and they-”
Stories are powerful, and they’re potent tools for buttressing views that either have no supporting evidence or directly contradict it. They remain the dominant form of passing on wisdom, but they’re profoundly clumsy.
That anecdotal evidence is subject to a large array of biases doesn't take away from its popularity, nor will it ever reduce the capacity of people in positions of power to propagate it instead of scientific evidence. “Are you saying this person’s a liar?” they’ll shoot back, when challenged. A contemporary example comes from Michele Bachmann, an American politician.
Though vaccination is a safe technology, Bachmann is compelled to communicate her opposition to it through the invocation of a story. It’s ludicrous, but it works, and it’ll emerge around autonomous cars, for sure.
Over-regulation
The Senator stands up in parliament. “Ladies and gentlemen”, he declares, “I am going to tell you a story about Anna, whose life was ruined by a dangerous driverless death-trap”. His long speech concludes with the tabling of his signature policy — ‘Anna’s law’ — banning the sale of driverless cars. When he finishes his speech, the house rises to its feet and collectively applauds, as the live stream beams the vision out to the public.
Reactionary regulation is a common occurrence, and given the nature of driverless cars, and our innate reaction to losing control, I strongly suspect this technology will be a prime candidate for reactionary policies.
There’s a sizeable collection of examples to choose from. A response to a few high-profile ‘one-punch’ crimes in NSW was the introduction of mandatory sentencing. A collection of Queensland backbenchers call regularly for the removal of fluoride from the water supply (they think it’s ‘toxic’). Western Australia created a policy designed to bait and kill sharks — a clumsy response to a series of shark attacks on the Western Australian coastline (the policy was labelled ‘stupid’ by a shark attack survivor).
Glenn Aitken, Frankston City Deputy Mayor, went as far as demanding that a smart meter be removed from the home of a resident complaining of health impacts.
The pattern is simple: the magnitude of a risk is misperceived, individuals and media outlets react, and politicians demand greater regulation.
The allure of this simulation of political action breaks through ideological barriers. Consider Australia’s most prominent libertarian — Senator David Leyonhjelm — who is, unsurprisingly, fighting against tobacco and gun regulations, but also calling for a new government body to regulate ‘infrasound’ from wind turbines.
The temptation to capitalise on a constellation of health fears is so great that unapologetic libertarian parliamentarians call for the immediate creation of government regulation and oversight.
Pockets of over-reaction to autonomous vehicles won’t result in the death of the industry, but they’ll certainly be a setback. Earnest politicians, reciting anecdotes and clasping front-page headlines in front of a bank of cameras will call for decisive, swift action, but it’ll pass, hopefully.
Tech companies are pushed into the world of human perception with increasing regularity. Spotify’s shuffle algorithm, used to play tracks in random order, had to be changed because people saw patterns in the noise. It would frequently pick tracks by the same artist several times in a row, or it’d give you three jazz songs, then three metal songs. This is how true randomness works: sometimes it clusters into what we perceive as patterns. Despite this, Spotify had to change its algorithm to suit our expectations of randomness.
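To see why a truly random shuffle clusters, here’s a minimal Python sketch (not Spotify’s actual code; the ‘spread’ version loosely follows the evenly-spaced-with-jitter idea Spotify has described). It compares the average number of same-artist adjacencies produced by a uniform shuffle against an artist-spreading shuffle:

```python
import random
from collections import defaultdict

def true_shuffle(tracks):
    """Uniform (Fisher-Yates) shuffle: every ordering is equally likely."""
    tracks = tracks[:]
    random.shuffle(tracks)
    return tracks

def spread_shuffle(tracks):
    """Sketch of an artist-spreading shuffle: each artist's tracks get
    evenly spaced positions in [0, 1) with a little random jitter,
    then the playlist is sorted by those positions."""
    by_artist = defaultdict(list)
    for track in tracks:
        by_artist[track[0]].append(track)   # track = (artist, title)
    placed = []
    for songs in by_artist.values():
        random.shuffle(songs)
        n = len(songs)
        offset = random.random() / n        # random starting point per artist
        for i, song in enumerate(songs):
            jitter = random.uniform(-0.1, 0.1) / n
            placed.append((offset + i / n + jitter, song))
    return [song for _, song in sorted(placed)]

def adjacent_repeats(playlist):
    """Count adjacent pairs of tracks by the same artist."""
    return sum(1 for a, b in zip(playlist, playlist[1:]) if a[0] == b[0])

# A hypothetical playlist: 4 artists with 5 tracks each.
playlist = [(artist, i) for artist in "ABCD" for i in range(5)]

trials = 2000
avg_true = sum(adjacent_repeats(true_shuffle(playlist)) for _ in range(trials)) / trials
avg_spread = sum(adjacent_repeats(spread_shuffle(playlist)) for _ in range(trials)) / trials
print(f"same-artist adjacencies per shuffle (uniform): {avg_true:.2f}")
print(f"same-artist adjacencies per shuffle (spread):  {avg_spread:.2f}")
```

On this playlist the uniform shuffle averages about four same-artist adjacencies per run (19 adjacent pairs, each with a 4/19 chance of matching), while the spread version produces far fewer: the ‘less random’ result that feels more random to listeners.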
I suspect car manufacturers are going to have to reach a similar compromise, based on our own perceptual pitfalls. Companies are keenly aware of the need to take an incremental approach, and expose the public to the technology gradually. A Volkswagen engineer puts it this way:
“A lot of this is getting people comfortable with the technology, showing people a benefit. The idea is the driver is always in control — the vehicle is there to help you.”
Business Insider predicts 10 million autonomous vehicles on the roads by 2020. There are predictions of a close relationship between this technology and on-demand services, traffic optimisation and a big reduction in complexity. This creates a tricky conundrum: if the rate of technological change is that fast, will people have time to ‘get comfortable’ with the technology?
As machines change, the human brain stays pretty much the same, and so our reactions are likely to play out the way they have in the past. We can fend off media distortion by demanding representative and level-headed coverage. We can consider anecdotes and stories, but never in the place of scientific evidence. And we can call politicians out when they inevitably decide to take advantage of health fears around technology. We can recognise when we’re collectively panicking, and stop, and breathe.
The collection of instincts, biases and heuristics we carry around are mostly useful, but they misfire with increasing frequency, as the world changes quickly, and we adapt slowly. If we trim these misfires from the prominent position they currently hold, we can enjoy the benefits of new machines in a much more considered and thoughtful way. We shouldn’t throw caution to the wind. We should just disable its current capacity to override scientific inquiry.
I hope we can, with autonomous vehicles — it would be tragic to see the technology hit a wall, due simply to our fear of losing control.