The robot car of the future will be designed to kill you. Killing you will be routinely engineered into robot cars - a fundamental protocol. Who is going to be responsible when a robot car kills?
Updated 7 July 2016:
Humanity Records First Robot Car Death
Joshua Brown, 40, of Ohio USA was killed on May 7 in Williston, Florida, when his Tesla Model S, running on the company's 'Autopilot' system, failed to identify a semi-trailer that had crossed the road in the vehicle's path. According to the Levy County Journal, local police report that the top of the vehicle "was torn off by the force of the collision".
The US road safety regulator, the National Highway Traffic Safety Administration (NHTSA) has opened an enquiry into the incident.
Mr Brown, reportedly a professional technology enthusiast, had previously posted videos of his Tesla running on Autopilot - one of which showed the feature in action as it avoided a collision with a truck. That video rapidly clocked up more than one million views after Tesla CEO Elon Musk tweeted it.
Tesla issued an extensive 537-word statement essentially distancing itself from Mr Brown's death. In that statement, Tesla said: "Autopilot is getting better all the time but it is not perfect and still requires the driver to remain alert."
The obvious question here: Why roll out a system so profoundly dangerous - even if the probability of death is remote - and call it 'Autopilot'? The name alone is an implied invitation to goof off. (To test this hypothesis, use the YouTube 'search' function and scroll through the first 20 results for the keywords 'Tesla Autopilot driver asleep'. Plenty of people are goofing off.)
It appears, despite the clever positioning, Elon Musk's electric car company is playing a kind of Russian roulette with people's lives by rolling out unproven systems, and apparently doing the final R&D on public roads. Reading between the lines of the public statement, Tesla appears to be prepared to shoulder none of the responsibility for the inevitable deaths that will flow from this practice.
The bigger question is: Are our regulators even beginning to grapple with the complex chain of moral and legal liability as robot cars inevitably start killing people?
Killer Robots Transcript
Can you think of another appliance that’s designed to kill you in certain circumstances? I’m not talking about accidental death. I mean a designed-in event cascade - digital decisions - with ‘kill you’ as the preferred choice at the end of the program. After years of successfully engineering cars to protect you better, automotive technocrats are today grappling with designing cars that drive themselves to kill you - they’re not talking about it, but that’s what they are doing. Tomorrow’s driverless car technology will - occasionally - be your judge, jury and executioner - in real time. Let’s see how we get from now to then.
The car industry is very busy pissing all over itself over cars that drive themselves. So exciting. They’re all losing control of their bladders. The driverless car technology is there, and it’s quite sexy, but the ethics and morality are not - not there, and not sexy. And you don’t actually have to wait until some day in the future to have driverless car technology riding shotgun with you, as you drive. You can have that right now. That driverless car technology is being embraced by thousands of drivers - affordably - today. And it’s not killing anyone - in fact, it’s saving thousands of lives. And it needs to - according to the World Health Organisation, cars killed 1.24 million people in 2010 … one person every 25 seconds, nonstop. The bleak cost of all that mobility. So before we get to the killer robot cars of the future, let’s look at the lifesaving driverless car technology.
Semi-autonomous Safety Systems Today
I recently went to the Western Sydney International Dragway, and with the help of Subaru Australia, tested Subaru's EyeSight safety technology under controlled conditions. Subaru’s EyeSight safety system identified the problem, every time. Brilliant. Then it executed a textbook emergency stop. Pretty clever technology. And bozos like that pedestrian in the test (me) - more fixated on his mobile phone than the real world - get saved by engineers all the time. If that wasn’t a setup, he’d go home to his family never knowing that EyeSight’s advanced autonomous driving technology did all the heavy lifting, snatching his life back from the jaws of the Grim Reaper.
THE TRACK TEST - SUBARU EYESIGHT AUTONOMOUS EMERGENCY BRAKING
Subaru’s EyeSight system - you might think of it as the foundations of a fully autonomous car - uses twin stereoscopic cameras up high in the windscreen to see the world ahead, but EyeSight’s real technical sophistication - the heavy lifting - is hidden in the black boxes - the computer systems and the code that processes the images and continually assesses the driving environment for threats. Humans, well, we’re fallible. We get tired, angry, distracted. We make bad choices. Computers just don’t goof off. I caught up with Nick Senior, the Managing Director of Subaru Australia, for more.
NICK SENIOR MASTER INTERVIEW
"The key thing about EyeSight is to try to prevent crashes. The most common one is nose-to-tail crashes. So, with EyeSight it senses, and then it can pre-empt braking to try to stop completely - and if not, to minimise [the crash]. The other one that's quite common is drifting out of your lane, particularly on freeways - and I think probably in Australia that's a fairly important one given the long distances and the high incidence of driver fatigue.
"The third [major function] is adaptive cruise control where the system will maintain the distance from the car in front of you.
"This is not intended to replace the driver. It's the driver's responsibility to obey [the rules], to be sharp and alert. But the second thing with EyeSight: It has not just come onto the radar (excuse the pun) in the past three or four years. EyeSight has been 20 years in the making and the testing. Indeed, if you drove around Sydney today, you would see some engineers with fourth generation EyeSight, testing that."
EyeSight functionality. Above, left to right: 1. Autonomous emergency braking in action. 2. Adaptive cruise control, driver's eye view. 3. Adaptive cruise control proximity display. 4. Lane departure warning system activated.
"In the first half of the year, 43 per cent of the vehicles that we sold - roughly 21,000 vehicles - had EyeSight. We would love to get EyeSight in every vehicle by the end of this decade. But there is a bit of a problem at the moment because of the success of EyeSight around the world: just getting the equipment. The factories are basically running flat-out at the moment.
"There's not a car with EyeSight as an option: we either have it [as standard], or it's not available ex-factory at the moment to fit. And it's technology [costing] around $1500 per car.
"We've attracted customers [with EyeSight]. They've admitted that they are paying more, because it's the right decision to put their employees in cars fitted with EyeSight. We have a fleet customer with 90 cars who previously had an accident rate of between 20 and 40 per cent per annum. Since EyeSight - at the moment - their accident rate is zero per cent. Those are the sort of figures that, over the short term - but I think also over the longer term - will demonstrate the benefits of EyeSight. Not so good for the parts business, but we'll cop that." - Nick Senior, Managing Director, Subaru Australia
The twisted irony there is: There’s no reporting system for any death that’s avoided. We’ll never know, collectively, how many lives this kind of autonomous technology will save. The car you drive today is a rolling computer network. Dozens of black boxes all wired up together through the CAN bus - the controller area network. Just like your network at home, only wired.
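To make the "rolling computer network" idea concrete, here is a minimal sketch of the shape of a classic CAN 2.0A data frame - an 11-bit arbitration ID plus up to 8 payload bytes - which is roughly what those black boxes exchange on the bus. The field names and the example message are mine, invented for illustration; real automotive stacks and libraries differ.

```python
# Illustrative sketch only: the basic shape of a classic CAN 2.0A data
# frame. An 11-bit identifier doubles as the message priority (lower ID
# wins bus arbitration), and the payload is at most 8 bytes.

from dataclasses import dataclass


@dataclass
class CanFrame:
    arbitration_id: int   # 11-bit identifier; lower value wins arbitration
    data: bytes = b""     # 0-8 payload bytes

    def __post_init__(self):
        if not 0 <= self.arbitration_id <= 0x7FF:
            raise ValueError("standard CAN IDs are 11 bits (0-0x7FF)")
        if len(self.data) > 8:
            raise ValueError("classic CAN payload is at most 8 bytes")


# A made-up example: some module broadcasting a two-byte sensor reading
# that every other node on the bus can listen to.
frame = CanFrame(arbitration_id=0x1A0, data=bytes([0x12, 0x34]))
print(hex(frame.arbitration_id), frame.data.hex())
```

The point is not the code itself but the architecture it implies: every module hears every frame, so a camera module, a brake module and a steering module can already coordinate - which is exactly the plumbing autonomy builds on.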
If cars are that connected, if they can see for themselves, and steer for themselves, and if they can brake for themselves … well, philosophically, Skynet’s really just around the corner. And that means cars will soon take one small step for the car, but a giant leap for humanity. And that’s when driving starts to look like James Cameron and Quentin Tarantino collaborated on the script.
Tesla made headlines when it downloaded autonomous driving capabilities to existing Model S cars while the owners slept - but check out what Hyundai built into the Genesis saloon flagship, months before Teslas got half smart.
HYUNDAI GENESIS EMPTY CAR CONVOY
Above: screen shots from the advertisement (watch it in the video at the top of the page). Essentially, five Genesis saloons are on a test track at Hyundai's US proving ground. All stunt drivers except the lead driver invoke autonomous control and jump out, onto a flatbed truck. The lead driver then dons a blindfold and goes entirely 'hands off' (and 'feet off'). The flatbed overtakes the convoy, pulls in front, and performs an emergency stop. All the Genesis saloons follow suit. None crash. Applause... It's very impressive.
- Hyundai has robotic reverse parking tech here today, too. Santa Fe robot reverse park test >>
The technology is very clever. And it’s here. Clearly. Stunt drivers and a clever convoy on a proving ground - some brilliant storytelling and compelling cinematography - that’s one thing, but have you ever stopped to consider the enormous ethical dimension to giving that convoy the official imprimatur to operate on a road near you? When a robot car kills, and they will, who is actually responsible? And - in impossible morally conflicted situations where there’s a choice about who lives and who dies - how do you choose who dies? What’s the robot roadmap of human life and death?
The Dark Side of Mobility
Transportation kills people. There’s never been a benign transport system. Planes, trains, automobiles - ships. Everything from riding horses to the space shuttle has killed people. There’s a dark side to transportation. It’s feedback. Google’s robot car fucked up the other day - monumentally - it crashed into a bus. This was in February 2016.
Google said it bears (quote) “some responsibility” for the crash in Mountain View, California. Nobody was hurt, thankfully, but in this year, which marks the 120th anniversary of modern road death, we’re looking at a little piece of history right there in Mountain View.
How long do you think it will be before a robot car actually kills someone?
Is a Robot a Driver?
On February 4, the US National Highway Traffic Safety Administration said the artificial intelligence system piloting the Google car could be considered a driver under US Federal Law. That’s a big step forward for getting these things on the roads.
Chris Urmson, director of the Google self-driving car project, will testify before Congress in the US this month - on efforts to develop safe and effective autonomous cars.
Executives from GM, Delphi and Lyft Inc. will also spruik the technology. And it’s so sexy - if you get off on tech.
The Ethics of Life & Death
But how do we decide who lives and who dies? Because people are going to die. What’s the moral landscape, when someone’s got to go, and there’s a choice about who goes? Nobody ever talks about that, and yet it’s a key foundation of the technology. Car companies won’t even acknowledge it. Here’s an example: a little kid steps out. No warning. There’s just not enough room to stop, without killing the kid. Physics. Energy. Grip. Deceleration. Computer takes a snapshot of the prevailing alternatives. Perhaps they are: Jam on the brakes, but kill the kid. Or: swerve. Avoid the kid, but you get cleaned up by an oncoming truck. There’s no win-win scenario behind door number three. Someone dies. Then take this challenge: Sell that decision to the parents of the kid, or your widow and bereaved children. Good luck with that.
Who Decides Who Lives & Who Dies?
Do we leave this decision to the automakers? Would you rather buy the car programmed for altruism (the kid lives; you die) or do you buy the car designed to protect you at all costs to others? Maybe that could be a unique selling proposition - the new Audi A8: designed to kill everyone else first. It’s hard to sell pedestrian protection, today, to the average car buyer. They’re just not interested. Carmakers do market research on that crap.
Imagine trying to get that philosophically uninterested buyer, who (commercially) doesn’t give a shit about pedestrian protection, to sign the contract with fine print outlining the terms and conditions under which a robot car will decide to go 100 per cent Schwarzenegger on you, the owner. Hasta la vista, baby. And good luck selling that.
How about if there’s five of you in the car, and only one little kid stepping out? Or four and two? Is this cold, hard calculus in part a numbers game? What about if the person who steps out is a 75-year-old, possibly with incipient dementia, only a couple of years to go - and you’re a 40-year-old with a family to support? Is remaining likely lifespan a valid consideration?
What about right and wrong - the road rules? Do rule-breakers deserve to die first? Because then, the kid goes. Every time. Is illegal behaviour actually part of life or death robot car crash calculus? I honestly don’t know. Pedestrian crosses the road illegally, maybe drunk: is that different to stepping out legally and stone-cold sober? Is one of those pedestrians more deserving of death?
You don’t get a death penalty for mass murder in Australia.
Human Brain -Vs- Robot Brain
It’s a brave new world, huh? When a human crashes a car and kills someone, logic’s usually not a factor. Human reactions in highly stressed situations, on tightly compressed timelines, are wholly instinctive. A deep structure in the human brain takes over - your amygdala - it’s designed to deal with that shit. It takes logic offline. Floods you with noradrenaline and cortisol. It jump cuts through reality. Sidelines rational thought. Evolutionary biology in action. It’s a bad feeling.
Your amygdala goes off the chain whenever there’s a credible threat - it’s a survival mechanism.
The event itself is decompiled much later, you have to play it back, right? And official blame is post-processed by investigators. Robots don’t do that - they use logic and warp-speed computing power to react.
We wing it; they don’t.
With robots, it’s always a rational choice. And that choice is determined by a programmer. If this and this and this, then that person dies. Three million times a second. It’s a lot of responsibility for a programmer. Where’s the code of conduct?
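The "if this and this and this, then that person dies" cascade can be sketched, purely hypothetically, in a few lines. Everything here is invented for illustration - the outcome names, the risk numbers and the weighting scheme; no carmaker has published real collision-arbitration code. But it shows where the ethics hides: in one weighting parameter.

```python
# Hypothetical sketch only: a crude rule cascade of the kind described
# above. All names, risk estimates and weights are invented.

from dataclasses import dataclass


@dataclass
class Outcome:
    action: str            # e.g. "brake", "swerve"
    occupant_risk: float   # estimated probability of occupant fatality (0-1)
    bystander_risk: float  # estimated probability of bystander fatality (0-1)


def choose_action(outcomes, occupant_weight=1.0, bystander_weight=1.0):
    """Pick the action with the lowest weighted expected fatality risk.

    The weights are the ethically loaded part: occupant_weight above
    bystander_weight is the 'protect the owner at all costs' car; the
    reverse is the 'altruistic' car from the earlier section.
    """
    return min(
        outcomes,
        key=lambda o: occupant_weight * o.occupant_risk
                      + bystander_weight * o.bystander_risk,
    )


# The scenario from above: brake and probably hit the kid,
# or swerve into the path of the oncoming truck.
scenario = [
    Outcome("brake", occupant_risk=0.05, bystander_risk=0.9),
    Outcome("swerve", occupant_risk=0.8, bystander_risk=0.05),
]

print(choose_action(scenario).action)                       # equal weights
print(choose_action(scenario, occupant_weight=5.0).action)  # owner-protecting
```

With equal weights the sketch swerves (the kid lives, you take the truck); tilt the weights toward the occupant and it brakes (you live, the kid doesn't). Same code, same sensors, same physics - one number flipped the verdict. That number is the decision nobody is debating.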
Will Robot Driving Be Safer?
To be fair, robot cars will make driving safer. Robots can’t goof off, they don’t drive negligently. Robot cars intentionally killing people is an example of feedback. But it will happen. And when it does, the media will lap it up - killer cars - tell me that’s not a tabloid TV story. The fact that the roads will be, overall, safer, won’t get a run.
I am not an ethicist. I’m an engineer. So I don’t know what the answer is. I can’t derive the killer car calculus. I don’t know how you decide who lives and who dies. But I do know one thing: if we leave this to carmakers and governments - they will fuck it up. And they will fuck it up monumentally. Would you trust Volkswagen to get this right? Or Takata? Or GM?
Would you even trust them to come clean when they fuck it up and fix it pro-actively? Or do you think they might instead suppress it to the extent possible and apologise only after their pants are around their ankles and the house lights have come up? We got caught - we’re sorry now. Because history demonstrates that’s exactly what they do. I’d be fascinated to hear your thoughts on the moral landscape of our robot driving future. Leave a comment below.