This article is part of Wevolver's research for the 2020 Autonomous Vehicle Technology Report and focuses on the American perspective for reasons of data: the largest number of driverless cars, operators, and reports are in the United States. It is also a useful case study as a car-centric culture. Major safety issues not discussed for brevity include hacking of vehicles, liability, human interactions with autonomous vehicles[3,4], and infrastructure. Reference material on these topics is available as cited.
This graph shows how far we have come in terms of road safety in the U.S. from the days when unlicensed 14-year-olds were legally driving for hire in cities with no lanes or traffic lights, to the modern safety standards enjoyed today. Many think this curve still has room to fall further thanks to so-called “driverless” cars, because 94% of road accidents are caused by driver error. The nearly forty thousand deaths in road accidents every year in the U.S. alone feel as unnecessary as those that occurred before seatbelts or airbags became mandatory.
The first person killed by a driverless car on a public road died in Arizona in March 2018, yet driverless cars are operating nearby today. How safe is this rapidly spreading technology? We say “self-driving technology,” but what we are actually referring to is a collection of technologies brought together to deliver a car capable of driving itself, as seen in the graph below. What this graph is missing, however, is the software, which can be very complicated, as illustrated by the small bar graph below that.
Software, or how a machine thinks, is arguably the biggest challenge. Comparing the code in a 2016 luxury vehicle to the machine learning behind today’s autonomous vehicles is unfair, since they work a little differently. The push to develop artificial intelligence has been driven at least in part by how much more efficiently it encodes behavior at scale. Previously, there had to be a specific line or lines of code for every possible interaction, and the machine simply would not work when it encountered something unexpected, until someone could patch the software with more lines of code to account for the new conditions.
By contrast, machine learning gives digital logic the ability, after a good deal of training, to learn from information and decide what to do without a specific instruction for every situation that could ever arise. That does require a lot more computing power, observational data, and mathematically refined models to pull off, and other metrics show us just how much more is required to run these new autonomous road vehicle operating systems compared to the Advanced Driver Assistance Systems in luxury vehicles of the last few years. Both rely on data from sensors to tell you if you are getting too close to an object, but only autonomous driving systems analyze and fuse data from various sensors and maps to the degree that they are able to control the vehicle completely without supervision.
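The contrast between the two approaches can be sketched in a few lines. This is a deliberately minimal illustration, not any vendor's actual autonomy stack: the function names, object labels, and thresholds below are all hypothetical, and the "learned" model is a toy nearest-neighbour lookup standing in for the far richer models real systems use.

```python
def rule_based_brake(obstacle_type, distance_m):
    """Hand-coded rules: every case must be anticipated by a programmer."""
    if obstacle_type == "pedestrian" and distance_m < 30:
        return "brake"
    if obstacle_type == "vehicle" and distance_m < 10:
        return "brake"
    # Anything unanticipated falls through: the system has no answer
    # until someone patches in another rule.
    return "unknown"

# A minimal "learned" stand-in: 1-nearest-neighbour over labelled
# observations of (object_size, distance) -> action. Behaviour comes
# from data, not from enumerated cases.
training_data = [
    ((1.0, 5.0), "brake"),
    ((1.0, 50.0), "continue"),
    ((0.2, 5.0), "brake"),
    ((0.2, 80.0), "continue"),
]

def learned_action(features):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_data, key=lambda ex: sq_dist(ex[0], features))
    return label

# The rule-based system fails on an unanticipated object; the learned
# model still produces an answer for inputs it never saw verbatim.
print(rule_based_brake("shopping cart", 4.0))  # -> unknown
print(learned_action((0.6, 4.0)))              # -> brake
```

The trade-off the article describes falls out directly: the learned version generalizes without a rule for every case, but it needs training data, and its answer is only as good as that data.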
That is a lot more complexity than we humans are used to designing into our vehicles. And yes, while several sensing systems have difficulty in various conditions, and localization accuracy does have room for improvement, as does handling of mapping data, the driverless car that killed a pedestrian in 2018 saw her long enough to reclassify her three times before hitting her.
We would be forgiven for thinking an autonomous vehicle should have thought to stop before hitting a person. Current software systems do have trouble identifying cues in our communication like gestures and eye movement. They are best adapted to structured systems, but unstructured events on the road, like an ambulance speeding against traffic or malfunctioning signal, can be difficult for current systems to understand quickly.
While all this is true, the car that killed a pedestrian in 2018 had safety functions like its emergency braking system disabled, possibly so it could act more like a human driver, without sudden, erratic stops. It could be argued that the system was handicapped before failure, by people. That is exactly what the National Transportation Safety Board (NTSB), which analyzes transportation accidents to inform safer transportation practices, says in its preliminary report and hearings on this accident. The NTSB found fault in nearly every human involved: Arizona highway policy, the human failsafe in the car who was distracted by their phone, the pedestrian with methamphetamines in their system walking outside the crosswalk, and a slew of entirely factual criticism for Uber, which was found to be at fault but not prosecuted in court:
Uber did not have a formal safety plan in place at the time of the crash.
The board also found that Uber’s autonomous vehicles were not properly programmed to react to pedestrians crossing the street outside of designated crosswalks. Moreover, Uber revealed to the board that its self-driving test vehicles had been involved in over three dozen crashes prior to the fatal one in Tempe.
Yet the pressure for safety on autonomous road vehicles is even greater than on human drivers. People have trouble trusting machines and are quick to lose confidence in them. In a 2016 study, people were found to forgive a human advisor, but stop trusting a computer advisor–for the same, single mistake.
Improving road safety to where we are today took decades, but it looks like we are moving a lot faster than we used to. Legislators in 36 states in the U.S. and Congress have already enacted laws on driverless cars, and the U.S. Department of Transportation’s Comprehensive Management Plan for Automated Vehicle Initiatives includes over $120 million in targeted research and technology development funding, informed by frequent engagement with stakeholders outside the public sector.
Companies have banded together too, as in the Automated Vehicle Safety Consortium, to jointly develop safety best practices at each level of autonomy. Its first set of recommendations–perhaps unsurprisingly given its impressively broad yet entirely corporate makeup, including Uber–focuses mostly on “fixing” the failsafe drivers and the general public, but more will be tackled in time.
Even the United Nations has released a “preliminary framework” for the globalization of this technology. Although it is just a series of topics and working principles at this point, half of them on aspects of safety, it has been criticized for missing a few important elements of this technology’s global proliferation, and for loopholes in its definition of safety that sound as if they might let autonomous vehicles “get away” with anything short of bodily harm. This will likely be revised as more countries become concerned about self-driving cars in their neighborhoods.
At the working level, real progress is being made through industry collaboration, like the “Safety First for Automated Driving” report. The 150+ page document, authored by eleven companies including Intel and BMW, begins with the world’s laws and regulations, moves on to verification and validation of component systems, cybersecurity, and a breakdown of all the different elements and steps, physical and digital, as well as case studies on how to make a car that drives itself. It is entirely non-binding, and the authors freely admit how much they simply do not know yet, especially about the hardest, still-unsolved problems.
Big, car-centric cities in the United States are thinking about this, too. Los Angeles’s Department of Transportation’s Strategic Implementation Plan, for example, devotes much of itself to the integration of autonomous vehicles via strict control, including APIs meant to be enforced on all autonomous vehicles operating in Los Angeles, whether ground or aerial. Ambitious as this is, while we are much better than we once were at adapting to technological change, technology is probably advancing more rapidly than our conceptual understanding of what we do with it.
Google’s Waymo, which is already selling rides in its autonomous taxis inside a geofenced area in Phoenix, AZ, USA, uses “pathological situations,” like people jumping out of bags dressed as Elmo in front of the car, to prepare its systems for the unexpected. Today, there are not only driverless rides for sale in Arizona, but startups filling niche markets and generating revenue.
Markets and morality are already converging. It seems now only a matter of time, as these technologies continue to improve and as regulators keep pace, before driverless cars can operate in increasingly complex scenarios compared to the well-ordered environment of newer cities in the American Southwest.
The speed of the development and deployment of this technology cannot be overstated. The Defense Advanced Research Projects Agency (DARPA) held an Autonomous Vehicle Grand Challenge in 2004. “Every vehicle in that first Grand Challenge in 2004 crashed, failed, or caught fire,” and 15 years later there are autonomous taxis for hire.
The race to autonomous platforms is happening with every other kind of vehicle, too. These autonomous cars, trucks, boats, planes, drones, etc. will interact with each other and humans. This only amplifies our already heightened need for trust in vehicles that do not need a human to go from point A to point B. The term “Trusted Autonomy” was first coined to describe how humans interact with each other. It now extends to machines and digital systems.
Few understand the connection between the technology that lets a car drive itself and the technology that lets planes, ships, and spacecraft ‘drive’ themselves better than the military, because they work in each of these domains every day. The Defense Advanced Research Projects Agency (DARPA) has been running an Assured Autonomy program to develop trusted autonomy through the design, development, and deployment of land, sea, air, and space systems. DARPA is investigating tools like mathematical models that quantify risk and uncertainty in algorithms during development, and safety kernel methods to provide assurance in operations. The same autonomy technology story playing out in the automobile sector should be expected in every other.
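One of the tools the paragraph above mentions, mathematical models that quantify risk and uncertainty, can be illustrated with a very small sketch. This is a hypothetical example, not a DARPA method: it uses Monte Carlo sampling to estimate how confident a vehicle can be that it will stop within a gap, given a noisy speed estimate. The kinematic model, noise level, and numbers are all invented for the illustration.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def stopping_distance(speed_mps, reaction_s=1.0, decel_mps2=6.0):
    """Simple kinematic model: distance covered during the reaction
    time plus braking distance at constant deceleration."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def p_stop_in_gap(measured_speed, sensor_sigma, gap_m, trials=10_000):
    """Estimate the probability of stopping within gap_m metres,
    assuming Gaussian noise on the speed measurement."""
    hits = 0
    for _ in range(trials):
        true_speed = random.gauss(measured_speed, sensor_sigma)
        if stopping_distance(true_speed) <= gap_m:
            hits += 1
    return hits / trials

# A point estimate says 15 m/s stops in ~33.8 m, comfortably inside a
# 40 m gap; the Monte Carlo estimate attaches a probability instead of
# a yes/no answer, which is the kind of quantified risk the text describes.
p = p_stop_in_gap(measured_speed=15.0, sensor_sigma=1.0, gap_m=40.0)
print(f"P(stop within gap) ≈ {p:.2f}")
```

The value of framing safety this way is that "will it stop in time?" becomes a number that can be thresholded, audited, and compared across designs, rather than a binary claim.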
Now the question of “safety” must be expanded. Driverless cars could replace the largest easily trained labor market; what will those people do? Are they safer without income because road fatalities are lower? If we ignore the economic security of those left behind, we could have safer roads but end up with a less secure society. The consulting firm McKinsey even asserted as early as 2016 that existing technology could replace up to 45% of jobs. Even if the economy creates new jobs fast enough, will we retrain enough people quickly enough to maintain our way of life?
Or does this technology, as it spreads across far more than transportation, open up possibilities for new ways to live? There may be more than one answer as we choose–or perhaps allow–this suite of technologies we call “autonomy,” its users, and its applications to be controlled. Making autonomous road vehicles more trustworthy seems only a matter of time; how much we trust each other, a matter of choice.
Continue learning? Complement this article with our 2020 Autonomous Vehicle Technology Report, to which Jordan Sotudeh has been an important contributor.