Is Waymo’s driverless car safe?

Written by cfmccormick | Published 2017/10/21
Tech Story Tags: self-driving-cars | waymo | google | safety | autonomous-cars

A new safety report and consumer education campaign from Waymo are full of reassurances, but they leave many questions unanswered.

Driverless cars have enormous potential to save lives and improve traffic safety. But there’s a long way to go before that vision is realized. Along the way, champions of autonomous vehicles are going to have to convince a lot of skeptics that the technology can work in the real world, not just under controlled testing conditions. Are they going to succeed?

Definitely a trace of doubt in my mind

The list of skeptics is long. A recent Pew Research Center poll found that over half of Americans wouldn’t want to ride in an autonomous vehicle because they don’t trust the technology. As one respondent put it: “What happens if the vehicle malfunctions? So we all just crash and die I guess?” Most people are profoundly uncomfortable with the idea of machines making safety-critical decisions, and examples of non-safety-critical automation such as ATMs or automated call centers don’t make them any more comfortable.

As the company with the longest (although not the most expensive) track record in developing AVs, Waymo is keenly aware of these concerns. This month it launched an effort to communicate with consumers about the technology, called Let’s Talk Self-Driving. Unfortunately, and surprisingly for a company with Waymo’s resources, the campaign’s website is far from impressive. It’s little more than statements from various organizations that they “support self-driving technology”. And while these partners, including MADD and the National Safety Council, are respectable, their endorsements are remarkably ambiguous; the most the NSC will say is “Even as self-driving technology is tested and perfected, the National Safety Council is committed to making today’s roads safer”. Let’s talk half-hearted support!

Another important skeptic of self-driving technology is the US Congress. Questions about safety dominated Congressional hearings earlier this year, and one U.S. Senator’s first personal experience with an autonomous vehicle didn’t go very well. (“They said ‘Trust the vehicle’, and as we approached the concrete wall, my instincts could not resist, and I grabbed the wheel, touched the brake, and took over manual control,” said Sen. Bill Nelson.) Unsurprisingly, ensuring safety is a key part of legislation currently under consideration.

Waymo is certainly aware of these concerns among policymakers, and the Safety Report that it released earlier this month seems partly aimed at addressing them. Sections of the report are strikingly similar to the ones that would be required in “safety evaluation reports” if legislation being debated in the US Senate passes. The report contains many interesting items, which I’ll detail below, but at a high level it raises as many new questions as it answers. And despite the fact that it’s clearly a marketing document (as David Silver notes), it seems just as unlikely to convince skeptical policymakers as the Let’s Talk Self-Driving campaign is to convince a skeptical public.

Five flavors of safety

So what’s in the report? It starts with some sobering statistics: vehicle crashes kill 1.2 million people worldwide per year (37,000 in the US), and in the US alone the economic harm from crash deaths and injuries is almost $600 billion a year. These are appalling numbers, but hardly new to anyone paying attention to this field.

The report goes on to spell out the five areas of Waymo’s safety program. The first area is called “Behavioral Safety”, and it’s probably what most people think of when considering whether AVs are safe: does the AV drive safely, correctly reading the road situation (including perceiving other cars and pedestrians) and making good decisions about how to drive in those conditions? Waymo seems to be trying to reassure readers on this point by describing how its technology works, but the description is pitched over the heads of non-technical readers while being far too shallow for technical ones. Ultimately, behavioral safety is intimately linked to the concept of the Operational Design Domain (ODD), but we’ll come back to that.

The second area is “Functional Safety”, which is about subsystem redundancy. This isn’t a very sexy topic, and probably doesn’t occur to anyone without an engineering mindset, but it’s important. If an AV’s on-board computer crashes, there may not be time for it to reboot before something bad happens. Similarly, if the forward-facing camera or lidar fails, the vehicle may not be able to use its remaining sensors to safely navigate whatever the road conditions are at that moment.

The report says that all of Waymo’s AVs are equipped with a secondary computer that can operate the vehicle if the primary one fails. That’s a remarkable fact: even if the secondary computer is less capable than the primary (say, all it can do is maneuver the vehicle safely to a stopping point out of traffic), it still adds significant cost to the overall vehicle. Imagine buying a backup laptop for everyone in your company, just in case the primary fails; you definitely wouldn’t be popular with the CFO.
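
To make the redundancy idea concrete, here’s a minimal sketch of heartbeat-based failover, in Python. Every name and timing in it (the 100 ms timeout, the SecondaryController) is invented for illustration; Waymo hasn’t published how its primary/secondary handoff actually works, so treat this as the general pattern rather than Waymo’s design.

```python
import time

HEARTBEAT_TIMEOUT_S = 0.1  # hypothetical: declare the primary dead after 100 ms of silence

class SecondaryController:
    """Stand-in for the less capable backup computer."""
    def execute_safe_stop(self):
        print("Secondary engaged: steering out of traffic and braking to a stop.")

class FailoverWatchdog:
    """Hands control to the secondary if the primary stops sending heartbeats."""
    def __init__(self, secondary):
        self.secondary = secondary
        self.last_heartbeat = time.monotonic()
        self.failed_over = False

    def on_heartbeat(self):
        # Called whenever the primary computer reports that it is alive and healthy.
        self.last_heartbeat = time.monotonic()

    def tick(self):
        # Called on a fixed schedule by a supervisor independent of the primary.
        silence = time.monotonic() - self.last_heartbeat
        if silence > HEARTBEAT_TIMEOUT_S and not self.failed_over:
            self.failed_over = True
            # The secondary doesn't continue the trip; its only job is to
            # bring the vehicle to a safe stop out of traffic.
            self.secondary.execute_safe_stop()

watchdog = FailoverWatchdog(SecondaryController())
watchdog.on_heartbeat()  # primary reports healthy
time.sleep(0.15)         # primary goes silent past the timeout...
watchdog.tick()          # ...so the secondary takes over
```

The key design point is that the watchdog runs independently of the primary, so a crashed or hung computer can’t block its own replacement.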

Similarly, there are redundant power systems on every vehicle. How exactly these work is unclear, but it’s probably some form of battery backup and power electronics sufficient to get the vehicle to a safe stop if the primary fails. But AVs use a lot of power, so this backup is probably a lot more complicated and expensive than a 9V battery.

The third safety area is “Crash Safety”. Waymo seems to treat this as essentially the same as physical crash safety for non-AVs: a question of air bags, crumple zones, seat belts, and so on. That’s a little too simplistic, since AVs carry far more equipment and complexity than conventional vehicles. What happens to the computer in the trunk or the sensors on the roof during a crash? Do they pose additional safety risks to the passengers? The report doesn’t enlighten us on this.

The report notes that the base vehicles Waymo is using, 2017 Chrysler Pacifica Hybrid minivans, are certified by the manufacturer (FCA) to comply with all federal vehicle safety standards. Waymo is clearly asserting a distinction between its area of responsibility (the additional equipment and software that turn a conventional vehicle into an autonomous one) and that of the vehicle manufacturer. Whether this will be legally defensible remains to be seen. After all, today’s OEMs aren’t off the hook financially if a parts supplier declares bankruptcy; similar logic may ultimately apply to AVs.

There’s also the question of post-crash behavior of the vehicle. Are there any procedures that an AV should be required to conduct after a collision, such as preserving data? What about notifying local law enforcement and emergency services, if the vehicle still has data connectivity? Waymo is currently working with some law enforcement and first responder agencies to train them in handling AVs after a crash, but this is only a start.

We do learn in a later section of the report that in the event of a crash, Waymo’s human-operated control center would contact the passengers via voice communications, assuming those links survive the crash. Waymo would also send a team of its own personnel to the scene. This is kind of amazing to think about: it’s as if Ford dispatched a team to the site of every F150 that hit a sedan. I’m not even sure that’s legal, and it’s certainly not something currently recommended for people involved in a collision. There are many issues around this topic that need to be resolved, but aren’t.

Area number four is “Operational Safety”, which is focused on the interaction between the rider and the vehicle. It doesn’t get much attention in the remainder of the report, so I won’t dwell on it here.

The fifth and final area is “Non-collision safety”, which seems to be about ensuring that the electrical system and sensors don’t pose any direct hazard to technicians, passengers, first responders, and bystanders. This also doesn’t get much attention in the rest of the report, but there are lots of issues that should be discussed. For instance, how should fire and rescue crews handle self-driving cars, which are loaded with extra sensor and computing equipment? Similarly, would active sensors such as lidar or radar cause interference with each other if multiple autonomous vehicles are driving in the same area?

There’s no escaping from the ODD

Weather conditions are one important dimension of the Operational Design Domain for AVs. Image source: Waymo Safety Report.

After articulating these five areas, the report returns to the concept of the Operational Design Domain (ODD). It notes that Waymo AVs are geo-fenced, meaning the autonomous features will only operate in certain areas. This makes sense, since it’s the easiest aspect of the ODD to actually implement and enforce.

Interestingly, the report notes that speed is one of the dimensions of the ODD. Presumably this means that the autonomous features won’t operate if the vehicle is going too slow or too fast — although neither of those situations should ever occur if the autonomous driving system has been operating effectively. (This may be about restricting drivers from switching over to autonomous mode if they’re already speeding.)
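
For illustration, here’s what the two most mechanical ODD checks, the geofence and the speed envelope, might look like in code. The polygon, coordinates, and speed bounds below are all invented; this is a toy sketch of the concept, not Waymo’s logic.

```python
def point_in_polygon(lat, lon, polygon):
    """Ray-casting test: is (lat, lon) inside a polygon of (lat, lon) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        if (lon1 > lon) != (lon2 > lon):  # edge spans the point's longitude
            t = (lon - lon1) / (lon2 - lon1)
            if lat < lat1 + t * (lat2 - lat1):  # edge crosses above the point
                inside = not inside
    return inside

def within_odd(lat, lon, speed_mph, geofence, speed_bounds=(0.0, 45.0)):
    """Hypothetical ODD gate: inside the geofence AND inside the speed envelope."""
    lo, hi = speed_bounds
    return point_in_polygon(lat, lon, geofence) and lo <= speed_mph <= hi

# A made-up rectangle roughly around Chandler, AZ, where Waymo tests.
geofence = [(33.25, -111.95), (33.25, -111.75), (33.40, -111.75), (33.40, -111.95)]
print(within_odd(33.30, -111.84, 30.0, geofence))  # True: in the box, sane speed
print(within_odd(33.30, -111.84, 70.0, geofence))  # False: too fast for this ODD
```

Real ODDs have many more dimensions (weather, lighting, road type), and most of them are far harder to measure than position and speed.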

Of course, the biggest challenge of the ODD (as I’ve discussed in a previous post) is incorporating human behavior. For SAE Level 2 and 3 vehicles, humans are required to be paying enough attention to the road that they could take over driving on short notice, if the AV system is unable to handle driving tasks. This is an enormously complex and dangerous requirement, and it’s largely outside of the control of the vehicle designers and manufacturers.

Waymo explicitly points out these problems, along with its early experience with what it calls “advanced driver-assist technologies”, the systems that make up Level 2 and 3 AVs. That experience taught the company how difficult the “handoff problem” is, and led to Waymo’s decision to pursue only “fully self-driving” vehicles. Of course, that assertion raises lots of questions. No vehicle is “fully self-driving” if it ever hands direct control of driving back to a human, for example when navigating areas far outside of known, mapped locations.

The report describes the Waymo vehicle as Level 4, such that it can perform all driving tasks within a defined geographic area (i.e. geo-fencing) and under defined conditions (e.g. weather, lighting, traffic, etc.). Presumably those conditions don’t include requiring the human passenger to pay attention or be ready to resume control.

But this remains a profoundly slippery point. If conditions — weather, time of day, unexpected emergency or police activity — change, then the system is no longer within “defined conditions” and the human is expected to take over. This could happen quite quickly, so in effect it’s very nearly the same situation as Level 3. There’s still no completely bright line between the idea of humans being entirely without responsibility in the vehicle, and being responsible only when conditions are no longer within the operational design domain, whatever those conditions turn out to be or however quickly they change.

Everyone out of the pool

All of this raises the question of what the vehicle should do if it decides that it is outside of the ODD, for whatever reason. The report explains that this fallback action, or “minimal risk condition”, is to get to a safe spot and pull over. That’s certainly better than what some automakers have designed (such as coming to a complete stop in the current lane). In principle, it’s a very sensible idea, but it’s obviously much more taxing on the autonomous system, which has to make decisions about changing lanes and selecting a place to stop after it has already gone outside its acceptable operational parameters. At best, that’s a lot of planning ahead; at worst, it’s wildly unrealistic.
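
As a thought experiment, the fallback behavior can be pictured as a tiny state machine. The states and inputs below are invented; the report gives the goal (reach a safe stop out of traffic), not the mechanism.

```python
from enum import Enum, auto

class Mode(Enum):
    NOMINAL = auto()       # normal autonomous driving inside the ODD
    MINIMAL_RISK = auto()  # outside the ODD: maneuvering toward a safe stop
    STOPPED = auto()       # parked out of traffic, trip over

class FallbackPlanner:
    """Toy sketch of a 'minimal risk condition' fallback."""
    def __init__(self):
        self.mode = Mode.NOMINAL

    def step(self, within_odd, at_safe_stop_spot):
        if self.mode is Mode.NOMINAL and not within_odd:
            # Leaving the ODD must not mean stopping dead in the lane: the
            # planner still has to change lanes and pick a shoulder, i.e.
            # keep driving competently outside its nominal envelope.
            self.mode = Mode.MINIMAL_RISK
        elif self.mode is Mode.MINIMAL_RISK and at_safe_stop_spot:
            self.mode = Mode.STOPPED
        return self.mode

planner = FallbackPlanner()
print(planner.step(within_odd=True, at_safe_stop_spot=False))   # Mode.NOMINAL
print(planner.step(within_odd=False, at_safe_stop_spot=False))  # Mode.MINIMAL_RISK
print(planner.step(within_odd=False, at_safe_stop_spot=True))   # Mode.STOPPED
```

The hard part, of course, is hidden inside that middle state, where the system must do exactly the kind of driving it has just declared itself unfit for.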

I told you never to call me here

The report goes on to note that the safety-critical systems on board the vehicle, those making actual driving decisions and those handling the map data, are isolated from the wireless communications system. That’s consistent with what Waymo CEO John Krafcik said earlier this year, citing concerns about hacking. While this may be best practice from a cybersecurity point of view, it creates some confusion about exactly when and how the vehicle’s software can (and should) be safely updated.

If Waymo’s AVs are primarily operated as part of a fleet (as opposed to being personally owned), then presumably they will be updated back at a trusted depot or maintenance facility. That’s fine, but it adds to the maintenance requirements of the vehicles. It also essentially rules out any easy update method for personally owned AVs. Would your home network be secure enough for software updates that could potentially crash your car and kill you? Probably not. So will you take it in for regular software updates at your local AV dealer? Sure, right after you update your anti-virus software, save more for retirement, and get back on that diet you’ve been talking about for years.
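
The report doesn’t say how depot updates would be authenticated, but the standard pattern is to verify an update cryptographically before installing it. Here’s a minimal sketch using a shared-key HMAC; a production system would more likely use asymmetric signatures, and everything here (the key, the function names) is hypothetical.

```python
import hashlib
import hmac

# Hypothetical symmetric key provisioned to the vehicle over a trusted channel
# at the depot. This illustrates "authenticate before install", not Waymo's
# actual (undisclosed) update mechanism.
DEPOT_KEY = b"example-shared-secret"

def verify_update(image_bytes, expected_tag):
    """Accept an update image only if its HMAC-SHA256 tag matches."""
    tag = hmac.new(DEPOT_KEY, image_bytes, hashlib.sha256).digest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(tag, expected_tag)

image = b"...new driving software image..."
good_tag = hmac.new(DEPOT_KEY, image, hashlib.sha256).digest()
print(verify_update(image, good_tag))         # True: depot-signed image installs
print(verify_update(image + b"!", good_tag))  # False: tampered image is rejected
```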

It’s worth noting that Waymo’s approach is in stark contrast to Tesla’s, which has been pushing over-the-air software updates to major parts of the Autopilot suite since 2015. As with any mobile system, over-the-air updates mean that devices are more likely to be running the latest and most secure software, and thus are less exposed to recently discovered exploits. But network security isn’t perfect, and the advantages of more frequent over-the-air updates need to be weighed carefully against the attack surface they open up. Waymo and Tesla have come to fundamentally opposite conclusions on this question, which has profoundly shaped their safety policies.

Simulating safety

There’s an extensive section in the report discussing how Waymo uses simulation to virtually test its vehicles on thousands of variations of difficult road scenarios, and improve how its software handles them. This kind of simulation capability is a powerful advantage that Waymo brings to the self-driving car game, and it’s hard to imagine the traditional car companies (Ford, BMW, etc.) building a similar capability. While simulated miles aren’t as valuable in terms of testing as real-world miles, they’re still very important for validating software performance under a wide variety of conditions.

This section again highlights a profound difference between Waymo and Tesla in accumulating testing mileage. Tesla has famously racked up hundreds of millions of on-road miles with Autopilot, leveraging the driving of tens of thousands of personally owned vehicles. That may be a hundred times more than Waymo, but not all miles are created equal. Using simulation to amplify the value of real-world data is a very effective strategy. Amplifying a smaller number of high-value miles (driven with rich sensor data in difficult conditions) may ultimately prove more successful than accumulating a much larger quantity of real-world mileage with coarser sensor input and no simulation component.
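
To picture what “amplifying” a mile means, here’s a toy sketch: take one logged cut-in scenario and sweep its parameters into a thousand simulated variants. The parameters, ranges, and pass criterion are all invented; Waymo’s report describes the idea of structured simulation, not this implementation.

```python
import random

# One logged scenario: another car cuts in ahead of ours (values invented).
base_scenario = {
    "cut_in_gap_m": 12.0,     # gap to the cutting-in car, meters
    "cut_in_speed_mps": 8.0,  # the cutting-in car's speed
    "ego_speed_mps": 15.0,    # our vehicle's speed
}

def variations(base, n=1000, jitter=0.3, seed=42):
    """Yield n randomized variants of a logged scenario."""
    rng = random.Random(seed)
    for _ in range(n):
        yield {k: v * (1 + rng.uniform(-jitter, jitter)) for k, v in base.items()}

def simulate(scenario):
    # Stand-in for a real simulator: flag variants where the gap closes
    # too fast to leave a comfortable reaction margin.
    closing_speed = scenario["ego_speed_mps"] - scenario["cut_in_speed_mps"]
    time_to_contact = scenario["cut_in_gap_m"] / max(closing_speed, 0.01)
    return time_to_contact > 1.5  # "handled" if there was time to react

results = [simulate(s) for s in variations(base_scenario)]
print(f"{sum(results)}/{len(results)} variants handled safely")
```

One recorded event becomes a thousand test cases, each probing a slightly different version of the same hard moment.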

So where does all this leave us? Clearly Waymo has thought about their safety strategy, and wants to convince skeptics that autonomous vehicles really can achieve very high levels of safety. Unfortunately, they haven’t yet struck the right balance of clarity and technical detail that’s going to be required — especially since this balance will almost certainly be different for different audiences. Perhaps accidentally, Waymo has become the primary public face of the proposition that autonomous vehicles are now, or soon will be, extremely safe. Let’s hope they can find a way to make that case a little more persuasively in the future.

