
Driverless Vehicles: There's No Such Thing As Too Much Safety

by Steve Shwartz, November 28th, 2020

State and local governments are starting to permit the testing of driverless cars. This will likely result in accidents and traffic jams.

There were over 37,000 traffic deaths in the US in 2017. Someday, driverless cars may make our roads safer.

At the same time, millions of seniors and people with disabilities suffer from limited mobility. Driverless cars and taxis might someday provide mobility for this population.

The US federal government is trying to balance consumer safety with the goal of achieving technological leadership in automated vehicles. But safety advocates are concerned that the balance is shifting towards technology and away from safety. The 2017 autonomous vehicle report by the National Highway Traffic Safety Administration (NHTSA) emphasized safety: it was titled Automated Driving Systems: A Vision for Safety.

By 2020, the tide had shifted. The 2020 NHTSA report emphasized technology leadership instead: it was titled Ensuring American Leadership in Automated Vehicle Technologies.

Consumer Reports criticized NHTSA as having a “dangerously lax hands-off approach” to the safety of driverless vehicles.

Map of self-driving vehicle test locations across the US: https://www.nhtsa.gov/automated-vehicles-safety/av-test-initiative-tracking-tool

Driverless Vehicle Testing

Many manufacturers are testing driverless cars and other types of autonomous vehicles on our roads. Most of these tests occur with a safety operator behind the wheel who is responsible for taking over the steering, braking, and acceleration whenever they detect an unsafe situation. However, we are also starting to see tests with no safety operator in the vehicle at all.

California has issued permits to four companies for tests of driverless taxis: AutoX (Alibaba), Waymo (Google), Zoox (Amazon), and Cruise (GM). These initial permits are only for a handful of driverless cars operating at relatively low speeds, primarily in the Bay Area. You can find a US map of all self-driving vehicle tests (with and without a safety operator behind the wheel) at the NHTSA tracking tool linked above.

In California, driverless cars are primarily being tested as taxis. State law requires any car operated without a safety driver to have a remote operator who continuously monitors the vehicle and can communicate with passengers and police if there is an issue.

The law specifies that “A remote operator may also have the ability to perform the dynamic driving task” for the driverless car. I hope that all the manufacturers are testing driverless cars with remote operators who are viewing the camera outputs continuously and can take over with remote control if there is a safety issue or an unexpected occurrence.

The manufacturers who are running these driverless car tests have all done extensive testing with safety operators behind the wheel, and they use simulators to train and test the self-driving capabilities. The question is whether all this testing has made driverless cars safe. Will they reduce or eliminate accidents, or will they cause them?

A secondary but important question is whether they can make our roads safer without driving so slowly that they increase congestion on our roads. The average American spends 97 hours a year, the equivalent of two and a half work weeks, stuck in traffic.

In early 2020, Moscow hosted a driverless car competition. Shortly after it began, a vehicle stalled out at a traffic light. Human drivers would reason about this edge case and decide to just go around the stalled car.

However, none of the driverless cars did that, and a three-hour traffic jam ensued. We do not want autonomous vehicles to crash, but we also do not want them to stop and block traffic every time they encounter an obstacle.

Edge Cases Are A Barrier to Driverless Car Safety

Most of us have encountered unexpected phenomena while driving: A deer darts onto the highway. A flood makes the road difficult or impossible to navigate. A tree falls and blocks the road. The car approaches the scene of a car accident or a construction zone. A boulder falls onto a mountain road. A section of new asphalt has no lines. You notice or suspect black ice. Drivers are fishtailing, trying to get up an icy hill. An elderly driver falls asleep and his car heads right for you (this happened to me). We all have our stories.

People do not learn about all these possible edge cases in driving school. Instead, we use our commonsense reasoning skills to predict actions and outcomes. If we hear an ice cream truck in a neighborhood, we know to look out for children running towards the truck. We know that when the temperature is below 32 degrees Fahrenheit, there is precipitation on the road, and we are going down a hill, we need to drive very slowly.

We change our driving behavior when we see the car in front of us swerving, knowing that the driver might be intoxicated or texting. If a deer crosses the road, we are on the lookout for another deer because our commonsense knowledge tells us they travel as families. We know to keep a safe distance and handle passing a vehicle with extra care when we see a truck with an “Extra Wide Load” sign on the back.

When we see a ball bounce into the street, we slow down because a child might run into the street to chase it. If we see a large piece of paper on the road, we know we can drive over it, but if we see a large shredded tire, we know to stop or go around it.

Unfortunately, no one knows how to build commonsense reasoning capabilities into driverless cars or into AI systems in general. Without commonsense reasoning capabilities to handle these unanticipated situations, self-driving vehicle manufacturers have to anticipate every possible situation.

Driverless cars can only handle the situations that have been anticipated in their software code. Machine learning can only help to the extent that manufacturers anticipate each possible situation and provide training examples for it.

Crashes and traffic jams will likely result when autonomous vehicles encounter unanticipated situations for which there is no training or programming.

It will be difficult, if not impossible, for manufacturers to anticipate every edge case. It will certainly be easier to do this for slow-moving shuttles on corporate campuses. It is hard to imagine that this is possible for self-driving consumer vehicles that can drive anywhere. Self-driving taxis in small, heavily-mapped areas are somewhere in between.

Cars Do Not “See” Like People

Another difficulty for driverless cars is that they don’t “see” like people. Object detection systems represent tremendous victories for deep learning and AI; however, they also make surprising mistakes.

If I train a system to distinguish cats from dogs, and in the training data all the pictures of dogs are outside and all the pictures of cats are inside homes, the deep learning system will likely key in on yard and home features rather than on the animals themselves. Then if I show it a picture of a dog inside a home, the system will probably label it a cat.
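
To make this concrete, here is a minimal, hypothetical sketch of that failure mode. It does not use real photos; the two numeric features, the data, and the scikit-learn model are all invented for illustration. Because the "indoors" feature perfectly separates the classes in the biased training set, the model leans on it and misclassifies a dog that happens to be photographed indoors:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Two toy features per photo: [animal_size, is_indoors]
# Biased training set: every dog photo is outdoors, every cat photo is indoors.
dog_sizes = rng.normal(0.8, 0.3, n)             # dogs tend to be a bit larger
cat_sizes = rng.normal(0.6, 0.3, n)
X_train = np.vstack([
    np.column_stack([dog_sizes, np.zeros(n)]),  # dogs, is_indoors = 0
    np.column_stack([cat_sizes, np.ones(n)]),   # cats, is_indoors = 1
])
y_train = np.array([0] * n + [1] * n)           # 0 = dog, 1 = cat

clf = LogisticRegression().fit(X_train, y_train)

# A dog-sized animal photographed indoors: the background cue dominates,
# so the classifier almost certainly calls it a cat.
dog_indoors = np.array([[0.9, 1.0]])
print(clf.predict(dog_indoors))                 # expected output: [1], i.e. "cat"
```

A production image classifier is far more complex, but the underlying problem is the same: the model latches onto whatever features separate the training examples, not the features a human would consider relevant.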

In one well-known experiment, researchers pasted a picture of a guitar onto a photo of a monkey. Before the guitar was added, both object recognition systems and human subjects correctly labeled the monkey. Adding the guitar did not confuse people, but it made the object recognition system think it was looking at a person instead of a monkey. The object classification system did not learn the visual characteristics that people use to recognize monkeys and people. Instead, it learned that guitars are only present in pictures of people.

Other types of mistakes are even more concerning. A group of Japanese researchers found that by modifying just a single pixel in an image, they could alter an object recognition system’s category choice. In one instance, changing a single pixel in a picture of a deer fooled the system into identifying the image as a car.
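
The researchers used a sophisticated search (differential evolution) against deep networks on natural images, but the basic idea can be illustrated with a much cruder, hypothetical sketch: brute-force a single-pixel change against a simple classifier trained on scikit-learn's small 8x8 digit images and watch the predicted class flip. The dataset, model, and search below are stand-ins chosen for brevity, not the researchers' actual setup:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for an image classifier: logistic regression on 8x8 digit images.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)

def find_one_pixel_flip(model, images):
    """Search for an image where changing a single pixel flips the predicted class."""
    for idx, img in enumerate(images):
        original = model.predict(img.reshape(1, -1))[0]
        for pixel in range(img.size):
            for value in (0.0, 16.0):        # digit pixel intensities range from 0 to 16
                perturbed = img.copy()
                perturbed[pixel] = value
                flipped = model.predict(perturbed.reshape(1, -1))[0]
                if flipped != original:
                    return idx, pixel, value, original, flipped
    return None

result = find_one_pixel_flip(clf, X_test)
if result is not None:
    idx, pixel, value, original, flipped = result
    print(f"Test image {idx}: setting pixel {pixel} to {value} "
          f"changes the prediction from {original} to {flipped}")
```

On this toy setup the flip is easy to find; the surprising research result was that comparably tiny changes can also fool much larger networks on natural photographs.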

Researchers have also figured out how to fool deep learning systems into confidently recognizing objects such as cheetahs and peacocks in images that contain no recognizable objects at all.

Other researchers showed that minor changes to a speed limit sign could cause driverless cars to think the sign says 85 mph instead of 35 mph and unsafely accelerate as a result. Similarly, some Chinese hackers tricked Tesla’s autopilot into changing lanes. In both cases, these minor changes fooled cars but did not fool people.

In real-world driving, many Tesla owners have reported that their cars often mistake the shadows of tree branches for real objects.

When an Uber self-driving car killed a pedestrian in 2018, the car’s object recognition software first classified the pedestrian as an unknown object, then as a vehicle, and finally as a bicycle.

I do not know about you, but I would rather not be on the road as a pedestrian or a driver if driverless cars cannot recognize pedestrians with 100% accuracy. What if a self-driving car mistakenly perceives a baby stroller as a low-flying piece of paper? It might drive right through it!

Deep learning system errors also can provide targets for bad actors who might decide to attack deep learning systems. For example, a bad actor might devise ways of confusing cars or trucks into driving off the road.

Should You Be Worried?

Take a look at these videos of a Tesla making mistake after mistake. While Tesla advertises “full self-driving” capability, its website makes it clear that its cars are not fully autonomous vehicles: “Autopilot is a hands-on driver assistance system that is intended to be used only with a fully attentive driver. It does not turn a Tesla into a self-driving car nor does it make a car autonomous.”

You should absolutely be worried. What happens when a driverless car runs over a baby stroller? The manufacturers will likely respond that their cars are safer than human drivers. Maybe so. But before we allow driverless cars on the road, we should require manufacturers to prove this is true. The manufacturers have the necessary data.

They know how often safety drivers are concerned enough to take over the wheel. I don’t think we should allow these vehicles on the road without at least a remote safety operator until the number of such disengagements is lower than the average number of accidents per vehicle in the target driving area.

But this is not happening. States are allowing driverless cars on the road without proof of safety.

The only saving grace is that manufacturers can still be sued under product liability laws. This should give them pause before putting unsafe cars on the road. Some driverless car advocates have proposed the elimination of liability laws for AV manufacturers. In my view, this would be a complete disaster.

I don’t want driverless cars and trucks in my neighborhood. They haven’t been proven to be safe. If you feel the same way, write your federal and state legislators and demand proof of safety and manufacturer liability.

Please enter any comments below and feel free to visit AI Perspectives where you can find a free online AI Handbook with 15 chapters, 400 pages, 3000 references, and no advanced mathematics.

This post was originally published on Steve’s Blog.