What self-driving cars can’t recognize may be a matter of life and death

Engineers are racing to program artificial intelligence to recognize different scenarios that human drivers know inherently

(Washington Post illustration/iStock)

SAN FRANCISCO — To understand the complexity of programming self-driving cars, experts on autonomous vehicles say, consider a deer, a moose or a cow — all rust-colored, four-legged mammals you might find roaming along the side of the road.

Human drivers know that each behaves differently. A machine might not.

“If the system was never trained on that, it would recognize, ‘Oh, there’s something here,’ ” said Danny Shapiro, senior director of automotive for NVIDIA, which is developing self-driving technology. But “it has no idea how it’s going to behave or what it’s going to do.”

Distinguishing between animals that could run into the road is part of the constant engineering struggle to identify and teach these types of differences to vehicles powered by artificial intelligence.

Companies including Alphabet-owned Waymo, General Motors’ Cruise division and Lyft-affiliated Aptiv have been racing to train their vehicles to drive themselves in Silicon Valley, as well as Phoenix, Las Vegas and other cities nationwide. Proponents envision the technology eliminating the need for car ownership and revolutionizing the way people get around, a shift that could be particularly helpful for an aging population.

But the inherent risks are numerous, and the vast amount of knowledge needed to train the vehicles is daunting.

The reality of how little some of these vehicles know came to light last week, when a National Transportation Safety Board investigation into a deadly Uber crash revealed that the car was unable to distinguish a person from a vehicle or a bicycle, and that it wasn’t programmed to anticipate that pedestrians might jaywalk.

Autonomous vehicles use a combination of radar, lidar (sensors that use pulses of laser light to measure distance) and high-definition cameras to map their surroundings. When a car encounters a new object, the images are rapidly processed by its artificial intelligence, which compares them against a vast trove of labeled reference images to decide how to react.
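
In rough outline, that pipeline can be thought of as a classify-then-plan loop. The Python sketch below is an illustration only, not any company’s actual software; the class names, confidence threshold and responses are assumed values, chosen to show why an object the system cannot name leaves it without a behavior model.

```python
# Hypothetical sketch of a perception-and-response loop.
# Labels, thresholds and responses are illustrative assumptions,
# not drawn from any company's actual self-driving stack.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    source: str           # "radar", "lidar" or "camera"
    label: Optional[str]  # e.g. "pedestrian", "deer"; None if unrecognized
    confidence: float     # 0.0 to 1.0
    distance_m: float     # distance to the object in meters

# Behavior the planner associates with each known label (assumed values).
EXPECTED_BEHAVIOR = {
    "pedestrian": "may change direction suddenly; yield and slow",
    "deer": "may bolt across the road; brake early",
    "cow": "usually slow-moving; slow and steer around",
}

def classify(detections: list[Detection]) -> str:
    """Pick the highest-confidence label across sensors, if any."""
    labeled = [d for d in detections if d.label and d.confidence > 0.5]
    if not labeled:
        return "unknown"  # "there's something here," but no behavior model
    return max(labeled, key=lambda d: d.confidence).label

def plan_response(label: str) -> str:
    # An unknown object has no predicted behavior, so fall back to caution.
    return EXPECTED_BEHAVIOR.get(
        label, "unrecognized object: slow down and increase following distance")

if __name__ == "__main__":
    frame = [
        Detection("radar", None, 0.9, 42.0),
        Detection("camera", "deer", 0.7, 41.5),
    ]
    print(plan_response(classify(frame)))  # brake early for the deer
```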

But there are thousands of potential scenarios. An older person might be slower than a runner. A dark spot on the road could be a shadow, a puddle or a pothole. Reflections off buildings could confuse cars that are suddenly seeing themselves.

In some cases, engineers appear to be “programming for what should be, not what actually is,” said Sally A. Applin, an anthropologist and research fellow who studies the intersection between people, algorithms and ethics. “There just seems to be a really naive assumption about various rules — and that the world is going to be the way the rules are, not necessarily the way the world is.”

The Uber incident in particular has created frustration in the autonomous-vehicle community, with many fearing that a few similar crashes could result in tougher regulation and hinder the development of the industry.

In that instance, an Uber vehicle in Tempe, Ariz., fatally struck a pedestrian who was crossing outside a crosswalk with her bicycle on a dimly lit street in March 2018. The driver supervising the car was looking at her phone, authorities said. The car’s radar detected Elaine Herzberg nearly six seconds before the crash, but the self-driving system didn’t properly classify her or know how to react.

The National Transportation Safety Board, which has not yet issued a probable-cause finding, will convene Nov. 19 to make its determination.

The safety board’s report said that “pedestrians outside a vicinity of a crosswalk” were “not assigned an explicit goal,” meaning the vehicle would not have predicted the path she might travel the way it might if she were identified as a pedestrian in a marked crosswalk. Instead, it identified her as a vehicle, bicycle and “other.”
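
The failure mode the report describes can be illustrated with a simplified sketch. This is not Uber’s code; the function and the rules in it are assumptions reconstructed from the NTSB’s description, showing how an object that is never classified as a pedestrian near a crosswalk never gets a predicted crossing path.

```python
# Illustrative sketch of the failure mode the NTSB report describes:
# if a detected object's classification keeps changing, and pedestrians
# away from crosswalks are assigned no path-prediction "goal," the planner
# never builds a crossing trajectory for them. Not Uber's code; the
# structure and names are assumptions for illustration.

def predict_path(label: str, near_crosswalk: bool):
    """Return a predicted goal for the object, or None if no goal is assigned."""
    if label == "pedestrian" and near_crosswalk:
        return "crossing the road at the crosswalk"
    if label in ("vehicle", "bicycle"):
        return "traveling along the lane"
    # Pedestrians outside a crosswalk, and "other" objects, get no goal,
    # so no crossing path is ever predicted for them.
    return None

# The object is re-classified frame to frame, as in the Tempe crash.
for label in ["vehicle", "other", "bicycle", "other"]:
    goal = predict_path(label, near_crosswalk=False)
    print(label, "->", goal)  # every frame: no predicted crossing path
```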

Voluntary safety reports filed in 2018 and 2019 by companies including Waymo, Aurora, GM’s Cruise division and Ford all mention jaywalkers or jaywalking, or refer to pedestrians outside a marked crosswalk. Uber’s, filed in November 2018, does not.

Uber’s report mentions pedestrians but indicates a far more sophisticated system than what played out in Arizona.

“Actors, such as vehicles, pedestrians, bicyclists, and animals, are expected to move,” the report said. “Our software considers how and where all actors and objects may move over the next ten seconds.”

“Our self-driving vehicles will not operate in a vacuum,” it adds.

Uber spokeswoman Sarah Abboud said the company regrets the crash and has vastly overhauled its self-driving unit since it occurred, adding that Uber “has adopted critical program improvements to further prioritize safety.”

Still, the ripple effects of what some call a glaring programming oversight were already being felt in Silicon Valley.

“A baffling thing,” said Brad Templeton, a longtime self-driving-car developer and consultant who worked on Google’s self-driving-car project about a decade ago. “Everyone knew that eventually there would be accidents because no one imagined perfection. This one’s worse than many people imagined.”

Many autonomous-vehicle industry insiders, some of whom spoke on the condition of anonymity out of fear of retribution, said they were surprised Uber had not accounted for such a basic expectation. They also acknowledged the timeline to roll out this technology is probably longer than many expect because of its complicated nature.

Artificial-intelligence-powered technologies, including facial recognition and voice assistants, have drawn criticism for issues including built-in human bias and faulty logic.

As a result, researchers in the field are calling for more caution and diversity when it comes to training AI — particularly for vehicles.

There should be more focus groups and diverse groups of experts working to map out the scenarios, said Katina Michael, a professor in the School for the Future of Innovation in Society and School of Computing, Informatics and Decision Systems Engineering at Arizona State University. More safety and mechanical engineers should be working on the code alongside software engineers, and it should all be peer reviewed by multiple experts, she said.

When it comes to real-life scenarios, “the most obvious ones haven’t been addressed,” Michael said. “When we don’t do all this scenario planning and don’t do an exhaustive [job] at the front end, this is what happens.”

Some are pushing more off-road simulation as an extra layer of precaution. At Applied Intuition, a team of 50 — including alumni from companies such as Waymo, Apple and Tesla — design simulation software for cars to test scenarios before they’re released into the real world.

A single urban intersection can have “a hundred thousand scenarios,” said Qasar Younis, the founder of Applied Intuition. Those need to be simulated and accounted for before a vehicle can be safely put into operation.
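
The arithmetic behind such figures is easy to sketch: sweeping even a handful of scenario parameters multiplies quickly. The Python snippet below is illustrative only, using made-up parameters rather than Applied Intuition’s; six coarse knobs already yield nearly 13,000 combinations, and a few more would push the count past a hundred thousand.

```python
# Back-of-the-envelope sketch of why one intersection can yield a huge
# scenario count. The parameters and values are illustrative assumptions.

from itertools import product

weather      = ["clear", "rain", "fog", "snow", "glare"]
lighting     = ["day", "dusk", "night"]
ego_speed    = range(5, 50, 5)   # mph approaching the intersection (9 values)
other_actor  = ["none", "car", "cyclist", "pedestrian", "jaywalker", "animal"]
actor_motion = ["stopped", "crossing", "turning", "running"]
signal_state = ["green", "yellow", "red", "flashing"]

scenarios = list(product(weather, lighting, ego_speed, other_actor,
                         actor_motion, signal_state))
print(len(scenarios))  # 5 * 3 * 9 * 6 * 4 * 4 = 12,960 from just six knobs
```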

Younis said simulation companies, which supply their software to autonomous-vehicle developers, want to test for “edge cases,” the real-world situations that might be a one-in-a-million occurrence but could be fatal when they happen.

He gives the example of an autonomous semi-truck that passes a vehicle parked on the shoulder of a highway just as it enters a crosswind. The truck doesn’t initially see the motorist emerging from behind the parked car, who could step into the lane, at the same moment the wind strikes the trailer and could swing it in the person’s direction.

“That’s a scenario you’re not going to want to test in the real world,” he said.
