The experiment quickly ran into problems. In one case, an
autonomous Volvo zoomed through a red light on a busy street in
front of the city’s Museum of Modern Art.
Uber, a ride-hailing service, blamed the incident on human
error. “This is why we believe so much in making the roads
safer by building self-driving Ubers,” Chelsea Kohler, a company
spokeswoman, said in December.
But even though Uber said it had suspended an employee riding in
the Volvo, the self-driving car was, in fact, driving itself when
it barreled through the red light, according to two Uber
employees, who spoke on the condition of anonymity because they
signed nondisclosure agreements with the company, and internal
Uber documents viewed by The New York Times. All told, the mapping
programs used by Uber’s cars failed to recognize six traffic
lights in the San Francisco area. “In this case, the car went
through a red light,” the documents said.
At first read, it sounds as if Uber is saying a human was
driving the car. But parse the statement closely and it could also
mean that the car was in autonomous mode, and the “human error”
was that the safety driver behind the wheel didn’t notice the car
was about to sail through a red light and failed to apply the
brake manually. I think that’s what happened; otherwise the
statement wouldn’t be ambiguous.
Another case where lying has made a situation much worse. Everyone now knows the truth: Uber’s self-driving car was caught running a red light in downtown San Francisco, and the company’s (already questionable) credibility is shot. No one will believe a word the company says about future incidents involving its autonomous cars.