Questionable Safety And Autopilot

It’s not clear what the crash record looks like for autonomous vehicles – a handful of concerning accidents marked by a bizarrely timed disengagement of the autopilot are not technically counted as “autopilot crashes”. Survivors of these accidents then have to somehow prove the autopilot was at fault, not them. Whose fault is it when the AI is still doing 50 while approaching the back of a stopped semi-truck or a tree on the side of the road? Is it the driver’s, for not stopping in time? Is it the AI’s, for not spotting the hazard? It’s a bit of both.

The pitched abilities of driverless cars temporarily eclipsed their actual abilities, and now the companies producing these vehicles have to walk back the customer’s expectation of “self-driving”. The truth is that there are many levels of autonomy – the most basic covers the auto-braking and cruise control systems already seen in cars for years, while the highest end of the spectrum (once available) will be able to drive safely without anyone in the car at all. Such a vehicle doesn’t currently exist – the cars closest to that goal can still become confused and clog roads, fail to see a person and injure them, or otherwise suffer the same issues as the back of the pack, just less frequently.
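The “many levels” described above line up with the standard SAE J3016 scale, which runs from 0 to 5. Purely as an illustration (this sketch is not from the article or any carmaker’s documentation, and the wording is paraphrased from SAE’s published definitions), the six levels are roughly:

```python
# Rough, paraphrased summary of the SAE J3016 driving-automation levels.
# Illustrative only - not taken from the article or any manufacturer's materials.
SAE_LEVELS = {
    0: "No automation - warnings and momentary assistance only (e.g., emergency braking)",
    1: "Driver assistance - steering OR speed control (e.g., adaptive cruise control)",
    2: "Partial automation - steering AND speed control; the driver must supervise at all times",
    3: "Conditional automation - the car drives in limited conditions; the driver must take over on request",
    4: "High automation - no human attention needed, but only within a defined operating area",
    5: "Full automation - drives anywhere, in all conditions, with no human needed at all",
}

def describe(level: int) -> str:
    """Return a one-line summary for an SAE automation level (0-5)."""
    return SAE_LEVELS.get(level, "Unknown level")

# The 'advanced cruise control' systems discussed later sit around level 2,
# while the fully driverless car that doesn't yet exist would be level 5.
print(describe(2))
print(describe(5))
```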

These companies did such a great job of pitching their product that cities like San Francisco granted them permission to operate driverless 24/7, in spite of many valid complaints. In 2023, for example, San Francisco’s fire department complained that Cruise vehicles had interfered with its emergency response in the city 55 times across eight months. Eventually, Cruise was forced to reduce its fleet in the city, much to the relief of the people living there.

As stated in the previous article, there are no cars capable of solving problems the way people do. Thus, there is no truly safe autonomous vehicle yet. Beyond safety, they’re often just clumsy – when a car is stopped in the turn lane with its hazards on, ordinary people know to get into that turn lane after the stopped car, not before it. When people see traffic cones, they know they are allowed to cross the lines on the road to follow those cones. A person in a car can see another person on the sidewalk and tell which direction they’re facing and, usually, from that, whether they intend to cross the road. Autonomous cars cannot consistently do any of these things. They flip out and freeze, or they keep going at speed, creating a hazard out of any person or object they don’t recognize along the way.

That seems to suggest the car is at fault. It is, in many ways; autonomous vehicles are often pitched as a solution for crashes caused by human error, and yet here these driverless cars sit, not pulling over for fire trucks, not responding at all in a pileup, when even the newest drivers know you have to pull over for flashing lights and sirens. A larger issue is the car companies themselves, and how they interact with their drivers and passengers: Cruise says its cars are ready, so people believe them. Tesla pitched its cars as if they were already fully autonomous, as if it were simply a silly technicality of the law that you had to sit in the driver’s seat while the system was active. Customers who end up in crashes with these things largely behaved as though the cars could do things they cannot, and the advertising is responsible for that. There are a lot of things you can’t lie about or stretch when it comes to a car – the top speed and the 0-60 time are both testable. But how could a customer hope to discover on their own that the safety features in their futuristic new car cannot spot a child, or an emergency vehicle? They have to take the car company’s word for it, and the car companies have screwed them over.

The self-driving features were a big selling point for a lot of these brands, and a customer would need to turn their back on their investment to believe the news about these crashes. Even if they do believe the reports, the car makers are still falling back on “the autopilot shut off shortly before the incident, meaning the driver was at fault, not the car”. The customer cannot win: they get neither their futuristic car nor the benefit of the doubt should an accident occur.

If you’ve seen a recent smart-car commercial, you may have noticed that they now pitch “advanced cruise control” rather than “self-driving”. The image created by the advertising suggested that someone would be able to turn on driver-assist and read the newspaper while the car drove them to work, and customers internalized that. Some may have even bought a particular car for that reason. That kind of messaging is hard to undo; it’s even harder for the customer to figure out the true extent of their car’s abilities when conflicting information comes out about crashes involving autopilot. For now, though, car companies are slowly being forced to admit that their cars aren’t nearly as autonomous as advertised, and hopefully that will help curb some of the trouble they’ve caused.

Sources:

https://www.autoinsurance.org/which-states-allow-automated-vehicles-to-drive-on-the-road

https://www.cnn.com/2023/08/14/business/driverless-cars-san-francisco-cruise/index.html

https://www.npr.org/2021/10/23/1048723026/what-does-the-future-of-driverless-cars-look-like

https://qz.com/1397504/all-the-things-that-still-baffle-self-driving-cars-starting-with-seagulls

https://www.motortrend.com/news/nhtsa-tesla-autopilot-investigation-shutoff-crash