Posted on July 18, 2024 in Technology

There Is No Autopilot

Since Tesla burst onto the scene promising that entirely autonomous vehicles were just around the corner, other car manufacturers have been racing to keep up. It’s been some time now – what does the autopilot scene actually look like?

The Human Brain And Driving

The human brain is incredible. It can take information in, process it, and develop an action plan in a fraction of a second, almost as soon as the data is received.

Furthermore, the enormous library of events a human will experience in their lifetime prepares them to think on their feet! So, even when encountering a never-before-seen scenario, a person can generally take some sort of action to prevent disaster. If something bizarre happens, the average person will be able to respond somehow, whether by swerving around an untethered inflatable clown drifting down the highway or turning around after spotting a sagging road ahead.

If something ordinary happens, like a flock of pigeons startling up and away when someone starts their car, the driver won’t even take note of it.

The Computer’s Thought Process

However, when the same things happen to a computer, the computer lags. The computer must first identify the object before it can react to it, which in itself is a constant problem for most AI vehicles. Then, it must decide on an appropriate course of action. The obvious “easy” answer is to slow down and give the actual driver time to do something about whatever obstacle is in the road, but that presents new issues (accidentally brake-checking the car behind you over a plastic bag, for example) – and that assumes the car can even brake in the first place.
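To make that loop concrete, here’s a deliberately simplified sketch in Python. Every name and threshold in it is invented for illustration – it is not any manufacturer’s actual pipeline – but it shows why a “play it safe” braking rule fires on a drifting plastic bag just as readily as on a child:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the classifier thinks the object is
    confidence: float  # how sure it is, from 0.0 to 1.0
    distance_m: float  # estimated distance to the object, in meters

# Hypothetical thresholds, chosen only to illustrate the tradeoff.
BRAKE_CONFIDENCE = 0.5   # brake even on fairly uncertain detections
BRAKE_DISTANCE_M = 30.0  # ignore anything farther away than this

def decide(detections: list[Detection]) -> str:
    """Pick an action for the current frame of sensor data."""
    for d in detections:
        if d.distance_m > BRAKE_DISTANCE_M:
            continue  # too far away to matter yet
        if d.confidence >= BRAKE_CONFIDENCE:
            # The system can't tell a plastic bag from a child here,
            # so the "safe" default is to slow down -- which is exactly
            # how you end up brake-checking the car behind you.
            return "brake"
    return "continue"

# An unidentified object at middling confidence still triggers the
# brakes under this policy, hazard or not.
print(decide([Detection("unknown", 0.6, 12.0)]))  # -> "brake"
```

Raise the confidence threshold and the car ignores real hazards; lower it and it panic-brakes for bags and birds. No setting on that dial substitutes for actually understanding the scene.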

Early attempts couldn’t even manage that. A self-driving Uber killed a pedestrian back in 2018 because the system first failed to identify the pedestrian as a pedestrian; then, once it did spot her, it was unable to engage the brakes on its own. It should have alerted the driver that something was in the way and that they needed to brake, but it didn’t, so the driver was unable to respond in time, and the car hit and killed her. Similar issues plague Tesla, which has had a steady stream of accidents involving emergency vehicles, because the cars seemingly cannot identify an emergency vehicle when its lights are on.

Even if the car can brake, it often does so at inappropriate times. If a flock of birds – a transient non-hazard – bursts up in front of a car with autopilot, the car tends to panic and brake until the birds are gone, no matter which brand it is. The system simply cannot process so many different entities at once, so it must ‘play it safe’ in case it’s not actually a flock of birds but a person on a bicycle or a child chasing a ball. That Uber case from before actually seems to indicate the car briefly thought the pedestrian was on a bicycle, which delayed its response. “Playing it safe” and braking is better than doing nothing, but it is not a solution in itself.
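That reclassification delay is worth a closer look. Here’s another purely hypothetical sketch – invented names and numbers, not the actual Uber software – of how a tracker that throws away an object’s history every time its label changes can burn frames the car doesn’t have:

```python
FRAMES_TO_CONFIRM = 3  # frames of stable tracking required before braking

def frames_until_brake(labels_per_frame: list[str]) -> int:
    """Count frames until the system has seen the same label for long
    enough to commit to braking. Returns -1 if it never commits."""
    stable = 0
    previous = None
    for frame, label in enumerate(labels_per_frame):
        if label == previous:
            stable += 1
        else:
            stable = 1        # the label changed: the history resets
            previous = label
        if stable >= FRAMES_TO_CONFIRM:
            return frame
    return -1

# A stable classification commits quickly...
print(frames_until_brake(["pedestrian"] * 5))  # -> 2

# ...but flip-flopping between labels keeps resetting the clock.
print(frames_until_brake(["vehicle", "bicycle", "pedestrian", "bicycle",
                          "pedestrian", "pedestrian", "pedestrian"]))  # -> 6
```

Every frame spent re-deciding what the object is, is a frame not spent stopping.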

The problem with these self-driving and AI-powered cars is that they are not capable of thinking like people. They cannot see information and process it in a fraction of a second – there are times the car cannot process what it’s “seeing” at all. At the moment, there is no truly safe autopilot, and only a limited number of vehicles let you take your eyes off the road; at best, you might be allowed to take your hands off the steering wheel. Unpredictable things happen in front of cars all the time, and even if every single car were made autonomous overnight, that wouldn’t solve the issue of the birds, or of lost tires, or of pedestrians taller or shorter or faster or slower than the computer expects them to be. It doesn’t solve the car not processing emergency vehicle lights. There is no autopilot.

Sources:

https://www.youtube.com/watch?v=pmGOjHi-7MM&ab_channel=SomeMoreNews

https://qz.com/1397504/all-the-things-that-still-baffle-self-driving-cars-starting-with-seagulls

https://www.motortrend.com/news/nhtsa-tesla-autopilot-investigation-shutoff-crash