Experts across fields have long predicted that autonomous vehicles are the future. Although fully autonomous vehicles have not yet arrived, that future is near.
Criticisms of autonomous vehicles persist, and many complicated scenarios are bound to come to the forefront as autonomous vehicles begin to populate our streets. Indeed, accidents involving autonomous vehicles have already happened. The case studies below provide detailed breakdowns of past autonomous vehicle accidents.
Tesla’s driverless car accident: In May 2016, during a bright daylight drive, a Tesla Model S was operating in Autopilot mode (a driver-assistance mode, not full autonomy). The driver, who had directed the Model S onto a highway, relied entirely on this mode and did not react in time when a large white semi suddenly cut in front of the car. The Model S collided with the semi. It is worth noting that, per the design of Autopilot mode, the Model S should have begun braking immediately.
The police statement said that the Model S’s vehicle detection system failed to distinguish the white of the semi from the white of the brilliant sky. The police report ultimately concluded that the deceased driver was at fault for not keeping his hands on the wheel despite the car’s warnings.
About two years later, in March 2018, a Model X suffered a similar malfunction, resulting in a fatal crash. Autopilot again failed, and the vehicle crashed directly into a barrier. It then caught fire, and two other cars collided with it.
There has yet to be a consensus on the latter case, as police are still investigating the ultimate cause of the crash.
Google’s driverless car accident: On a sunny September day in 2016, a truck driver ran a red light and collided directly with one of Google’s self-driving vehicles. Though there were no casualties, the side of Google’s car was completely crushed.
Shortly thereafter, Google issued a statement: “In accidents involving Google’s self-driving cars, it has almost always been the other driver who was at fault,” adding, “Our task at hand is developing autonomous vehicle technology that can react appropriately to other, unpredictable drivers on the road.”
Although it is fortunate that nobody was harmed, Google’s statement left some people wondering.
Uber’s driverless car accident: On the particularly dark night of March 18, 2018, a woman was crossing the street illegally. At the time, Uber’s self-driving Volvo was passing along the same street, traveling at about 60 kilometers per hour. An Uber employee was at the wheel, but the car did not detect the person crossing the road in the dark. The Volvo struck the pedestrian, killing her instantly.
The authorities issued a statement asserting that “it would have been difficult for even a human driver under the same circumstances to avoid an accident.” However, after video footage of the accident was released, many blamed the vehicle’s detection technology for the incident.
Experts argue that though the victim was not at a crosswalk, the sensors and radar the Volvo was equipped with should have been more than sufficient to bring the vehicle to a full stop.
Others argue that there was a bug in the system and that Uber is ultimately to blame, since autonomous vehicles are not yet advanced enough to handle unpredictable situations.
Having now read these three accounts of autonomous vehicles gone wrong, what are your thoughts? Do you feel comfortable facing a future in which autonomous vehicles run the roads?