Tesla Takes ‘Death Before Dishonor’ Approach With Autopilot

Tesla CEO Elon Musk was the first to use the word “hubris” when referring to his company’s design and roll-out of the Model X SUV. Indeed, the electric car maker bit off more than it could chew with that vehicle, but that was the good type of hubris.
A worse sort has emerged in the company’s response to a fatal crash that occurred with Tesla Autopilot enabled. Saying it would stay the course and not bother with “media speculation,” Tesla told Consumer Reports it would continue offering the semi-autonomous drive features in their “public beta phase.” In essence, it is allowing consumers to risk their lives on what has proven to be flawed technology.
The circumstances of the Autopilot death

The driver who crashed and died with Autopilot enabled on May 7 faced circumstances that would be scary for anyone on the road. According to the Tesla blog, a tractor trailer made a left turn across a divided highway, crossing in front of the Model S in question. Since “neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky,” the brake was not applied and the fatal crash occurred.
Clearly, the driver was not paying attention to the events unfolding in front of him. Brightly lit sky or not, it is difficult to miss a tractor trailer crossing your path, whether by sight or by the sound of its roaring diesel engine. So the situation involved a driver who was lulled into complacency by Autopilot in exactly the circumstances the technology is designed for (highway driving). Herein lies the danger.
The Handoff Problem

In its lengthy blog post encouraging Tesla to suspend Autopilot capabilities in its cars, Consumer Reports cited the issue known as the “Handoff Problem.” An NHTSA study revealed drivers took between three and 17 seconds to retake control of a vehicle that had been operating autonomously, opening the door to potentially fatal situations, including the one that led to the crash and death of the Tesla driver in May.
Every automaker working with self-driving technology has had to grapple with these risks in one form or another. Rather than offer semi-autonomous features during the transitional period, some companies, including Ford and tech giant Google, have decided to wait until Level 4 (fully self-driving) autonomy reaches the market, expected sometime around 2020.
As Ford’s Ken Washington told Wired last November, “Right now, there’s no good answer … We’re really focused on completing the work to fully take the driver out of the loop.” In other words, cars without steering wheels and pedals are the clear way to go.
The Tesla response

Besides asking Tesla to disable Autopilot, Consumer Reports urged the automaker to provide clearer guidelines on the system’s limitations. In addition, the organization said the name Autopilot should go because of its “misleading” nature. For most people, the word suggests being able to tune out and take a mental break, neither of which is advisable during highway driving.
According to a report by Bloomberg, product liability experts concur that the name was a mistake on Tesla’s part. If future crash victims decide to take the company to court, they may find an opening in the name itself and in the system’s known vulnerabilities.
Tesla dismissed the Consumer Reports suggestions in what can be interpreted as arrogant fashion, saying it “appreciated well-meaning advice” but made its decisions based on “real-world data, not speculation by media.” We take it Tesla considers data showing drivers need a minimum of three seconds, and as many as 17, to retake control in autonomous-drive mode to be “out of this world.”
Autopilot is indeed a choice, like Ludicrous mode or Sport mode in any car that lets the driver accelerate faster. Yet faster drive modes sharpen a driver’s reactions rather than dull them. Instead of admitting a mistake and pulling the technology until it improves, Tesla is signaling it will tolerate death as Autopilot continues in beta. Apparently, dishonor would be much harder to stomach.
Connect with Eric on Twitter @EricSchaalNY