Google Brings Self-Driving Cars to Texas: But Are They Safe?

Justin Sullivan/Getty Images

While waiting on final regulations from the California DMV, Google seems to be trying to speed up the process of bringing self-driving vehicles to market — but some safety agencies and public watchdog groups would like the company to put on the brakes.

After driving its test vehicles some 2.2 million miles, Google says its cars have not caused any accidents on the road. The cars have, however, been involved in 17 collisions, and documentation is available for public review. Although Google representatives declined to comment for this story, the company has made its frustration with the slow pace of California state regulations evident. A recent CBS story illustrates some of the actions and statements that have experts characterizing Google as “pushing” for faster rule adoption.

But there’s another big sign that Google’s patience with its California partners may be wearing thin. This Associated Press story shows that Google is adding Austin, Texas, to its list of test locations, choosing a locale outside of California for the first time. Reporters note that Texas has no concrete restrictions on self-driving cars, which changes the equation on autonomous car testing. AP staffers also suggest in the piece that Google lobbyists have influenced comments by Austin’s mayor, and that the company is likely to enjoy a more aggressive testing regimen in the Lone Star State.

But although Google is pressuring the California DMV to move faster, agencies like the National Highway Traffic Safety Administration are praising the state offices for their careful and judicious handling of new autonomous car regulations.

Source: Delphi

In addition, some safety groups are speaking out about Google’s plans.

John Simpson is the Privacy Policy Director of Consumer Watchdog. In comments made to The Cheat Sheet earlier this month, Simpson talked about the ambiguity behind Google’s contention that its cars never cause accidents.

Simpson discussed a particular documented case in which a Google car, preparing to make a right turn on a red light, stopped mid-turn as an additional safety precaution and was rear-ended by a human driver.

“If you rear-end somebody, you’re technically at fault,” Simpson said, noting how the company can absolve its vehicles even when they may act in ways that confuse human drivers. “This is going to be the biggest problem — how robot cars interact with human drivers.”

Also, Simpson said, there’s likely to be some interplay between two dominant schools of thought when it comes to the design of self-driving cars. One is the “autopilot” approach, in which cars are equipped with systems that can temporarily take over from human drivers. These cars would retain the conventional steering wheel and gas and brake pedals so that drivers could reassume command. The other, in Simpson’s words, is more of a “fleet model” that takes all control away from the driver. Google is lobbying for the green light to test these kinds of cars, but a fully autonomous car without human controls is an even more controversial proposition than a conventional test car.

Experts expect that more details will come out during the lengthy process that’s mandatory for moving the chains on autonomous car regulations. There’s likely to be a public meeting in December or January to gather public comment, after which some form of draft regulations would be introduced. Further public hearings must then be held before final rules are released. Look for more activity around this emerging issue as we continue to wonder: How will self-driving cars be regulated?