The promise of self-driving cars is the promise of Silicon Valley to make the world a better place. Tesla CEO Elon Musk, who lives in Los Angeles, has made a reasonable argument for the technology, which he hopes to have available to his customers within 10 years: Humans make terrible drivers, and “1.2 million people … die every year in manual crashes.” So-called robot vehicles have a much better record, he argues, though only a few are on the road, in beta mode. Critics of the technology are “killing people,” he told reporters last year.

The Santa Monica–based nonprofit Consumer Watchdog doesn't deny that motorists aren't all Lewis Hamiltons. But its new report, “Self-Driving Vehicles: The Threat to Consumers,” suggests there could be more deaths with robot cars than without them, at least at first. The report argues that developers of self-driving cars — Tesla and Google among them — want to make the world a better place … for them to make money.

“No one disputes that the evolution of motor vehicle technology has the potential to prevent deaths, injuries and property damage,” according to the report. “New technologies such as automatic emergency braking, lane keeping, collision warning and assisted parking are already doing so, and indeed should be made standard equipment in all vehicles.”

Consumer Watchdog founder Harvey Rosenfield, who authored the report, says folks like Musk are selling a promise of safety that doesn't exist. “What we're witnessing in the marketplace is a lot of hype and exaggeration driven in large part by financial considerations by companies old and new trying to jockey for position,” he says.

These corporations, the report suggests, could quietly put profit ahead of safety: “These [self-driving] algorithms will be responsible for life-and-death decisions that will place their financial interests in conflict with their customers’ lives. But Google and other software developers have refused to disclose to the public or regulators the robot values that they are programming into their computers to replace human values and judgment.”

The companies have disclosed accidents involving their beta-tested automated vehicles in California, as state law requires. Most have been minor, and almost all have been blamed on human error. “If there's an accident, they're going to blame the human,” Rosenfield says.

The report argues that, because the vehicles will run on software, which isn't foolproof even in less critical applications such as laptops, the cars will be on technologically thin ice in any case. It also notes that the necessary infrastructure, including sensor-lined roads and signs, isn't close to being rolled out from coast to coast. Finally, it says that the more connected cars become, the more vulnerable they'll be to hacking. “If the Russians hacked all of the cars in downtown Los Angeles, that could be deadly,” Rosenfield says.

“Given all these unique risks, whatever safety improvements there are, they're going to be accompanied by brand-new risks,” he says. “Driverless vehicles are going to be far more risky and pose much bigger safety issues than the cars we have today.”

Interestingly, another Santa Monica organization, the think tank Rand Corp., concluded in a 2014 study that automated vehicles would be good for humankind because they would “likely reduce crashes, energy consumption and pollution.”

A spokeswoman for Tesla called the report “reckless.” “Given that independent studies have found Autopilot to reduce accident rates by 40 percent, it is illogical and reckless for a consumer advocacy group to try to convince consumers that the technology is bad for them,” she said via email. “This report is based on misinformation and is damaging to the very people it’s meant to protect.”

*This story was updated and edited at 1:42 p.m. Thursday, June 15, to include feedback from Tesla.