Oils4AsphaultOnly wrote:
GRA wrote:
Oils4AsphaultOnly wrote: Terrific. Now let's have Tesla release all their data which Elon has claimed show that A/P-operating Teslas are safer than non-A/P cars. Professional statisticians pointed out the numerous methodological flaws behind his claims at the time he made that statement, and Consumer Reports and other auto safety organizations have asked for that data to be released. I'll be perfectly happy to acknowledge that semi-autonomous systems such as A/P have led to an overall reduction in accidents (if not a reduction in accidents that A/P is responsible for) if the data is validated by an independent entity and shown to be scientifically valid.
As I noted previously (maybe in other topics), every current TACC/AEB system is unable to handle this sort of event reliably, because it cannot distinguish real positives from false ones, and as such fails to meet the necessary safety requirements. Since people will continue to use these systems improperly, whether through misunderstanding their capabilities or through the automation complacency that misunderstanding breeds, such systems are simply too ineffective to be safe for use by the general public. As has been previously mentioned, lack of understanding of system capability has been and remains a major factor in automation-involved aviation accidents, even among highly-trained commercial/military pilots, never mind the far less qualified and trained general driving public. Until the level of idiot-proofing for these systems is much higher than it currently is, they don't belong in the public sphere. As an extreme example of automation complacency: https://www.youtube.com/watch?v=pJ4-2d7C6gg
Then your opinion runs contrary to Consumers Union's: https://www.consumerreports.org/car-saf ... ing-guide/
Safer is better than waiting for safest. Considering your tagline, you're a hypocrite.
No hypocrisy at all - I'm a big fan of AEB, as it provides an extra level of safety backstopping a human driver, and if/when I buy a new car it must be equipped with AEB. But AEB isn't a substitute for a human driver, which is what AVs must be. OBTW, about that NHTSA stat you quoted:
TESLA'S FAVORITE AUTOPILOT SAFETY STAT JUST DOESN'T HOLD UP
For more than a year, Tesla has defended its semiautonomous Autopilot as a vital, life-saving feature. CEO Elon Musk has lambasted journalists who write about crashes involving the system. “It's really incredibly irresponsible of any journalists with integrity to write an article that would lead people to believe that autonomy is less safe,” he said during a tumultuous earnings call this week. “Because people might actually turn it off, and then die.”
This wasn’t the first time Musk has made this argument about Autopilot, which keeps the car in its lane and a safe distance from other vehicles but requires constant human oversight, and has been involved in two fatal crashes in the US. “Writing an article that’s negative, you’re effectively dissuading people from using autonomous vehicles, you’re killing people,” he said on an October 2016 conference call.
Wednesday’s haranguing, however, came a few hours after the National Highway Traffic Safety Administration (NHTSA) indicated that Tesla has been misconstruing the key statistic it uses to defend its technology. Over the past year and a half, Tesla spokespeople have repeatedly said that the agency has found Autopilot to reduce crash rates by 40 percent. They repeated it most recently after the death of a Northern California man whose Model X crashed into a highway safety barrier while in Autopilot mode in March.
Now NHTSA says that’s not exactly right—and there’s no clear evidence for how safe the pseudo-self-driving feature actually is.
The remarkable stat comes from a January 2017 report that summarized NHTSA’s investigation into the death of Joshua Brown, whose Model S crashed into a truck turning across its path while in Autopilot mode. According to its data, model year 2014 through 2016 Teslas saw 1.3 airbag deployments per million miles, before Tesla made Autopilot available via an over-the-air software update. Afterward, the rate was 0.8 per million miles. “The data show that the Tesla vehicles' crash rate dropped by almost 40 percent after Autosteer installation,” the investigators concluded.
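As an aside, the "almost 40 percent" is nothing more than the relative drop between those two rates. Here's a minimal sketch of the arithmetic, using only the two figures the report cites (everything behind them is unpublished, Tesla-supplied data):

```python
# The only two published inputs behind NHTSA's "almost 40 percent" claim.
before = 1.3  # airbag deployments per million miles, before Autosteer
after = 0.8   # airbag deployments per million miles, after Autosteer

# Relative reduction in the deployment rate.
reduction = (before - after) / before
print(f"{reduction:.1%}")  # 38.5% -- rounded up to "almost 40 percent"
```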
Just a few problems. First, as reported by Reuters and confirmed to WIRED, NHTSA has reiterated that its data came from Tesla, and has not been verified by an independent party (as it noted in a footnote in the report). Second, it says its investigators did not consider whether the driver was using Autopilot at the time of each crash. (Reminder: Drivers are only supposed to use Autopilot in very specific contexts.) And third, airbag deployments are an inexact proxy for crashes. Especially considering that in the death that triggered the investigation, the airbags did not deploy.
Tesla declined to comment on NHTSA’s clarification.
The statistic has been the subject of controversy for some time. The research firm Quality Control Systems Corp. has filed a Freedom of Information Act lawsuit against NHTSA for the underlying data in that 2017 report, which it hopes to use to determine whether the 40 percent figure is valid. NHTSA has thus far denied its FOIA requests, saying it agreed to Tesla’s requests to keep the data confidential, and that its release could threaten the carmaker’s competitiveness.
Tesla’s oft-touted figure is flawed for another reason, experts say: With this data set, you can’t separate the role of Autopilot from that of automatic emergency braking, which Tesla began releasing just a few months before Autopilot. According to the Insurance Institute for Highway Safety, vehicles that can detect imminent collisions and hit the brakes on their own suffer half as many rear-end crashes as those that can’t. (More than 99 percent of cars Tesla produced in 2017 came equipped with the feature standard, a higher proportion than any other carmaker.)
Which is all to say, determining whether a new feature like Autopilot is safe, especially if you don’t have access to lots of replicable, third-party data, is super, super hard. Tesla’s beloved 40 percent figure comes with so many caveats, it’s unreliable.
The Insurance Institute for Highway Safety has tried to come at the question another way, by looking at the frequency of insurance claims. When it tried to separate Model S sedan incidents after Autopilot was released, it observed no changes in the frequency of property damage and bodily injury liability claims. That indicates that Autopilot drivers aren’t more or less likely to damage their cars or get hurt than others. But it did find a 13 percent reduction in collision claim frequency, indicating sedans with Autopilot enabled got into fewer crashes that resulted in collision claims to insurers.
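To illustrate what "claim frequency" means here, a toy sketch follows; the counts are invented (IIHS's actual exposure data isn't public), and only the metric's definition is real. It shows how a 13 percent reduction would fall out of such a comparison:

```python
# Toy illustration of an insurance claim-frequency comparison.
# All counts below are hypothetical; the metric is real:
# frequency = claims per insured vehicle year, compared across two cohorts.

def claim_frequency(claims: int, insured_vehicle_years: float) -> float:
    return claims / insured_vehicle_years

freq_without_ap = claim_frequency(claims=1_000, insured_vehicle_years=10_000.0)
freq_with_ap = claim_frequency(claims=870, insured_vehicle_years=10_000.0)

change = (freq_with_ap - freq_without_ap) / freq_without_ap
print(f"{change:+.0%}")  # -13% with these made-up numbers
```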
Oh, but it gets more complicated. IIHS couldn’t tell which crashes actually involved the use of Autopilot, and not just sedans equipped with Autopilot. And it’s way too early for definitive answers. “Since other safety technologies are layered below Autopilot, it is difficult to tease out results for Autopilot alone at this time,” says Russ Rader, an IIHS spokesperson. “Data on insurance claims for the Model S are still thin.”
Over at MIT, researchers frustrated with the dearth of good info on Autopilot and other semiautonomous car features have launched their own lines of inquiry. Human guinea pigs are now driving sensor- and camera-laden Teslas, Volvos, and Range Rovers around the Boston area. The researchers will use the data they generate to understand how safely humans operate those vehicles.
The upshot is that Autopilot might, in fact, be saving a ton of lives. Or maybe not. We just don’t know. And Tesla hasn’t been transparent with its own numbers. “You would need a rigorous statistical analysis with clear data indicating what vehicle has it and what vehicle doesn’t and whether it’s enabled or whether it isn’t,” says David Friedman, a former NHTSA official who now directs car policy at Consumers Union. Tesla said this week that it would begin publishing quarterly Autopilot safety statistics, but did not indicate whether its data would be verified by a third party.
NHTSA, too, could be doing a better job of holding innovative but opaque carmakers like Tesla accountable for proving the safety of their new tech. “To me, they should be more transparent by asking Tesla for disengagements of the system: How often the systems disengaged, how often the humans need to take over,” Friedman says. California’s Department of Motor Vehicles requires companies testing autonomous vehicles in the state to provide annual data on disengagements, to help officials understand the limitations of the tech and its progress.
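For reference, the headline metric those California DMV reports make possible is simple. A hedged sketch (the field names here are illustrative, not the DMV's actual schema):

```python
# Sketch of the disengagement metric California's DMV reporting enables.
# Field names are illustrative; real reports list individual disengagement events.
from dataclasses import dataclass

@dataclass
class DisengagementReport:
    company: str
    autonomous_miles: float  # miles driven in autonomous mode during the year
    disengagements: int      # times a human had to take over

    def miles_per_disengagement(self) -> float:
        # Higher is better: longer stretches between human takeovers.
        return self.autonomous_miles / max(self.disengagements, 1)

r = DisengagementReport("ExampleCo", autonomous_miles=350_000.0, disengagements=105)
print(f"{r.miles_per_disengagement():,.0f} miles per disengagement")  # 3,333
```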
Tesla is not alone among carmakers in trying to shield sensitive info from the public. But today, humans are deeply bewildered about the semiautonomous features that have already made their way into everyday drivers’ garages. . . .
Tesla still hadn't released the data it says supports its claim, which led this past May to the following:
Consumer Groups Demand FTC Investigation Into Tesla Autopilot
On Wednesday, the Center for Auto Safety and Consumer Watchdog mailed a joint request to the chairman of the Federal Trade Commission, Joseph Simons, requesting that the FTC investigate how Tesla Motors has marketed its controversial "Autopilot" semiautonomous driver aid suite.
In the letter, the two organizations accuse Tesla of "deceiving and misleading consumers into believing that the Autopilot feature of its vehicles is safer and more capable than it actually is." The groups cite two known deaths and one injury as a result of drivers relying on Autopilot to control their vehicle as reason to investigate the marketing of Autopilot. They insist that the FTC examine Tesla's advertising practices surrounding the feature to determine whether Tesla can be faulted for its customers' misuses of Autopilot. . . .
Since it's Tesla-specific examples you want, here's another:
The Tesla’s automated vehicle control system was not designed to, and could not, identify the truck crossing the Tesla’s path or recognize the impending crash. Therefore, the system did not slow the car, the forward collision warning system did not provide an alert, and the automatic emergency braking did not activate.
The Tesla driver’s pattern of use of the Autopilot system indicated an over-reliance on the automation and a lack of understanding of the system limitations.
If automated vehicle control systems do not automatically restrict their own operation to conditions for which they were designed and are appropriate, the risk of driver misuse remains.
The way in which the Tesla “Autopilot” system monitored and responded to the driver’s interaction with the steering wheel was not an effective method of ensuring driver engagement.
"An over-reliance on the automation. . .," Automation complacency, anyone?
"and lack of understanding of the system limitations". H'mm, mandatory pre-purchase and recurrent training requirements?
"The Tesla’s automated vehicle control system was not designed to, and could not, identify the truck crossing the Tesla’s path or recognize the impending crash. Therefore, the system did not slow the car, the forward collision warning system did not provide an alert, and the automatic emergency braking did not activate . . . If automated vehicle control systems do not automatically restrict their own operation to conditions for which they were designed and are appropriate, the risk of driver misuse remains." H'mm, full disclosure of autonomous system limitations (missing in this case, as no mention of the lack of ability to detect and properly classify crossing traffic had been made by Tesla or anyone else to the public prior to this crash), along with more mandatory driver training, or else (preferred) "restricting operation to only those conditions for which they are designed and are appropriate."
“While automation in highway transportation has the potential to save tens of thousands of lives, until that potential is fully realized, people still need to safely drive their vehicles,” said NTSB Chairman Robert L. Sumwalt III. “Smart people around the world are hard at work to automate driving, but systems available to consumers today, like Tesla’s ‘Autopilot’ system, are designed to assist drivers with specific tasks in limited environments. These systems require the driver to pay attention all the time and to be able to take over immediately when something goes wrong. System safeguards, that should have prevented the Tesla’s driver from using the car’s automation system on certain roadways, were lacking and the combined effects of human error and the lack of sufficient system safeguards resulted in a fatal collision that should not have happened,” said Sumwalt.
That's from the Joshua Brown NTSB investigation findings.
Do you think the NTSB isn't going to reach many of the same findings in the death of Walter Huang? I mean, supposedly he'd experienced the same problem at the same location before when using A/P, and yet he still chose to put his life in A/P's hands in the same place. If that isn't an example of automation complacency, what is? Then Tesla claimed that there'd been a couple hundred thousand cases of cars using A/P successfully negotiating that very same stretch, which was really dumb of them considering legal liability, especially once amateur video appeared of people duplicating the accident conditions and showing A/P having exactly the same problem dealing with a freeway gore (in fact, in at least one video, the very same gore) where Huang died.
Understand, I'm a huge fan of AVs being deployed as quickly as is safe, but I'm not a fan of any vehicle design that puts into the public's hands immature systems that are less safe than human drivers, or less safe and effective than existing alternatives (e.g., touchscreens versus physical controls). Until someone provides evidence that these systems actually are considerably safer (at least overall), no one should be put at risk by them.