The side skirt
In both the Brown and Brenner cases, if either the driver had paid attention or the truck had had a side skirt, neither driver would be dead.
A side skirt isn't going to stop a 4,800 lb. car traveling at highway speed, or much else for that matter. Side skirts are aero mods, not underrun protection. Side underrun guards are designed to protect pedestrians, cyclists and motorcyclists, not cars. And even if there were underrun guards that could have stopped either car, the drivers would almost certainly be dead anyway: NHTSA's frontal crash test is run at 35 mph, and the IIHS's at 40. Brown was doing 74 mph, and Brenner 68. Drivers not paying attention is exactly the problem (see below).
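To put those speeds in perspective, here's a back-of-envelope calculation (my arithmetic, not from any report): kinetic energy scales with the square of speed, so the crash-test results don't come close to covering these impacts.

```python
# Crash energy scales with the square of speed; compare impact speed
# to the frontal crash-test speed the vehicle was rated at.
def energy_ratio(impact_mph: float, test_mph: float) -> float:
    """Kinetic-energy ratio between impact speed and crash-test speed."""
    return (impact_mph / test_mph) ** 2

print(round(energy_ratio(74, 35), 1))  # Brown vs. NHTSA's 35 mph test -> 4.5
print(round(energy_ratio(68, 35), 1))  # Brenner -> 3.8
```

In other words, both crashes carried roughly four times the energy the 35 mph test certifies against.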
[A side skirt] won't stop an uncontrolled car, but it would've triggered AEB, which would've reduced the speed of the crash and increased his chance of survival.
No, it wouldn't. The problem isn't the lack of a radar-significant target (the flat side of a broadside-on trailer is about as radar-significant as it gets); it's that current AEB systems (not just Tesla's) can't correctly characterize a non-moving or zero-doppler target, such as a crossing vehicle, as a threat. To date, all four fatal A/P crashes (three in the U.S., one in China), as well as several other A/P accidents (I think the total is currently five firetrucks, plus the road sweeper in China), involved AEB's failure to recognize stationary targets (such as the gore barrier in the Huang case, or stopped vehicles) or crossing targets and respond.
From the NTSB report on the Brown crash:
Current Level 2 vehicle automation technologies cannot reliably identify and respond to crossing vehicle traffic. NHTSA’s ODI report on the Tesla Models S and X, which was prompted by the Williston crash, states: “None of the companies contacted by ODI indicated that AEB systems used in their products through MY 2016 production were designed to brake for crossing path collisions” (NHTSA 2017, p. 3). As part of its defect investigation, NHTSA conducted a series of test-track-based AEB evaluations on the Tesla Model S, as well as a peer vehicle system. The testing confirmed that the Tesla AEB system avoided crashes for the majority of rear-end scenarios, and its TACC generally provided enough braking to avoid rear-end crash scenarios; but neither test vehicle effectively responded to “target vehicles” in straight crossing path or left turn across path scenarios.
The Brenner crash shows that Tesla's AEB still can't do so.
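The zero-doppler problem can be illustrated with a toy sketch. This is not Tesla's or any vendor's actual algorithm, just a common simplification of how a naive radar pipeline gates out clutter: a stationary or crossing object closes at roughly the ego vehicle's own speed, so its inferred ground speed is near zero and it gets discarded along with bridges and signs.

```python
# Toy illustration (hypothetical, not any production AEB): a naive
# radar clutter gate infers a target's ground speed from its closing
# speed and drops anything that looks stationary.
def is_braking_candidate(closing_speed_mps: float,
                         ego_speed_mps: float,
                         clutter_margin_mps: float = 2.0) -> bool:
    """Return True if the target survives the naive stationary-clutter gate."""
    # A fixed roadside object closes at exactly the ego speed, so its
    # inferred ground speed is ~0; a moving lead car closes more slowly.
    target_ground_speed = ego_speed_mps - closing_speed_mps
    return abs(target_ground_speed) > clutter_margin_mps

ego = 33.0  # ~74 mph, in m/s
print(is_braking_candidate(10.0, ego))  # slowing lead car: True (tracked)
print(is_braking_candidate(33.0, ego))  # broadside trailer, no radial motion
                                        # of its own: False (gated as clutter)
```

The broadside trailer is a strong radar return, but under this gating it is indistinguishable from an overhead sign, which is exactly the failure mode the NTSB and ODI reports describe.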
<Snip detailed account of numerous NoA safety deficiencies>
Oils4AsphaultOnly wrote: GRA wrote:
To you, does this represent an adequate level of development testing before release to the public, when failure can mean injury or death? If so, I guess you'd be fine flying on the 737 MAX, pre-fix. It sure as hell doesn't to me. You can find similar posts on TMC this week pointing to various A/P/NoA safety shortcomings still, over 3.5 years after A/P was first put into the public's hands. And none of this gets around the issue of driver disengagement, which is inevitable (see below). Here's what Waymo wrote when A/P was introduced, three years or so after stopping their own driver-assistance program (which was already working better and had more safeguards than A/P when the latter was introduced):
https://medium.com/waymo/why-were-aimin ... d4d6e227e1
Why we’re aiming for fully self-driving vehicles
Note that the volunteers said they were less stressed and more rested, just as you have said you are. But they also disengaged from driving even though they were told they must not for safety reasons, and signed a form promising to pay attention.
Here's the NHTSA study referred to in the Waymo article (the one which found that the mean time for drivers to regain control of the L2 vehicle was 17 seconds, i.e. Brown, Huang and Brenner would be, like Francisco Franco, still dead):
https://www.nhtsa.gov/sites/nhtsa.dot.g ... ncepts.pdf
Human Factors Evaluation of Level 2 and Level 3 Automated Driving Concepts
From the summary:
. . . Overall, participants greatly trusted the capabilities of the automated systems. Although this trust is essential for widespread adoption, participants were also observed prioritizing non-driving activities over the operation of the vehicle and disregarding TORs when they were presented. . . .
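To put the study's 17-second mean takeover time in perspective, here's the distance covered at Brown's speed before a typical driver regains control (simple unit conversion, my arithmetic, not from the study):

```python
# Distance traveled during the NHTSA study's mean 17 s takeover time.
def takeover_distance_m(speed_mph: float, takeover_s: float = 17.0) -> float:
    """Meters covered at the given speed during the takeover interval."""
    speed_mps = speed_mph * 0.44704  # mph -> m/s
    return speed_mps * takeover_s

print(round(takeover_distance_m(74)))  # Brown's 74 mph -> ~562 m
```

That's over half a kilometer of travel before the average driver is back in the loop, against a crossing truck that was visible for only a few seconds.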
I've been using A/P 2 for a year, and NoA for over a month. I don't have the A/P 1 reference, but I am happy with A/P 2. I know that some of the issues the other driver (perhaps she was part of the early access program?) encountered have surely been fixed by the time it got to me, because NoA waited and continued signaling in my case until the lane was clear.
Nope, a regular albeit very well informed and technically competent customer. You didn't answer my question: does this represent an acceptable level of pre-public-release development testing to you?
Oils4AsphaultOnly wrote:You keep harping about the risks, but fail to acknowledge that as it is now, A/P has saved the lives of some of those drunk/sleeping drivers. The accident rate of drivers who fall asleep must be pretty close to 100%, with death rates being a portion of that. The unassisted drunk/sleeping drivers are also most likely to involve some innocent 3rd party. The system is saving lives, DESPITE being abused.
To repeat, if your claim is correct, Tesla will have no problem turning the data and their methodology over for an independent review. Again, it's up to them to prove their claims, and if the data is so conclusive there's every advantage for them in doing it. Instead, they've resisted all calls to do so. BTW, for all we know, the reason some of those drunk/sleeping drivers decided to drive anyway was that they thought A/P would cover them. Without interviewing them, we just don't know whether that was a factor in their decision.
Oils4AsphaultOnly wrote:I'm not going to discuss the difference between waymo's method and Tesla's method of gathering training data, because that's another can of worms that we'll probably spend pages on. Just going to say that the results of one system does not translate to the other.
Human behavior does directly translate, and exactly the same human behavior was recorded in Google's tests as has been recorded in numerous internet videos of Tesla owners, even ignoring the more extreme stupid human tricks (having sex, riding in the back seat with no one in front, sleeping). Or are you saying that Tesla only sells cars to superior humans, all evidence to the contrary? In both this and the "Autonomous Vehicles, LEAF and Others" topics, I've posted numerous links over the years to scientifically conducted studies, dating from 30+ years ago to the present, on human behavior when dealing with automated control systems. While they vary somewhat in methodology and scope, every single study, bar none, has shown that:
1. Most human operators will trust autonomous systems well before those systems have achieved sufficient capability and reliability to be safer than humans;
2. As a result, they will allow themselves to be distracted and will mentally and physically disengage; and
3. Resuming control after such disengagement and taking the correct action often requires a prolonged period of Observation, Orientation, Decision and Action (the OODA loop) lasting many seconds, which is far too long in an emergency.
I know you consider yourself one of the people who never let themselves get distracted while using A/P or a similar self-driving system, and maybe you actually are such a person (there are a few), although self-assessments tend to rate one's capabilities far higher than is justified: in numerous surveys, 70+% of drivers rated their driving skills as "Above Average" or better (see "Illusory superiority"). Aside from the statistical impossibility, at least one such group was surveyed because all of them had just been found at fault in an accident. So, even if YOU are superior, most of the people using these systems aren't, and will act as indicated in points 1-3 above. Feel free to provide a link to a peer-reviewed study that found otherwise.
Oils4AsphaultOnly wrote:The only thing we agree on is that the data on accident/death rates is important. I've already pointed out that Tesla's data on number of accidents for A/P driven versus non-A/P (but still tesla cars) driven miles is especially significant, because it's self-consistent. Perhaps a request for their report to distinguish between highway and non-highway non-A/P accidents per miles driven to make the comparison more direct? Any comparisons with NHTSA statistics should be done strictly with NHTSA data, otherwise, they're not comparable.
Again, Tesla needs to prove their claims by providing the data and methodology to others, because they aren't a disinterested party. NHTSA's FARS database tracks all fatal accidents: https://www.nhtsa.gov/research-data/fat ... ystem-fars
Tesla's got as much reason to claim that A/P is safer as VW did to claim that their diesels met emissions tests, so Tesla needs to put up or shut up. If Tesla continues to fight the lawsuit by Huang's family instead of settling out of court, the info may get released despite them, because in pre-trial discovery the plaintiffs will likely (and should) insist on getting that info, given Tesla's public claims.
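The highway/non-highway split raised above matters because of a simple confound, which made-up numbers (hypothetical, not Tesla's data) make obvious: A/P miles are almost all highway miles, and highways have fewer crashes per mile than surface streets to begin with.

```python
# Hypothetical crash rates per million miles, for illustration only.
crashes_per_m_miles = {"highway": 1.0, "city": 3.0}

# A/P miles are driven almost entirely on highways.
ap_rate = crashes_per_m_miles["highway"]

# Non-A/P miles are a mix of highway and city driving (assume 50/50 here).
non_ap_rate = 0.5 * crashes_per_m_miles["highway"] + 0.5 * crashes_per_m_miles["city"]

# Even though A/P is no safer per highway mile in this example, the
# naive comparison makes it look twice as safe.
print(ap_rate, non_ap_rate)  # 1.0 2.0
print(ap_rate < non_ap_rate)  # True
```

This is why the comparison is only meaningful once non-A/P miles are split by road type, and why Tesla's aggregate figures can't settle the question on their own.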