GRA said:
Oils4AsphaultOnly said:
Ummm, no. Boeing screwed up on their UI design and pilot training. The software behaved exactly as it was programmed to do. This is a usability design issue. The only thing they have in common with Tesla's A/P is the word "autopilot".
By the same token, Tesla screwed up with the lack of "pilot training" as well as with system design and testing. Most people are completely unaware of A/P's capabilities and limitations, so the system should be designed to prevent them (to the extent possible) from operating outside those limits. You have far more interest in the subject than most customers, yet you've shown that 3 years after Brown's death you didn't understand that the problem in that accident wasn't the lack of a target; it was that Tesla's AEB system, like all other AEB systems at that time (and at least Tesla's still, as Brenner's accident confirms), doesn't recognize a crossing target as a threat. Being aware of this limitation, Cadillac chose to prevent SuperCruise's use on roads where such encounters were not only possible but common. Tesla, having chalked up one A/P-enabled customer death in that situation, chose to do nothing despite being able to change A/P to easily avoid the problem, and thus enabled a virtually identical customer death almost 3 years later. In your opinion, which company shows a greater concern for customer and public safety through design?
Boeing's failure to track down the problem in their SPS after the first occurrence (and the FAA's lack of urgency in forcing them to do so) is the same sort of casual attitude toward putting customers at risk that Tesla showed, but Tesla's case is more egregious because they could have made a simple, inexpensive change that would have prevented a recurrence. Instead, along with pointless Easter eggs, they put their effort into developing NoA, which was inadequately tested prior to initial customer deployment and unquestionably less safe than a human driver in some common situations, and the 'fix' that was rolled out some months later is just as bad if not worse.
You're conflating multiple incongruent issues again. AEB is crash mitigation, not avoidance. All the examples of why AEB didn't brake were in small-overlap type crashes, where the correct maneuver is a steering correction, not emergency braking. https://www.caranddriver.com/features/a24511826/safety-features-automatic-braking-system-tested-explained/
It has nothing to do with threat detection of a crossing vehicle, which requires path prediction.
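To make that point concrete, here's a minimal, purely illustrative sketch (the track values, thresholds, and function names are my own assumptions for the example, not anything from Tesla's or anyone else's stack) of why a crossing vehicle only registers as a threat once you project both paths forward in time, whereas a stationary obstacle dead ahead trips range-based AEB logic on its own:

```python
# Illustrative sketch only: a crossing target becomes a threat when its
# predicted path and the ego car's predicted path intersect within a short
# time horizon. All names and numbers here are made up for the example.
from dataclasses import dataclass

@dataclass
class Track:
    x: float   # position along the ego car's travel axis (m)
    y: float   # lateral position (m)
    vx: float  # velocity along the travel axis (m/s)
    vy: float  # lateral velocity (m/s)

def crossing_threat(ego: Track, other: Track, horizon_s: float = 4.0,
                    step_s: float = 0.1, collision_radius_m: float = 2.0) -> bool:
    """Propagate both tracks forward at constant velocity and flag a threat
    if they come within collision_radius_m of each other inside the horizon."""
    steps = round(horizon_s / step_s)
    for i in range(steps + 1):
        t = i * step_s
        ex, ey = ego.x + ego.vx * t, ego.y + ego.vy * t
        ox, oy = other.x + other.vx * t, other.y + other.vy * t
        if ((ex - ox) ** 2 + (ey - oy) ** 2) ** 0.5 < collision_radius_m:
            return True
    return False

# A truck crossing from the side is nowhere near the ego car's lane right now,
# so it only shows up as a threat if you project its path forward like this.
ego = Track(x=0, y=0, vx=29, vy=0)         # ~65 mph, driving straight
truck = Track(x=87, y=-21, vx=0, vy=7)     # crossing the road from the right
print(crossing_threat(ego, truck))         # True with these made-up numbers
```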
A side skirt doesn't present any permitted corrective action other than emergency braking, so yes, it would've triggered AEB. Your reference video (from when you last brought this up and I failed to address it) isn't the same situation.
And just because you think Tesla has a simple fix doesn't make it a reality. GM's SuperCruise requires no high-level logic other than "is this road on my allowed map?", since GM geofences SuperCruise to ONLY mapped highways; foul weather and construction zones are also excluded. You can inject human code into that situation, since it's a defined algorithm. You can't define your driving logic through a fixed algorithm if you want a car that can achieve full self-driving. That's why GM's SuperCruise will never advance past Level 3 autonomy (able to handle most well-defined traffic situations).
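As a rough illustration of how simple that gate is (the segment IDs and flags below are assumptions for the example, not GM's actual implementation):

```python
# Hypothetical sketch of an allowlist gate of the kind described above.
ALLOWED_HIGHWAY_SEGMENTS = {"I-80_seg_1041", "I-5_seg_0227"}  # pre-mapped divided highways

def supercruise_available(segment_id: str, foul_weather: bool, construction_zone: bool) -> bool:
    """A map lookup plus a couple of exclusion flags -- no driving-policy logic,
    which is also why this approach can't generalize to arbitrary roads."""
    if foul_weather or construction_zone:
        return False
    return segment_id in ALLOWED_HIGHWAY_SEGMENTS

print(supercruise_available("I-80_seg_1041", foul_weather=False, construction_zone=False))  # True
print(supercruise_available("Main_St_local", foul_weather=False, construction_zone=False))  # False: not on the map
```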
The driver versus pilot training analogy isn't even applicable, since sleeping at the wheel isn't a training issue.
GRA said:
Oils4AsphaultOnly said:
Waymo has been developing self-driving for almost a decade, and their cars still get into accidents and cause road rage with other drivers. At the rate they're going, they'll never have a self-driving solution that works outside of the test area.
Why yes, they do get into accidents, as is inevitable. But let's compare, shall we? Waymo (then still Google's Chauffeur program IIRR) got into its first chargeable accident on a public road seven years after they'd first started testing them there, and that was a 2 mph fender-bender when a bus driver first started to change lanes and then switched back. No injuries. All of the accidents that have occurred in Arizona have so far been the other party's fault. They haven't had a single fatal at-fault accident, or even one which resulted in serious injuries.
Tesla had its first fatal A/P accident less than 7 months after A/P was introduced to the public. Actually, I think it was less than that, as we didn't know about the one in China at the time (the video I linked to earlier showing the Tesla rear-ending the street sweeper), and Tesla has had 2 more that we know about chargeable to A/P.
Road rage is inevitable as humans interact with AVs that obey all traffic laws, but as that is one of the major reasons AVs will be safer than humans, it's just something that will have to be put up with during the transition as people get used to them. The alternative, as Tesla is doing, is to allow AVs to violate traffic laws, and that's indefensible in court and ultimately in the court of public opinion. As soon as a Tesla or any other AV kills or injures someone while violating a law, whether speeding, passing on the right, or what have you, the company will get hammered both legally and in PR. Hopefully the spillover won't take more responsible companies down with it, and the only result will be tightened gov't regs.
Waymo hasn't killed anyone because it hasn't driven fast enough to do so. At 35 mph, any non-pedestrian accident would be non-fatal. Granted, they've tackled the more difficult task of street driving, but their accident stats aren't directly comparable to Tesla's. I only brought them up to highlight the difference in scale of where their systems can be applied.
GRA said:
Oils4AsphaultOnly said:
One thing that people still seem to misunderstand, and I suspect you do too, is the claim that Tesla's FSD will be "feature-complete" by the end of the year. "Feature-complete" is a software development term indicating that the functional capabilities have been programmed in, but the product isn't release-ready yet. Usually at this point in software, under an Agile development cycle, the product is released as an alpha, and bugs are noted and fixed in the next iteration (iterations are usually released weekly, or even daily). After certain milestones have been reached, it will be considered a beta, and after that RC1 (release candidate 1).
Under this development cycle, you'll see news about FSD being tested on the roads or in the cars of people who have signed up to be part of the early access program. That isn't considered the public availability of FSD! You might hate it, but there's no substitute for real-world testing.
I have no problem whatsoever with real-world testing; indeed, that's exactly what I, CR, and every other consumer group calling for better validation testing before release to the general public are demanding, along with independent review, etc. Please re-read David Friedman's statement:
"Tesla is showing what not to do on the path toward self-driving cars: release increasingly automated driving systems that aren’t vetted properly. Before selling these systems, automakers should be required to give the public validated evidence of that system’s safety—backed by rigorous simulations, track testing, and the use of safety drivers in real-world conditions."
Funny. I wrote that to mean Tesla's method of iterating improvements and functionality into A/P, then NoA, and eventually FSD. You read it to mean Waymo's method of expanding one geo-fenced city at a time.
Which just brings us all back to my old point about speed of deployment. Waymo's method would take YEARS (if not decades) to deploy successfully, and during that time, thousands of lives will be lost that could've been saved with a method that reaches FSD faster. At least 3 lives have been saved due to A/P so far (all those DUI arrests), not counting any unreported ones where the driver made it home without being arrested. Eventually, you'll see things my way; you just don't know it yet. ;-)