GRA
Posts: 10868
Joined: Mon Sep 19, 2011 1:49 pm
Location: East side of San Francisco Bay

Re: Tesla's autopilot, on the road

Wed May 29, 2019 5:45 pm

Oils4AsphaultOnly wrote:
GRA wrote:
Oils4AsphaultOnly wrote:
Since GRA won't post this, I thought I'd do you both a favor and post it: https://insideevs.com/news/351110/consu ... pilot/amp/

Turns out, CR actually approves of NoA. They just don't like the version that does the lane-changes for you. Go figure. I'll still make my own judgements based upon the actual use of the product though, instead of hearsay.
If I'd seen the article (and IEVS' headline had accurately reflected CR's views), I'd have been happy to post it, but as I posted the link to and directly quoted from CR's own release, why bother to filter it through a particular forum? Your claim that CR approves of NoA is without foundation.
You're right. I'll eat crow here. "CR was fine with it otherwise", is not the same as "approving".
Oils4AsphaultOnly wrote:
GRA wrote: From an article on GCR dated May 24th: https://www2.greencarreports.com/news/1 ... orts-finds

The article goes on to quote David Friedman about Tesla's pre-public release testing regimen:
IOW, pretty much what the FAA failed to ensure Boeing did adequately in the case of the 737 Max, the difference being that in an airliner accident people die by the hundreds, while for cars the total per individual accident is much smaller, but the number of accidents is far greater.
Ummm, no. Boeing screwed up on their UI design and pilot training. The software behaved exactly as it was programmed to do. This is a usability design issue. The only thing they have in common with Tesla's A/P is the word "autopilot".
By the same token, Tesla screwed up with the lack of "pilot training" as well as the system design and testing, as most people are completely unaware of A/Ps capabilities and limitations, so the system should be designed to prevent them (to the extent possible) from operating outside its limits. You have far more interest in the subject than most customers, yet you've shown that 3 years after Brown's death you didn't understand that the problem in that accident wasn't the lack of a target, it was that Tesla's AEB system as well as all other AEB systems at that time (and at least Tesla's still, as Brenner's accident confirms) don't recognize a crossing target as a threat. Being aware of this limitation, Cadillac chose to prevent SuperCruise's use on roads where such occurrences were not only possible but common. Tesla, having chalked up one A/P-enabled customer death in that situation, chose to do nothing despite being able to change A/P to easily avoid the problem, and thus enabled a virtually identical customer death almost 3 years later. In your opinion, which company shows a greater concern for customer and public safety through design?

Boeing's failure to track down the problem in their SPS after the first occurrence (and the FAA's lack of urgency in forcing them to do so) is the same sort of casual attitude to putting customers at risk as Tesla showed, but Tesla's case is more egregious because they could make a simple, inexpensive change that would have prevented a re-occurrence. Instead, as well as pointless Easter Eggs they put their effort into developing NoA which was inadequately tested prior to initial customer deployment, unquestionably less safe than a human driver in some common situations, and the 'fix' which was rolled out some months later is just as bad if not worse.
Oils4AsphaultOnly wrote:
GRA wrote: For an example of exactly the opposite approach to development testing compared to Tesla, and one which I obviously believe is necessary, see the following article. BTW, in a previous post you stated that there hadn't been any backlash owing to self-driving car accidents. I meant to reply at the time, but got distracted. In fact, as noted below there was a major backlash after the Herzberg death, and those where self-driving vehicles kill non-occupants are the ones that I'm worried will set back the development and deployment of AVs. The general public is far more worried about being put at risk by self-driving cars that they aren't in. Anyone who's riding in one has volunteered to act as a crash-test dummy for the company, so people aren't as concerned about those deaths as they are when an AV kills a non-occupant, potentially themselves: https://www.forbes.com/sites/alanohnsma ... 6c74e11124
Waymo had been developing self-driving for almost a decade, and their car still gets into accidents and causes road rage with other drivers. At the rate they're going, they'll never have a self-driving solution that can work outside of the test area.
Why yes, they do get into accidents, as is inevitable. But let's compare, shall we? Waymo (then still Google's Chauffeur program IIRR) got into its first chargeable accident on a public road seven years after they'd first started testing them there, and that was a 2 mph fender-bender when a bus driver first started to change lanes and then switched back. No injuries. All of the accidents that have occurred in Arizona have so far been the other party's fault. They haven't had a single fatal at-fault accident, or even one which resulted in serious injuries.

Tesla had its first fatal A/P accident less than 7 months after A/P was introduced to the public. Actually, I think it was less than that, as we didn't know about the one in China at the time (the video I linked to earlier showing the Tesla rear-ending the street sweeper), and Tesla has had 2 more that we know of chargeable to A/P.

Road rage is inevitable as humans interact with AVs that obey all traffic laws, but as that is one of the major reasons AVs will be safer than humans, it's just something that will have to be put up with during the transition as people get used to them. The alternative, as Tesla is doing, is to allow AVs to violate traffic laws, and that's indefensible in court and ultimately in the court of public opinion. As soon as a Tesla or any other AV kills or injures someone while violating a law, whether speeding, passing on the right, or what have you, the company will get hammered both legally and in PR. Hopefully the spillover won't take more responsible companies with it, and only tightened gov't regs will result.
Oils4AsphaultOnly wrote:One thing that people still seem to misunderstand and I suspect you do too, is the claim that Tesla's FSD will be "feature-complete" by the end of the year. "Feature-complete" is a software development term indicating that the functional capabilities have been programmed in, but it's not release ready yet. Usually at this point in software, when under an Agile development cycle, the product is released in alpha, and bugs are noted and released in the next iteration (usually iterations are released weekly, or even daily). After certain milestones have been reached, it will be considered beta, and after that RC1 (release candidate).

Under this development cycle, you'll see news about FSD being tested on the roads or in people's cars (who have signed up to be part of the early access program). That isn't considered the public availability of FSD! You might hate it, but there's no substitute for real-world testing.
I have no problem whatsoever with real-world testing, indeed, that's exactly what I, CR and every other consumer group calling for better validation testing before release to the general public are demanding, along with independent review etc. Please re-read David Friedman's statement:
"Tesla is showing what not to do on the path toward self-driving cars: release increasingly automated driving systems that aren’t vetted properly. Before selling these systems, automakers should be required to give the public validated evidence of that system’s safety—backed by rigorous simulations, track testing, and the use of safety drivers in real-world conditions."
Guy [I have lots of experience designing/selling off-grid AE systems, some using EVs but don't own one. Local trips are by foot, bike and/or rapid transit].

The 'best' is the enemy of 'good enough'. Copper shot, not Silver bullets.

lorenfb
Posts: 2243
Joined: Tue Dec 17, 2013 10:53 pm
Delivery Date: 22 Nov 2013
Leaf Number: 416635
Location: SoCal

Re: Tesla's autopilot, on the road

Wed May 29, 2019 8:09 pm

Have to refer back to this A/P scenario:

https://youtu.be/YUnRTNdxMGk

How difficult is/was it for Tesla's A/P system designers to write an OTA patch to avoid that? Surely the A/P knows when it enters an intersection and should be able to differentiate between a spaced double-yellow line and a single spaced white line. You would have thought the A/P would have captured multiple such images over the many years it has been in on-the-road development. Hopefully, unlike the semi-trailer repeat, this system failure won't recur and result in an accident/death next time. Furthermore, how does a QC department allow such a marginal product to be released to production? By the way, does Tesla even have a QC department?

Totally incredible! Would Elon use this appropriate nomenclature, "a total FU"?
Last edited by lorenfb on Wed May 29, 2019 10:36 pm, edited 1 time in total.
#1 Leaf SL MY 9/13: 74K miles, 48 Ahrs, 5.2 miles/kWh (average), Hx=70, SOH=78, L2 - 100% > 1000, temp < 95F, (DOD) > 20 Ahrs
#2 Leaf SL MY 12/18: 4.5K miles, 115 Ahrs, 5.5 miles/kWh (average), Hx=98, SOH=99, DOD > 20%, temp < 105F

Oils4AsphaultOnly
Posts: 686
Joined: Sat Oct 10, 2015 4:09 pm
Delivery Date: 20 Nov 2016
Leaf Number: 313890
Location: Arcadia, CA

Re: Tesla's autopilot, on the road

Wed May 29, 2019 10:36 pm

GRA wrote:
Oils4AsphaultOnly wrote:
GRA wrote: From an article on GCR dated May 24th: https://www2.greencarreports.com/news/1 ... orts-finds

The article goes on to quote David Friedman about Tesla's pre-public release testing regimen:
IOW, pretty much what the FAA failed to ensure Boeing did adequately in the case of the 737 Max, the difference being that in an airliner accident people die by the hundreds, while for cars the total per individual accident is much smaller, but the number of accidents is far greater.
Ummm, no. Boeing screwed up on their UI design and pilot training. The software behaved exactly as it was programmed to do. This is a usability design issue. The only thing they have in common with Tesla's A/P is the word "autopilot".
By the same token, Tesla screwed up with the lack of "pilot training" as well as the system design and testing, as most people are completely unaware of A/Ps capabilities and limitations, so the system should be designed to prevent them (to the extent possible) from operating outside its limits. You have far more interest in the subject than most customers, yet you've shown that 3 years after Brown's death you didn't understand that the problem in that accident wasn't the lack of a target, it was that Tesla's AEB system as well as all other AEB systems at that time (and at least Tesla's still, as Brenner's accident confirms) don't recognize a crossing target as a threat. Being aware of this limitation, Cadillac chose to prevent SuperCruise's use on roads where such occurrences were not only possible but common. Tesla, having chalked up one A/P-enabled customer death in that situation, chose to do nothing despite being able to change A/P to easily avoid the problem, and thus enabled a virtually identical customer death almost 3 years later. In your opinion, which company shows a greater concern for customer and public safety through design?

Boeing's failure to track down the problem in their SPS after the first occurrence (and the FAA's lack of urgency in forcing them to do so) is the same sort of casual attitude to putting customers at risk as Tesla showed, but Tesla's case is more egregious because they could make a simple, inexpensive change that would have prevented a re-occurrence. Instead, as well as pointless Easter Eggs they put their effort into developing NoA which was inadequately tested prior to initial customer deployment, unquestionably less safe than a human driver in some common situations, and the 'fix' which was rolled out some months later is just as bad if not worse.
You're conflating multiple incongruent issues again. AEB is crash mitigation, not avoidance. All the examples of why AEB didn't brake were in small-overlap type crashes, where the correct maneuver is a steering correction, not emergency braking. https://www.caranddriver.com/features/a ... explained/

It has nothing to do with threat detection of a crossing vehicle (requires path prediction).

A side skirt doesn't present any permitted corrective action other than emergency braking. So yes, it would've triggered AEB. Your reference video (from when you last brought this up and I failed to address it) isn't the same situation.

And just because you think Tesla has a simple fix doesn't make it a reality. GM's SuperCruise requires no high level logic other than, "is this road on my allowed map?", since GM geofences supercruise to ONLY mapped highways. Foul weather and construction zones are also excluded. You can inject human code into that situation, since it's a defined algorithm. You can't define your driving logic through a fixed algorithm if you want a car that can achieve full self-driving. That's why GM's supercruise will never advance past level 3 autonomy (can handle most well-defined traffic situations).
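To make that contrast concrete, here's a toy Python sketch of the kind of procedural gate I mean (the road IDs and conditions are invented for illustration -- GM's actual map data and logic aren't public):

Code:

# Hypothetical whitelist gate in the spirit of "is this road on my allowed map?"
# Road IDs and exclusions are made up for illustration; not GM's real implementation.
ALLOWED_ROAD_SEGMENTS = {"I-5_mp_100_110", "I-405_mp_20_45"}  # pre-mapped divided highways

def may_engage(road_segment, weather_ok, construction_zone):
    """Every condition is an explicit, human-auditable rule."""
    if road_segment not in ALLOWED_ROAD_SEGMENTS:
        return False  # not on the high-definition map -> refuse to engage
    if not weather_ok:
        return False  # foul weather excluded
    if construction_zone:
        return False  # construction zones excluded
    return True

print(may_engage("I-5_mp_100_110", weather_ok=True, construction_zone=False))       # True
print(may_engage("CA-2_surface_street", weather_ok=True, construction_zone=False))  # False

That kind of check is easy to bolt on precisely because it sits outside the driving logic; you can't write an equivalent if-statement inside a learned driving policy.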

The driver versus pilot training analogy isn't even applicable, since sleeping at the wheel isn't a training issue.
GRA wrote:
Oils4AsphaultOnly wrote:
GRA wrote: For an example of exactly the opposite approach to development testing compared to Tesla, and one which I obviously believe is necessary, see the following article. BTW, in a previous post you stated that there hadn't been any backlash owing to self-driving car accidents. I meant to reply at the time, but got distracted. In fact, as noted below there was a major backlash after the Herzberg death, and those where self-driving vehicles kill non-occupants are the ones that I'm worried will set back the development and deployment of AVs. The general public is far more worried about being put at risk by self-driving cars that they aren't in. Anyone who's riding in one has volunteered to act as a crash-test dummy for the company, so people aren't as concerned about those deaths as they are when an AV kills a non-occupant, potentially themselves: https://www.forbes.com/sites/alanohnsma ... 6c74e11124
Waymo had been developing self-driving for almost a decade, and their car still gets into accidents and causes road rage with other drivers. At the rate they're going, they'll never have a self-driving solution that can work outside of the test area.
Why yes, they do get into accidents, as is inevitable. But let's compare, shall we? Waymo (then still Google's Chauffeur program IIRR) got into its first chargeable accident on a public road seven years after they'd first started testing them there, and that was a 2 mph fender-bender when a bus driver first started to change lanes and then switched back. No injuries. All of the accidents that have occurred in Arizona have so far been the other party's fault. They haven't had a single fatal at-fault accident, or even one which resulted in serious injuries.

Tesla had its first fatal A/P accident less than 7 months after A/P was introduced to the public. Actually, I think it was less than that, as we didn't know about the one in China at the time (the video I linked to earlier showing the Tesla rear-ending the street sweeper), and Tesla has had 2 more that we know of chargeable to A/P.

Road rage is inevitable as humans interact with AVs that obey all traffic laws, but as that is one of the major reasons AVs will be safer than humans, it's just something that will have to be put up with during the transition as people get used to them. The alternative, as Tesla is doing, is to allow AVs to violate traffic laws, and that's indefensible in court and ultimately in the court of public opinion. As soon as a Tesla or any other AV kills or injures someone while violating a law, whether speeding, passing on the right, or what have you, the company will get hammered both legally and in PR. Hopefully the spillover won't take more responsible companies with it, and only tightened gov't regs will result.
Waymo hasn't killed anyone, because it hasn't driven fast enough to do so. At 35mph, any non-pedestrian accidents would be non-fatal. Granted they've tackled the more difficult task of street driving, but their accident stats aren't directly comparable to Tesla's. I only brought them up to highlight the difference in scale of where their systems can be applied.
GRA wrote:
Oils4AsphaultOnly wrote:One thing that people still seem to misunderstand and I suspect you do too, is the claim that Tesla's FSD will be "feature-complete" by the end of the year. "Feature-complete" is a software development term indicating that the functional capabilities have been programmed in, but it's not release ready yet. Usually at this point in software, when under an Agile development cycle, the product is released in alpha, and bugs are noted and released in the next iteration (usually iterations are released weekly, or even daily). After certain milestones have been reached, it will be considered beta, and after that RC1 (release candidate).

Under this development cycle, you'll see news about FSD being tested on the roads or in people's cars (who have signed up to be part of the early access program). That isn't considered the public availability of FSD! You might hate it, but there's no substitute for real-world testing.
I have no problem whatsoever with real-world testing, indeed, that's exactly what I, CR and every other consumer group calling for better validation testing before release to the general public are demanding, along with independent review etc. Please re-read David Friedman's statement:
"Tesla is showing what not to do on the path toward self-driving cars: release increasingly automated driving systems that aren’t vetted properly. Before selling these systems, automakers should be required to give the public validated evidence of that system’s safety—backed by rigorous simulations, track testing, and the use of safety drivers in real-world conditions."
Funny. I wrote that to mean Tesla's method of iterating improvements and functionality into A/P, then NoA, and eventually FSD. You read it to mean Waymo's method of iterating from one geo-fenced city at a time.

Which just brings us all back to my old point of speed of deployment. Waymo's method would take YEARS (if not decades) to successfully deploy, and during that time, thousands of lives will be lost that could've been saved with a method that reaches FSD faster. At least 3 lives have been saved (all those DUI arrests) due to A/P so far, not counting any unreported ones where the driver made it home without being arrested. Eventually, you'll see things my way, you just don't know it yet. ;-)
:: Model 3 LR :: acquired 9 May '18
:: Leaf S30 :: build date: Sep '16 :: purchased: Nov '16
100% Zero transportation emissions (except when I walk) and loving it!

lorenfb
Posts: 2243
Joined: Tue Dec 17, 2013 10:53 pm
Delivery Date: 22 Nov 2013
Leaf Number: 416635
Location: SoCal

Re: Tesla's autopilot, on the road

Wed May 29, 2019 10:44 pm

Oils4AsphaultOnly wrote:
GRA wrote:
Oils4AsphaultOnly wrote: Ummm, no. Boeing screwed up on their UI design and pilot training. The software behaved exactly as it was programmed to do. This is a usability design issue. The only thing they have in common with Tesla's A/P is the word "autopilot".
By the same token, Tesla screwed up with the lack of "pilot training" as well as the system design and testing, as most people are completely unaware of A/Ps capabilities and limitations, so the system should be designed to prevent them (to the extent possible) from operating outside its limits. You have far more interest in the subject than most customers, yet you've shown that 3 years after Brown's death you didn't understand that the problem in that accident wasn't the lack of a target, it was that Tesla's AEB system as well as all other AEB systems at that time (and at least Tesla's still, as Brenner's accident confirms) don't recognize a crossing target as a threat. Being aware of this limitation, Cadillac chose to prevent SuperCruise's use on roads where such occurrences were not only possible but common. Tesla, having chalked up one A/P-enabled customer death in that situation, chose to do nothing despite being able to change A/P to easily avoid the problem, and thus enabled a virtually identical customer death almost 3 years later. In your opinion, which company shows a greater concern for customer and public safety through design?

Boeing's failure to track down the problem in their SPS after the first occurrence (and the FAA's lack of urgency in forcing them to do so) is the same sort of casual attitude to putting customers at risk as Tesla showed, but Tesla's case is more egregious because they could make a simple, inexpensive change that would have prevented a re-occurrence. Instead, as well as pointless Easter Eggs they put their effort into developing NoA which was inadequately tested prior to initial customer deployment, unquestionably less safe than a human driver in some common situations, and the 'fix' which was rolled out some months later is just as bad if not worse.
You're conflating multiple incongruent issues again. AEB is crash mitigation, not avoidance. All the examples of why AEB didn't brake were in small-overlap type crashes, where the correct maneuver is a steering correction, not emergency braking. https://www.caranddriver.com/features/a ... explained/

It has nothing to do with threat detection of a crossing vehicle (requires path prediction).

A side skirt doesn't present any permitted corrective action other than emergency braking. So yes, it would've triggered AEB. Your reference video (from when you last brought this up and I failed to address it) isn't the same situation.

And just because you think Tesla has a simple fix doesn't make it a reality. GM's SuperCruise requires no high level logic other than, "is this road on my allowed map?", since GM geofences supercruise to ONLY mapped highways. Foul weather and construction zones are also excluded. You can inject human code into that situation, since it's a defined algorithm. You can't define your driving logic through a fixed algorithm if you want a car that can achieve full self-driving. That's why GM's supercruise will never advance past level 3 autonomy (can handle most well-defined traffic situations).

The driver versus pilot training analogy isn't even applicable, since sleeping at the wheel isn't a training issue.
GRA wrote:
Oils4AsphaultOnly wrote: Waymo had been developing self-driving for almost a decade, and their car still gets into accidents and causes road rage with other drivers. At the rate they're going, they'll never have a self-driving solution that can work outside of the test area.
Why yes, they do get into accidents, as is inevitable. But let's compare, shall we? Waymo (then still Google's Chauffeur program IIRR) got into its first chargeable accident on a public road seven years after they'd first started testing them there, and that was a 2 mph fender-bender when a bus driver first started to change lanes and then switched back. No injuries. All of the accidents that have occurred in Arizona have so far been the other party's fault. They haven't had a single fatal at-fault accident, or even one which resulted in serious injuries.

Tesla had its first fatal A/P accident less than 7 months after A/P was introduced to the public. Actually, I think it was less than that, as we didn't know about the one in China at the time (the video I linked to earlier showing the Tesla rear-ending the street sweeper), and Tesla has had 2 more that we know of chargeable to A/P.

Road rage is inevitable as humans interact with AVs that obey all traffic laws, but as that is one of the major reasons AVs will be safer than humans, it's just something that will have to be put up with during the transition as people get used to them. The alternative, as Tesla is doing, is to allow AVs to violate traffic laws, and that's indefensible in court and ultimately in the court of public opinion. As soon as a Tesla or any other AV kills or injures someone while violating a law, whether speeding, passing on the right, or what have you, the company will get hammered both legally and in PR. Hopefully the spillover won't take more responsible companies with it, and only tightened gov't regs will result.
Waymo hasn't killed anyone, because it hasn't driven fast enough to do so. At 35mph, any non-pedestrian accidents would be non-fatal. Granted they've tackled the more difficult task of street driving, but their accident stats aren't directly comparable to Tesla's. I only brought them up to highlight the difference in scale of where their systems can be applied.
GRA wrote:
Oils4AsphaultOnly wrote:One thing that people still seem to misunderstand and I suspect you do too, is the claim that Tesla's FSD will be "feature-complete" by the end of the year. "Feature-complete" is a software development term indicating that the functional capabilities have been programmed in, but it's not release ready yet. Usually at this point in software, when under an Agile development cycle, the product is released in alpha, and bugs are noted and released in the next iteration (usually iterations are released weekly, or even daily). After certain milestones have been reached, it will be considered beta, and after that RC1 (release candidate).

Under this development cycle, you'll see news about FSD being tested on the roads or in people's cars (who have signed up to be part of the early access program). That isn't considered the public availability of FSD! You might hate it, but there's no substitute for real-world testing.
I have no problem whatsoever with real-world testing, indeed, that's exactly what I, CR and every other consumer group calling for better validation testing before release to the general public are demanding, along with independent review etc. Please re-read David Friedman's statement:
"Tesla is showing what not to do on the path toward self-driving cars: release increasingly automated driving systems that aren’t vetted properly. Before selling these systems, automakers should be required to give the public validated evidence of that system’s safety—backed by rigorous simulations, track testing, and the use of safety drivers in real-world conditions."
Funny. I wrote that to mean Tesla's method of iterating improvements and functionality into A/P, then NoA, and eventually FSD. You read it to mean Waymo's method of iterating from one geo-fenced city at a time.

Which just brings us all back to my old point of speed of deployment. Waymo's method would take YEARS (if not decades) to successfully deploy, and during that time, thousands of lives will be lost that could've been saved with a method that reaches FSD faster. At least 3 lives have been saved (all those DUI arrests) due to A/P so far, not counting any unreported ones where the driver made it home without being arrested. Eventually, you'll see things my way, you just don't know it yet. ;-)
Your and GRA's discussions about A/P statistics reach the ad nauseam level like over on the Toyota Mirai FCEV thread.
#1 Leaf SL MY 9/13: 74K miles, 48 Ahrs, 5.2 miles/kWh (average), Hx=70, SOH=78, L2 - 100% > 1000, temp < 95F, (DOD) > 20 Ahrs
#2 Leaf SL MY 12/18: 4.5K miles, 115 Ahrs, 5.5 miles/kWh (average), Hx=98, SOH=99, DOD > 20%, temp < 105F

Oils4AsphaultOnly
Posts: 686
Joined: Sat Oct 10, 2015 4:09 pm
Delivery Date: 20 Nov 2016
Leaf Number: 313890
Location: Arcadia, CA

Re: Tesla's autopilot, on the road

Wed May 29, 2019 11:55 pm

lorenfb wrote:
Your and GRA's discussions about A/P statistics reach the ad nauseam level like over on the Toyota Mirai FCEV thread.

per xkcd: https://xkcd.com/386/
:: Model 3 LR :: acquired 9 May '18
:: Leaf S30 :: build date: Sep '16 :: purchased: Nov '16
100% Zero transportation emissions (except when I walk) and loving it!

GRA
Posts: 10868
Joined: Mon Sep 19, 2011 1:49 pm
Location: East side of San Francisco Bay

Re: Tesla's autopilot, on the road

Thu May 30, 2019 3:44 pm

Oils4AsphaultOnly wrote:
GRA wrote:International Business Times:
Tesla Autopilot Safety Issues Continue As EV Slams Into Another Car
https://www.ibtimes.com/tesla-autopilot ... ar-2795153

Stopped car on highway in lane, other car swerved into then out of lane, so known problem, but one we'll see occur increasingly often as the number of Teslas on the road increases. From that article there was also this which I hadn't heard about, but which we can expect to see more and more of if Tesla doesn't dial it back:
. . . In fact, Tesla recently agreed on a $13 million settlement with a former employee who was struck by the Model S while working. . . .
A more complete analysis of the accident in Norway is available in the original Forbes article, in which the Tesla owner credits A/P with saving his life (which may or may not be true, as the article's author points out):
May 26, 2019, 11:28am
Tesla On Autopilot Slams Into Stalled Car On Highway, Expect More Of This
https://www.forbes.com/sites/lanceeliot ... c07bdc4fe5
As I've told lorenfb, be careful about the FUD you read.

The $13 million lawsuit had nothing to do with A/P or Tesla, other than that it was a car driven by a Tesla contractor on Tesla's property: https://laist.com/2019/05/15/13_million ... actory.php
Okay, thanks. I was wondering why I hadn't heard of it until now.
Oils4AsphaultOnly wrote:As for the rate of A/P accidents, my claim about complacency recurring seems to be bearing out. It's been 1 year since the last crash into a stalled vehicle, even though the number of Autopilot-capable Teslas has doubled.
Considerably less than that, actually. Prior to this one, the most recent I could find was last August 25th. I've been unable to confirm whether A/P was on or not in that one - the driver said he thought it was, but as he was arrested for DUI (see our previous discussion about whether or not people may be choosing to drive drunk because they have A/P), that might just be an excuse. There were at least three such crashes into stopped firetrucks reported in the U.S. last year where A/P was claimed to have been in use: that one (in San Jose) plus one each in January (L.A.) and May (SLC, UT). See
WHY TESLA'S AUTOPILOT CAN'T SEE A STOPPED FIRETRUCK
https://www.wired.com/story/tesla-autop ... ash-radar/

Of course, there may be others we haven't heard about, here or in other countries. Anyway, if A/P was in use in all of these cases, January to May is 4 months, May to August is 3 months, and August to May is 9 months, for an average of 5 1/3 months between such crashes. Not that we should draw major conclusions about frequency from such a small data set.
Guy [I have lots of experience designing/selling off-grid AE systems, some using EVs but don't own one. Local trips are by foot, bike and/or rapid transit].

The 'best' is the enemy of 'good enough'. Copper shot, not Silver bullets.

GRA
Posts: 10868
Joined: Mon Sep 19, 2011 1:49 pm
Location: East side of San Francisco Bay

Re: Tesla's autopilot, on the road

Thu May 30, 2019 5:39 pm

Oils4AsphaultOnly wrote:
GRA wrote:
Oils4AsphaultOnly wrote: Ummm, no. Boeing screwed up on their UI design and pilot training. The software behaved exactly as it was programmed to do. This is a usability design issue. The only thing they have in common with Tesla's A/P is the word "autopilot".
By the same token, Tesla screwed up with the lack of "pilot training" as well as the system design and testing, as most people are completely unaware of A/Ps capabilities and limitations, so the system should be designed to prevent them (to the extent possible) from operating outside its limits. You have far more interest in the subject than most customers, yet you've shown that 3 years after Brown's death you didn't understand that the problem in that accident wasn't the lack of a target, it was that Tesla's AEB system as well as all other AEB systems at that time (and at least Tesla's still, as Brenner's accident confirms) don't recognize a crossing target as a threat. Being aware of this limitation, Cadillac chose to prevent SuperCruise's use on roads where such occurrences were not only possible but common. Tesla, having chalked up one A/P-enabled customer death in that situation, chose to do nothing despite being able to change A/P to easily avoid the problem, and thus enabled a virtually identical customer death almost 3 years later. In your opinion, which company shows a greater concern for customer and public safety through design?

Boeing's failure to track down the problem in their SPS after the first occurrence (and the FAA's lack of urgency in forcing them to do so) is the same sort of casual attitude to putting customers at risk as Tesla showed, but Tesla's case is more egregious because they could make a simple, inexpensive change that would have prevented a re-occurrence. Instead, as well as pointless Easter Eggs they put their effort into developing NoA which was inadequately tested prior to initial customer deployment, unquestionably less safe than a human driver in some common situations, and the 'fix' which was rolled out some months later is just as bad if not worse.
You're conflating multiple incongruent issues again. AEB is crash mitigation, not avoidance. All the examples of why AEB didn't brake were in small-overlap type crashes, where the correct maneuver is a steering correction, not emergency braking. https://www.caranddriver.com/features/a ... explained/

It has nothing to do with threat detection of a crossing vehicle (requires path prediction).

AEB systems can be capable of both crash avoidance and mitigation; avoidance is obviously preferred, mitigation is next best. For instance, CR from last November:
New Study Shows Automatic Braking Significantly Reduces Crashes and Injuries
https://www.consumerreports.org/automot ... ihs-study/
General Motors vehicles with forward collision warning (FCW) and automatic emergency braking (AEB) saw a big drop in police-reported front-to-rear crashes when compared with the same cars without those systems, according to a new report by the Insurance Institute for Highway Safety (IIHS).

Those crashes dropped 43 percent, the IIHS found, and injuries in the same type of crashes fell 64 percent. . . .

These findings were in line with previous findings by the IIHS. In earlier studies involving Acura, Fiat Chrysler, Honda, Mercedes-Benz, Subaru and Volvo vehicles, it found that the combination of FCW and AEB reduced front-to-rear crash rates by 50 percent for all crashes, and 56 percent for the same crashes with injuries.
As to crossing vehicles requiring path prediction, no, that's not necessary, although it's certainly helpful. As I pointed out previously, NHTSA found the issue with current AEBs in that situation is not one of target detection, it's classification. Current AEB radar systems are told not to brake for large, flat, zero-Doppler objects because they may be nothing more than highway signs on overpasses or off to the side on curves (or overpass supports, FTM); a human would recognize what they are and not brake for them, but current AEB systems aren't that smart. The Mobileye EyeQ visual system in use by Tesla and others at the time also made use of a library of objects, and the library didn't contain side views of such objects (apparently because that was beyond the capabilities of the system at the time).
Oils4AsphaultOnly wrote:A side skirt doesn't present any permitted corrective action other than emergency braking. So yes, it would've triggered AEB. Your reference video (from when you last brought this up and I failed to address it) isn't the same situation.
As pointed out just above and previously, the reason current AEB systems don't work for either crossing or stopped vehicles is the same, a classification rather than detection issue. Lack of side skirts for detection isn't the problem, teaching the AEB to classify a crossing vehicle as a threat instead of ignoring it as harmless is. Here's the product spec sheet for one such radar (note the vertical FoV, ample to pick up the entire side of a trailer and then some at detection distances): https://www.bosch-mobility-solutions.co ... -(mrr).pdf
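To make the classification-versus-detection point concrete, here's a toy Python sketch of the kind of stationary-object suppression rule I'm describing (the thresholds and field names are invented for illustration; real AEB firmware is obviously far more elaborate):

Code:

# Toy model: the radar detects the object fine, but a classification rule
# suppresses braking for large, essentially stationary returns so the car
# doesn't panic-brake for overhead signs and bridge supports. All numbers
# and names here are invented.
from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float           # distance to the object
    radial_speed_mps: float  # object's own speed along our line of sight, after
                             # removing our motion; a sign gantry is ~0, and so
                             # is a trailer crossing perpendicular to our path
    extent_m: float          # apparent width of the reflection

def is_braking_threat(r):
    if abs(r.radial_speed_mps) < 0.5 and r.extent_m > 3.0:
        return False         # classified as likely infrastructure -> ignored
    return r.range_m < 60.0  # otherwise treat nearby returns as threats

crossing_trailer = RadarReturn(range_m=50.0, radial_speed_mps=0.2, extent_m=15.0)
print(is_braking_threat(crossing_trailer))  # False: detected, but classified as harmless

The trailer is seen; it's the classification rule that throws it away, which is why better classification, not side skirts, is the fix.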
Oils4AsphaultOnly wrote:And just because you think Tesla has a simple fix doesn't make it a reality. GM's SuperCruise requires no high level logic other than, "is this road on my allowed map?", since GM geofences supercruise to ONLY mapped highways. Foul weather and construction zones are also excluded. You can inject human code into that situation, since it's a defined algorithm. You can't define your driving logic through a fixed algorithm if you want a car that can achieve full self-driving. That's why GM's supercruise will never advance past level 3 autonomy (can handle most well-defined traffic situations).
Are you suggesting that Teslas don't have the data to know which road they're on despite the lack of high-def digital mapping, when they can not only map out a route while choosing the type of roads to take and then follow that route, but also know the speed limit of the different sections of that route? That's ridiculous. But let's say that you're right, and A/P is incapable of doing that. Since limiting the system's use to those situations it is capable of dealing with, and preventing its use in those it can't handle, is obviously the safest approach, shouldn't any company be required to adopt that approach to minimize the risk to both its customers and the general public? You consider SuperCruise to be limited in where it can be used, and it is. To be specific, it's limited to ensure the safest possible performance, and I have no problem at all with that; indeed, I celebrate them for doing so, and wish Tesla acted likewise.
Oils4AsphaultOnly wrote:The driver versus pilot training analogy isn't even applicable, since sleeping at the wheel isn't a training issue.
Who was talking about sleeping at the wheel? Not I. I was talking about the lack of required initial training and testing in the system's capabilities and limitations, as well as the lack of recurrency training; lacking those, an autonomous system has to be idiot-proofed to a much higher level. We know that pilots, despite being a much more rigorously selected group than car buyers, still make mistakes due to misunderstanding automation system capabilities or through lack of practice, even though they are required to receive instruction and be tested on their knowledge, both initially and recurrently. As none of that is required of car buyers, you have to make it as hard as possible to misuse the system, which certainly includes preventing it from being used in situations outside of its capabilities.
Oils4AsphaultOnly wrote:
GRA wrote:
Oils4AsphaultOnly wrote: Waymo had been developing self-driving for almost a decade, and their car still gets into accidents and causes road rage with other drivers. At the rate they're going, they'll never have a self-driving solution that can work outside of the test area.
Why yes, they do get into accidents, as is inevitable. But let's compare, shall we? Waymo (then still Google's Chauffeur program IIRR) got into its first chargeable accident on a public road seven years after they'd first started testing them there, and that was a 2 mph fender-bender when a bus driver first started to change lanes and then switched back. No injuries. All of the accidents that have occurred in Arizona have so far been the other party's fault. They haven't had a single fatal at-fault accident, or even one which resulted in serious injuries.

Tesla had its first fatal A/P accident less than 7 months after A/P was introduced to the public. Actually, I think it was less than that, as we didn't know about the one in China at the time (the video I linked to earlier showing the Tesla rear-ending the street sweeper), and Tesla has had 2 more that we know of chargeable to A/P.

Road rage is inevitable as humans interact with AVs that obey all traffic laws, but as that is one of the major reasons AVs will be safer than humans, it's just something that will have to be put up with during the transition as people get used to them. The alternative, as Tesla is doing, is to allow AVs to violate traffic laws, and that's indefensible in court and ultimately in the court of public opinion. As soon as a Tesla or any other AV kills or injures someone while violating a law, whether speeding, passing on the right, or what have you, the company will get hammered both legally and in PR. Hopefully the spillover won't take more responsible companies with it, and only tightened gov't regs will result.
Waymo hasn't killed anyone, because it hasn't driven fast enough to do so. At 35mph, any non-pedestrian accidents would be non-fatal. Granted they've tackled the more difficult task of street driving, but their accident stats aren't directly comparable to Tesla's. I only brought them up to highlight the difference in scale of where their systems can be applied.
Who says Waymo has only tested on public roads at slow speeds? I mentioned previously that while they were testing their ADAS systems (in 2012, before abandoning any such system as not being safer than a human), including on freeways, they observed exactly the same human misbehavior that A/P users have exhibited from the moment of its introduction up to the present. That included one employee fast asleep on the freeway. A correction, in my earlier reference I mis-remembered that the car had been going 65 for 1/2 hour. Checked my source, and I see it was 60 mph for 27 minutes, which is certainly fast enough to be fatal. They've continued testing on freeways since then, but have only deployed AV systems for public use where speeds are more limited (still with safety drivers, although that essentially serves as elephant repellent), precisely because they consider that it's necessary to walk before they run. I am wholly in favor of this approach.
Oils4AsphaultOnly wrote:
GRA wrote:
Oils4AsphaultOnly wrote:One thing that people still seem to misunderstand and I suspect you do too, is the claim that Tesla's FSD will be "feature-complete" by the end of the year. "Feature-complete" is a software development term indicating that the functional capabilities have been programmed in, but it's not release ready yet. Usually at this point in software, when under an Agile development cycle, the product is released in alpha, and bugs are noted and released in the next iteration (usually iterations are released weekly, or even daily). After certain milestones have been reached, it will be considered beta, and after that RC1 (release candidate).

Under this development cycle, you'll see news about FSD being tested on the roads or in people's cars (who have signed up to be part of the early access program). That isn't considered the public availability of FSD! You might hate it, but there's no substitute for real-world testing.
I have no problem whatsoever with real-world testing, indeed, that's exactly what I, CR and every other consumer group calling for better validation testing before release to the general public are demanding, along with independent review etc. Please re-read David Friedman's statement:
"Tesla is showing what not to do on the path toward self-driving cars: release increasingly automated driving systems that aren’t vetted properly. Before selling these systems, automakers should be required to give the public validated evidence of that system’s safety—backed by rigorous simulations, track testing, and the use of safety drivers in real-world conditions."
Funny. I wrote that to mean Tesla's method of iterating improvements and functionality into A/P, then NoA, and eventually FSD. You read it to mean Waymo's method of iterating from one geo-fenced city at a time.

Which just brings us all back to my old point of speed of deployment. Waymo's method would take YEARS (if not decades) to successfully deploy, and during that time, thousands of lives will be lost that could've been saved with a method that reaches FSD faster. At least 3 lives have been saved (all those DUI arrests) due to A/P so far, not counting any unreported ones where the driver made it home without being arrested. Eventually, you'll see things my way, you just don't know it yet. ;-)
And that brings me back to my and CR's and every other safety organization's point, so I'll repeat it:
[David Friedman, former Acting NHTSA Administrator, now employed by CR] “instead of treating the public like guinea pig[s], Tesla must clearly demonstrate a driving automation system that is substantially safer than what is available today, based on rigorous evidence that is transparently shared with regulators and consumers, and validated by independent third-parties. In the meantime, the company should focus on making sure that proven crash avoidance technologies on Tesla vehicles, such as automatic emergency braking with pedestrian detection, are as effective as possible.”
Tesla's claims of increased safety remain unverified. As more and more Teslas are out there and they get into more and more accidents, I imagine the costs of fighting all the A/P lawsuits as well as the resulting big payouts will force them to clean up their act, if regulators don't. Until they (and any other company making such claims) do that, it's so much hot air. As it is, their ADAS system's design is inherently less safe than what currently appears to be the best extant, Supercruise, and needs to be improved to bring it up to something approaching that level. Government regulation mandating minimum acceptable equipment/performance standards is needed in this area, much as it is in aviation e.g. RNP (Required Navigation Performance) or RVSM (Reduced Vertical Separation Minimum).

Aside from limiting ADAS usage to limited-access freeways until such time as Tesla (or any company) can show that their system is capable of safely expanding beyond them, they need to shorten the hands-off warning time, from 24 seconds down to something around SuperCruise's 4 seconds (somewhere way uptopic, I said I thought anything over 3 seconds was excessive if you're serious about keeping drivers engaged, and would still like to see that). For comparison, Google used a 6 second warning time back in 2012 in their ADAS system, and as we know Tesla essentially didn't have one at all until after the Brown crash, and it remains far too long*. Also, since we know that steering wheel weight/torque sensors can be easily fooled and that people are in fact doing so, adding eye-tracking cameras and the appropriate computer/software, or other equipment that can be shown to be of equal or greater effectiveness in keeping drivers engaged, should be required. Personally, if I thought it was safe and legal I'd be in favor of the "pay attention" warning being given by a small shock to the driver, but that's obviously not going to happen. Naturally, all such systems must collect data and have it publicly accessible so that actual performance and safety benefits can be compared, so as to allow regulations to be improved and safety increased.
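For illustration, the kind of escalating hands-off watchdog I have in mind could be as simple as the sketch below (the 4-second figure is the SuperCruise-like number discussed above; the other stages and timings are my own invention, not any manufacturer's actual behavior):

Code:

# Hypothetical escalating driver-attention watchdog. Only the 4 s first-warning
# figure comes from the discussion above; everything else is illustrative.
WARN_AFTER_S = 4.0        # first alert if no hands-on / eyes-on signal
ESCALATE_AFTER_S = 8.0    # louder, more insistent alert
DISENGAGE_AFTER_S = 12.0  # give up: slow down, hazards on, hand back control

def monitor_step(seconds_since_attention):
    if seconds_since_attention < WARN_AFTER_S:
        return "ok"
    if seconds_since_attention < ESCALATE_AFTER_S:
        return "visual + audible warning"
    if seconds_since_attention < DISENGAGE_AFTER_S:
        return "insistent warning, prepare to hand back control"
    return "controlled slowdown, hazards on, disengage"

for t in (2, 5, 9, 15):
    print(t, "->", monitor_step(t))

The attention signal could come from a torque sensor, an eye-tracking camera, or both; the point is the timer, not the sensor.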

We've completed yet another argument cycle, so as you gave me the last word last round, you get the last word this one. I'm sure another round will start in the near future.

*One thing, I asked uptopic how it was possible for Brenner to engage A/P and be going 13 mph over the speed limit when A/P was supposed to have been modified to limit its use to no more than 5 mph over the speed limit. I never got an answer. ISTM that there are three possibilities, but this is one question where hands-on knowledge of current A/P is definitely valuable, and I lack that.

Anyway, can A/P be engaged even though the car is traveling at a speed well above the speed limit + 5 mph, with A/P then gradually slowing to that speed? Given the short time span between engagement and Brenner's crash, that might explain how he was able to engage it and be going that fast at impact.

Or should it not have been possible to engage A/P while traveling so much over A/P's allowed speed (a far safer approach), but for some reason the system failed to work as designed?

Or has Tesla eliminated the speed limit + 5 mph limitation they added after Brown's crash, and I missed it?
Guy [I have lots of experience designing/selling off-grid AE systems, some using EVs but don't own one. Local trips are by foot, bike and/or rapid transit].

The 'best' is the enemy of 'good enough'. Copper shot, not Silver bullets.

Oils4AsphaultOnly
Posts: 686
Joined: Sat Oct 10, 2015 4:09 pm
Delivery Date: 20 Nov 2016
Leaf Number: 313890
Location: Arcadia, CA

Re: Tesla's autopilot, on the road

Fri May 31, 2019 2:31 pm

GRA wrote:
Oils4AsphaultOnly wrote:
GRA wrote:
By the same token, Tesla screwed up with the lack of "pilot training" as well as the system design and testing, as most people are completely unaware of A/Ps capabilities and limitations, so the system should be designed to prevent them (to the extent possible) from operating outside its limits. You have far more interest in the subject than most customers, yet you've shown that 3 years after Brown's death you didn't understand that the problem in that accident wasn't the lack of a target, it was that Tesla's AEB system as well as all other AEB systems at that time (and at least Tesla's still, as Brenner's accident confirms) don't recognize a crossing target as a threat. Being aware of this limitation, Cadillac chose to prevent SuperCruise's use on roads where such occurrences were not only possible but common. Tesla, having chalked up one A/P-enabled customer death in that situation, chose to do nothing despite being able to change A/P to easily avoid the problem, and thus enabled a virtually identical customer death almost 3 years later. In your opinion, which company shows a greater concern for customer and public safety through design?

Boeing's failure to track down the problem in their SPS after the first occurrence (and the FAA's lack of urgency in forcing them to do so) is the same sort of casual attitude to putting customers at risk as Tesla showed, but Tesla's case is more egregious because they could make a simple, inexpensive change that would have prevented a re-occurrence. Instead, as well as pointless Easter Eggs they put their effort into developing NoA which was inadequately tested prior to initial customer deployment, unquestionably less safe than a human driver in some common situations, and the 'fix' which was rolled out some months later is just as bad if not worse.
You're conflating multiple incongruent issues again. AEB is crash mitigation, not avoidance. All the examples of why AEB didn't brake were in small-overlap type crashes, where the correct maneuver is a steering correction, not emergency braking. https://www.caranddriver.com/features/a ... explained/

It has nothing to do with threat detection of a crossing vehicle (requires path prediction).

AEB systems can be capable of both crash avoidance and mitigation; avoidance is obviously preferred, mitigation is next best. For instance, CR from last November:
New Study Shows Automatic Braking Significantly Reduces Crashes and Injuries
https://www.consumerreports.org/automot ... ihs-study/
General Motors vehicles with forward collision warning (FCW) and automatic emergency braking (AEB) saw a big drop in police-reported front-to-rear crashes when compared with the same cars without those systems, according to a new report by the Insurance Institute for Highway Safety (IIHS).

Those crashes dropped 43 percent, the IIHS found, and injuries in the same type of crashes fell 64 percent. . . .

These findings were in line with previous findings by the IIHS. In earlier studies involving Acura, Fiat Chrysler, Honda, Mercedes-Benz, Subaru and Volvo vehicles, it found that the combination of FCW and AEB reduced front-to-rear crash rates by 50 percent for all crashes, and 56 percent for the same crashes with injuries.
As to crossing vehicles requiring path prediction, no, that's not necessary, although it's certainly helpful. As I pointed out previously, NHTSA found the issue with current AEBs in that situation is not one of target detection, it's classification. Current AEB radar systems are told not to brake for large, flat, zero-Doppler objects because they may be nothing more than highway signs on overpasses or off to the side on curves (or overpass supports, FTM); a human would recognize what they are and not brake for them, but current AEB systems aren't that smart. The Mobileye EyeQ visual system in use by Tesla and others at the time also made use of a library of objects, and the library didn't contain side views of such objects (apparently because that was beyond the capabilities of the system at the time).
Oils4AsphaultOnly wrote:A side skirt doesn't present any permitted corrective action other than emergency braking. So yes, it would've triggered AEB. Your reference video (from when you last brought this up and I failed to address it) isn't the same situation.
As pointed out just above and previously, the reason current AEB systems don't work for either crossing or stopped vehicles is the same, a classification rather than detection issue. Lack of side skirts for detection isn't the problem, teaching the AEB to classify a crossing vehicle as a threat instead of ignoring it as harmless is. Here's the product spec sheet for one such radar (note the vertical FoV, ample to pick up the entire side of a trailer and then some at detection distances): https://www.bosch-mobility-solutions.co ... -(mrr).pdf
Tesla split with Mobileye back in 2016. There are 2x as many cars that don't use it for object classification. Most Teslas are currently using the Nvidia GPU and software to handle object detection AND classification, while everyone else relies on Mobileye. Although the root cause might still be the same, you can't rely on GM's results and past NHTSA findings to determine what flaw needs fixing in Teslas.

Going forward, thanks to the processing capabilities of their new "TPU", there will be a different software version that handles object detection and classification. Again, because it's not the same, results may vary, so its performance needs to be determined on its own.
GRA wrote:
Oils4AsphaultOnly wrote:And just because you think Tesla has a simple fix doesn't make it a reality. GM's SuperCruise requires no high level logic other than, "is this road on my allowed map?", since GM geofences supercruise to ONLY mapped highways. Foul weather and construction zones are also excluded. You can inject human code into that situation, since it's a defined algorithm. You can't define your driving logic through a fixed algorithm if you want a car that can achieve full self-driving. That's why GM's supercruise will never advance past level 3 autonomy (can handle most well-defined traffic situations).
Are you suggesting that Teslas don't have the data to know which road they're on despite the lack of high-def digital mapping, when they can not only map out a route while choosing the type of roads to take and then follow that route, but also know the speed limit of the different sections of that route? That's ridiculous. But let's say that you're right, and A/P is incapable of doing that. Since limiting the system's use to those situations it is capable of dealing with, and preventing its use in those it can't handle, is obviously the safest approach, shouldn't any company be required to adopt that approach to minimize the risk to both its customers and the general public? You consider SuperCruise to be limited in where it can be used, and it is. To be specific, it's limited to ensure the safest possible performance, and I have no problem at all with that; indeed, I celebrate them for doing so, and wish Tesla acted likewise.
No, I'm saying you don't inject human-written code into a machine-learned algorithm. You're thinking it's all procedural code (e.g., if-then-else), when the code for the driving (steering, accelerating, braking) was most likely machine-learned (the object detection/classification is definitely self-taught - that's what the entire Autonomy Day presentation was about).

For example, reading speed limit signs was a training task: present the machine with thousands of pictures of what a 35 mph speed limit sign looks like, and the neural net devises its own way of interpreting the camera data (streams of RGB values on a pixel grid) to ferret out the speed limit. There is no image classification for "street" versus "highway" versus "freeway". So telling A/P not to engage on the road it's driving on entails oversight code that doesn't exist. NoA, on the other hand, does seem to have some sort of oversight code, because it actively warns about leaving the freeway and disengages itself near the head of an off-ramp, after which A/P takes over to handle staying within the lane and keeping distance from the car ahead.
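To put that distinction in code terms, here's a toy sketch (the stub stands in for a trained network; none of this is Tesla's actual software):

Code:

# The learned part: a trained network maps raw camera pixels to an answer.
# There's no hand-written rule inside it that you can edit to add a new check.
class SpeedSignNetStub:            # stand-in for a real trained model
    def predict(self, camera_frame):
        return 35                  # pretend it read a 35 mph sign from the RGB grid

speed_sign_net = SpeedSignNetStub()

def read_speed_limit(camera_frame):
    # camera_frame would be an H x W x 3 grid of RGB values
    return speed_sign_net.predict(camera_frame)

# The procedural part: oversight like "don't engage off the freeway" is trivial
# to write -- but only if the perception stack actually outputs a road-class
# signal for the if-statement to test. If no such label exists, there's nothing
# to hang the rule on.
def may_engage(road_class):
    return road_class == "freeway"

print(read_speed_limit([[(0, 0, 0)]]))                      # 35 (from the stub)
print(may_engage("freeway"), may_engage("surface street"))  # True False

The oversight rule is one line of logic; producing the road_class signal for it to test is the hard part.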

GRA wrote:
We've completed yet another argument cycle, so as you gave me the last word last round, you get the last word this one. I'm sure another round will start in the near future.
I've already said my piece. We disagreed. There's no last word to be had.
GRA wrote: *One thing: I asked uptopic how it was possible for Brenner to engage A/P and be going 13 mph over the speed limit, when A/P was supposed to have been modified to limit its use to no more than 5 mph over the speed limit. I never got an answer. ISTM there are three possibilities, but this is one question where hands-on knowledge of current A/P is definitely valuable, and I lack that.

Anyway, can A/P be engaged while the car is traveling at a speed well above the speed limit + 5 mph, and will it then gradually slow to that speed? Given the short time between engagement and Brenner's crash, that might explain how he was able to engage it and still be going that fast at impact.

Or should it not have been possible to engage A/P while traveling so much over A/P's allowed speed (a far safer approach), but for some reason the system failed to work as designed?

Or has Tesla eliminated the speed limit + 5 mph limitation they added after Brown's crash, and I missed it?
The A/P speed limit default is the posted speed limit. That default can be adjusted via the menu anywhere from 20 mph under to 30 mph over the posted limit; the recommended setting is no more than +5 mph over. I've never tried setting the default to anything above +5 mph, so I don't know if A/P would even accept it. BUT, in addition to changing the default, the driver can also override the A/P speed AFTER A/P has been enabled. The driver is always ultimately in control and has final say over everything (even with Obstacle Acceleration Limit set). The absolute max A/P speed is 90 mph (TX has some fairly high speed limits).

And yes, you can engage A/P at any speed all the way up to the absolute max of 90 mph. A/P's job is to keep within the lane lines and maintain a relative gap to the moving vehicle ahead. If you engage A/P while exceeding the configured default speed (but still under 90 mph), it will hold that speed until something changes it (a posted speed limit change, traffic). If the A/P speed limit configuration is set to absolute mode instead of relative, I believe it would NOT adjust the speed, but I haven't confirmed that. All of this _could_ change with the next A/P update, but it's not likely to.

Since they didn't say that Brenner raised the A/P speed limit, he was probably already traveling at 68 mph when he engaged A/P and never adjusted it before the crash.
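Putting that description into a toy model (a reconstruction of the behaviour described in this post, not Tesla firmware - the function and constant names are made up):

Code:

ABS_MAX_MPH = 90                             # stated absolute A/P ceiling
OFFSET_MIN_MPH, OFFSET_MAX_MPH = -20, 30     # stated menu range relative to the posted limit

def set_speed_at_engagement(current_mph: float, posted_mph: float, offset_mph: int) -> float:
    """Set speed at the moment A/P is engaged, per the behaviour described above."""
    offset_mph = max(OFFSET_MIN_MPH, min(OFFSET_MAX_MPH, offset_mph))
    default_target = posted_mph + offset_mph
    # Engaging while already going faster than the default target holds the current speed.
    target = max(default_target, current_mph)
    return min(target, ABS_MAX_MPH)

# Speculated Brenner scenario: 55 mph zone, +5 mph offset, engaged at 68 mph.
print(set_speed_at_engagement(current_mph=68, posted_mph=55, offset_mph=5))   # 68.0

Under that model, engaging at 68 mph in a 55 mph zone simply holds 68 mph, which would answer the earlier question without any change to the +5 mph default.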
:: Model 3 LR :: acquired 9 May '18
:: Leaf S30 :: build date: Sep '16 :: purchased: Nov '16
100% Zero transportation emissions (except when I walk) and loving it!

lorenfb
Posts: 2243
Joined: Tue Dec 17, 2013 10:53 pm
Delivery Date: 22 Nov 2013
Leaf Number: 416635
Location: SoCal

Re: Tesla's autopilot, on the road

Fri May 31, 2019 4:21 pm

Oils4AsphaultOnly wrote:No, I'm saying you don't inject human code into a self-taught, machine-learned algorithm. You're thinking it's all procedural code (e.g., if-then-else), when the code for the driving itself (steering, accelerating, braking) was most likely machine-learned (the object detection/classification is definitely self-taught - that's what the entire Autonomy Day presentation was about).
Yes, but an A/P oversight system running in parallel could disable A/P under certain conditions. Sorry, Tesla, but even Level 5 (FSD) will need an oversight/override (fail-safe) system.
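A minimal sketch of the kind of parallel oversight/fail-safe being proposed here (entirely hypothetical, purely to illustrate the proposal - no such Tesla component is documented, and the signals and actions below are invented):

Code:

from dataclasses import dataclass

@dataclass
class SystemStatus:
    on_supported_road: bool
    sensors_healthy: bool
    driver_responding: bool   # e.g. hands-on-wheel torque seen within the nag window

def oversight_action(status: SystemStatus) -> str:
    """Independent monitor running alongside the driving stack; it can override it."""
    if not status.sensors_healthy:
        return "controlled_stop"           # degrade to a minimal-risk manoeuvre
    if not status.on_supported_road or not status.driver_responding:
        return "warn_then_disengage"
    return "continue"

print(oversight_action(SystemStatus(on_supported_road=False,
                                    sensors_healthy=True,
                                    driver_responding=True)))  # warn_then_disengage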
Oils4AsphaultOnly wrote:
For example, reading speed limit signs was taught by presenting the machine with thousands of pictures of what, say, a 35 mph speed limit sign looks like, and letting the neural net devise the code for interpreting the camera data (streams of RGB values on a pixel grid) to ferret out the speed limit. There is no image classification for "street" versus "highway" versus "freeway", so telling A/P not to engage on the road it's driving on would require oversight code that doesn't exist. NoA, on the other hand, does seem to have some sort of oversight code, because it actively warns about leaving the freeway and disengages itself near the head of an off-ramp. After that, plain A/P takes over to handle staying within the lane and keeping distance from the car ahead.
A key ongoing Tesla A/P shortcoming, to the detriment of A/P users! Again, the arrogance/naivete of Tesla in not having resolved this is incredible, given the so-called years of A/P "advancements". Another problem for the NTSB to address, besides the Boeing 737 autopilot safety problem?
#1 Leaf SL MY 9/13: 74K miles, 48 Ahrs, 5.2 miles/kWh (average), Hx=70, SOH=78, L2 - 100% > 1000, temp < 95F, (DOD) > 20 Ahrs
#2 Leaf SL MY 12/18: 4.5K miles, 115 Ahrs, 5.5 miles/kWh (average), Hx=98, SOH=99, DOD > 20%, temp < 105F

Oils4AsphaultOnly
Posts: 686
Joined: Sat Oct 10, 2015 4:09 pm
Delivery Date: 20 Nov 2016
Leaf Number: 313890
Location: Arcadia, CA

Re: Tesla's autopilot, on the road

Sat Jun 01, 2019 10:58 am

lorenfb wrote:
Oils4AsphaultOnly wrote:No, I'm saying you don't inject human code into a self-taught, machine-learned algorithm. You're thinking it's all procedural code (e.g., if-then-else), when the code for the driving itself (steering, accelerating, braking) was most likely machine-learned (the object detection/classification is definitely self-taught - that's what the entire Autonomy Day presentation was about).
Yes, but an A/P oversight system running in parallel could disable A/P under certain conditions. Sorry, Tesla, but even Level 5 (FSD) will need an oversight/override (fail-safe) system.
That is flat-out suicidal. You don't hand the computer full control and then yank it away under "certain conditions". You can do that with ADAS, because the human driver is supposed to be in control, but at Level 5, where the passengers could be asleep?! Suggesting it is reckless and ignorant.
lorenfb wrote:
Oils4AsphaultOnly wrote:
For example, reading speed limit signs was taught by presenting the machine with thousands of pictures of what, say, a 35 mph speed limit sign looks like, and letting the neural net devise the code for interpreting the camera data (streams of RGB values on a pixel grid) to ferret out the speed limit. There is no image classification for "street" versus "highway" versus "freeway", so telling A/P not to engage on the road it's driving on would require oversight code that doesn't exist. NoA, on the other hand, does seem to have some sort of oversight code, because it actively warns about leaving the freeway and disengages itself near the head of an off-ramp. After that, plain A/P takes over to handle staying within the lane and keeping distance from the car ahead.
A key ongoing Tesla A/P shortcoming, to the detriment of A/P users! Again, the arrogance/naivete of Tesla in not having resolved this is incredible, given the so-called years of A/P "advancements". Another problem for the NTSB to address, besides the Boeing 737 autopilot safety problem?
I thought you were nauseous about the A/P discussion? Anyway, I'm going to ignore your rant, because I have nothing polite or constructive to add.
:: Model 3 LR :: acquired 9 May '18
:: Leaf S30 :: build date: Sep '16 :: purchased: Nov '16
100% Zero transportation emissions (except when I walk) and loving it!
