Tesla's autopilot, on the road

cwerdna said:
NTSB: Autopilot was in use before Tesla hit semitrailer
Even better is the electrek article, which includes the whole two-page PDF that is the NTSB preliminary report: https://electrek.co/2019/05/16/tesla-autopilot-fatal-crash-truck-investigation-preliminary-report/

The NTSB wrote:
The driver engaged the Autopilot about 10 seconds before the collision. From less than 8 seconds before the crash to the time of impact, the vehicle did not detect the driver’s hands on the steering wheel.
 
FWIW, an interesting comment (posted by user DML at electrek):
At 68 mph the car traveled about 330 feet in the 10 seconds that the autopilot was engaged. At 68 mph it should take about 180 feet to stop the car; add another 60 feet for human response time and you get about 240 feet to stand still. An engaged driver might have avoided the accident or at the very least reduced its severity.

I am not a fan of any system or device that makes a driver think that they do not need to pay attention while driving.
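For reference, here's a rough back-of-the-envelope version of that stopping-distance math. It's only a sketch: the ~0.85 g deceleration and 0.6 s reaction time are assumptions picked to roughly match DML's 180 ft and 60 ft figures, not values from the NTSB report.

```python
# Back-of-the-envelope check of the stopping-distance argument above.
# Assumed values (not from the NTSB report): ~0.85 g braking, 0.6 s reaction time.

MPH_TO_FPS = 5280 / 3600              # 1 mph = 1.4667 ft/s
G = 32.2                              # gravitational acceleration, ft/s^2

speed_mph = 68
decel = 0.85 * G                      # assumed hard braking, ft/s^2
reaction_s = 0.6                      # assumed driver reaction time, s

v = speed_mph * MPH_TO_FPS            # ~99.7 ft/s
braking_dist = v**2 / (2 * decel)     # ~182 ft
reaction_dist = v * reaction_s        # ~60 ft
total_dist = braking_dist + reaction_dist
total_time = reaction_s + v / decel   # ~4.2 s

print(f"Stopping from {speed_mph} mph: ~{total_dist:.0f} ft in ~{total_time:.1f} s")
```

In other words, under those assumptions an attentive driver needs roughly 240 feet and a bit over 4 seconds to come to a stop from 68 mph, comfortably inside a 10-second window.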
 
jlv said:
FWIW, an interesting comment (posted by user DML at electrek):
At 68 mph the car traveled about 330 feet in the 10 seconds that the autopilot was engaged. At 68 mph it should take about 180 feet to stop the car; add another 60 feet for human response time and you get about 240 feet to stand still. An engaged driver might have avoided the accident or at the very least reduced its severity.

I am not a fan of any system or device that makes a driver think that they do not need to pay attention while driving.

Yes, and this is what GRA points to about enabling complacency. It's also what I'd point to as abuse of A/P.

Drivers who take unnecessary risks (drunk, sex, or lazy) will do so even without A/P. A/P is not meant to solve those driver issues. And note that in none of the A/P accidents has anyone other than the driver been killed.
 
Someone found the location of the Jeremy Banner accident: https://teslamotorsclub.com/tmc/posts/3664370/

Note the intersection? Not the place to not pay attention!
 
jlv said:
FWIW, an interesting comment (posted by user DML at electrek):
At 68 mph the car traveled about 330 feet in the 10 seconds that the autopilot was engaged. At 68 mph it should take about 180 feet to stop the car; add another 60 feet for human response time and you get about 240 feet to stand still. An engaged driver might have avoided the accident or at the very least reduced its severity.

I am not a fan of any system or device that makes a driver think that they do not need to pay attention while driving.


If this statement is true, then the driver needed to start braking less than 2 seconds after he engaged Autopilot. The need to brake should have already been apparent to the driver at this point, barring any visibility issues, turns, hills, or the like. So sad.
 
Direct link to the NTSB preliminary report is here: https://www.ntsb.gov/investigations/AccidentReports/Pages/HWY19FH008-preliminary-report.aspx

As expected, this crash was virtually identical to the Brown one almost 3 years previous. The NTSB Chairman's comments summing up that accident could be copied verbatim for this one:
“While automation in highway transportation has the potential to save tens of thousands of lives, until that potential is fully realized, people still need to safely drive their vehicles,” said NTSB Chairman Robert L. Sumwalt III. “Smart people around the world are hard at work to automate driving, but systems available to consumers today, like Tesla’s ‘Autopilot’ system, are designed to assist drivers with specific tasks in limited environments. These systems require the driver to pay attention all the time and to be able to take over immediately when something goes wrong. System safeguards, that should have prevented the Tesla’s driver from using the car’s automation system on certain roadways, were lacking and the combined effects of human error and the lack of sufficient system safeguards resulted in a fatal collision that should not have happened,” said Sumwalt.

BTW, the speed limit on the road is 55, yet the Model 3 was doing 68. A/P was supposedly modified after the Brown crash to limit set speed to no more than 5 mph over* the speed limit, so how was it possible to even engage it in this case? Meanwhile, there are lots of current reports on TMC of safety glitches with the most recent version, 2019.12.1.2.


*Itself a safety flaw, as one of the ways that AVs will make driving safer is that unlike humans, they'll obey all traffic regulations. Of course, many speed limits are set below the road's design speed, so once AVs have become the majority we'll likely see speed limits raised, or at least adjusted in real-time to account for changing conditions.
 
Oils4AsphaultOnly said:
jlv said:
FWIW, an interesting comment (posted by user DML at electrek):
At 68 mph the car traveled about 330 feet in the 10 seconds that the autopilot was engaged. At 68 mph it should take about 180 feet to stop the car; add another 60 feet for human response time and you get about 240 feet to stand still. An engaged driver might have avoided the accident or at the very least reduced its severity.

I am not a fan of any system or device that makes a driver think that they do not need to pay attention while driving.

Yes, and this is what GRA points to about enabling complacency. It's also what I'd point to as abuse of A/P.

Drivers who take unnecessary risks (drunk, sex, or lazy) will do so even without A/P. A/P is not meant to solve those driver issues. And note that in none of the A/P accidents has anyone other than the driver been killed.
The lack of non-occupant injuries or fatalities to date is a matter of chance. In the Brown crash, either due to good zoning regs or luck, the gas station was on the near rather than the far corner of the intersection. If that hadn't been the case, after under-running the trailer Brown's Tesla would have passed right through the fuel pumps and/or the convenience store instead of an open field. The fact that neither of the truck drivers was injured in either of the semi crashes was due to the Tesla hitting the trailer pretty much dead center, and Banner's car fortunately stopped in the median instead of crossing over into oncoming traffic.

In the Huang case, two other cars were struck by his Model X (or parts thereof), but fortunately there were no injuries in one and only minor injuries in the other car. And of course, the same behavior is still occurring almost a year later:
Dashcam video shows Tesla steering toward lane divider—again
Tesla Dashcam video highlights weakness of Tesla's testing regime.

TIMOTHY B. LEE - 3/22/2019, 6:02 AM
https://arstechnica.com/cars/2019/03/dashcam-video-shows-tesla-steering-toward-lane-divider-again/
 
GRA said:
Oils4AsphaultOnly said:
jlv said:
FWIW, an interesting comment (posted by user DML at electrek):

Yes, and this is what GRA points to about enabling complacency. It's also what I'd point to as abuse of A/P.

Drivers who take unnecessary risks (drunk, sex, or lazy) will do so even without A/P. A/P is not meant to solve those driver issues. And note that in none of the A/P accidents has anyone other than the driver been killed.
The lack of non-occupant injuries or fatalities to date is a matter of chance. In the Brown crash, either due to good zoning regs or luck, the gas station was on the near rather than the far corner of the intersection. If that hadn't been the case, after under-running the trailer Brown's Tesla would have passed right through the fuel pumps and/or the convenience store instead of an open field. The fact that neither of the truck drivers was injured in either of the semi crashes was due to the Tesla hitting the trailer pretty much dead center, and Banner's car fortunately stopped in the median instead of crossing over into oncoming traffic.

In the Huang case, two other cars were struck by his Model X (or parts thereof), but fortunately there were no injuries in one and only minor injuries in the other car. And of course, the same behavior is still occurring almost a year later:
Dashcam video shows Tesla steering toward lane divider—again
Tesla Dashcam video highlights weakness of Tesla's testing regime.

TIMOTHY B. LEE - 3/22/2019, 6:02 AM
https://arstechnica.com/cars/2019/03/dashcam-video-shows-tesla-steering-toward-lane-divider-again/

In both the Brown and Banner cases, if either the driver had paid attention or the truck had a side skirt, neither would be dead.

The Timothy Lee report was for A/P, not NoA (Navigate on Autopilot) - which would've made the decision of which "lane" to stay in. It's NoA that would've saved Walter Huang's life.
 
Oils4AsphaultOnly said:
It's NoA that would've saved Walter Huang's life.
Perhaps "would've" => "might have". I'm personally not so sure NoA would have made a difference in following the lines there.
 
jlv said:
Oils4AsphaultOnly said:
It's NoA that would've saved Walter Huang's life.
Perhaps "would've" => "might have". I'm personally not so sure NoA would have made a difference in following the lines there.

NoA doesn't just stay within the lane lines, it follows the map. When you trigger NoA, the "focus" (blue lines) of A/P switches from the lane lines to the middle of the lane, indicating the intended path it will follow (as dictated by the navigation system). It still doesn't recognize lane barriers as barriers, only that it's no longer on the intended path, so it would veer left/right before reaching the point where it could get confused by the center being "a lane" - the "decision point" I mentioned in the past. Because it's still NOT full self-driving (FSD), a sleeping Walter wouldn't have crashed into the center barrier, but he could still have died by crashing into a firetruck stopped in the left lane.

I know this sounds like very little improvement, but for an ADAS, it's huge!
 
The NTSB wrote:
The driver engaged the Autopilot about 10 seconds before the collision. From less than 8 seconds before the crash to the time of impact, the vehicle did not detect the driver’s hands on the steering wheel.

Sometimes I wonder if this is 'suicide by Tesla', similar to the many 'suicide by cop' cases we see on the news.
 
Oils4AsphaultOnly said:
In both the Brown and Banner cases, if either the driver had paid attention or the truck had a side skirt, neither would be dead.
A side skirt isn't going to stop a 4,800 lb. car traveling at highway speed, or much else for that matter. They're aero mods, not underrun protection. Side underrun guards are designed to stop pedestrians, bikes and motorcyclists, not cars. And of course, even if there were underrun guards that could have stopped either car, the drivers would almost certainly be dead anyway; NHTSA's frontal crash test is at 35 mph, and the IIHS uses 40. Brown was doing 74 mph, and Banner 68. Drivers not paying attention is exactly the problem (see below).
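To put those speeds in perspective, crash energy scales with the square of speed. Here's a quick illustrative calculation using the figures above (illustrative only, since underride geometry is nothing like a standard frontal test):

```python
# Kinetic energy scales with v^2, so vehicle mass cancels out of the comparison.
test_speeds_mph = {"NHTSA frontal test": 35, "IIHS frontal test": 40}
crash_speeds_mph = {"Brown": 74, "Banner": 68}

for who, v_crash in crash_speeds_mph.items():
    for test, v_test in test_speeds_mph.items():
        ratio = (v_crash / v_test) ** 2
        print(f"{who} at {v_crash} mph carried ~{ratio:.1f}x the energy "
              f"of the {test} at {v_test} mph")
```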

Oils4AsphaultOnly said:
The Timothy Lee report was for A/P, not NoA (Navigate on Autopilot) - which would've made the decision of which "lane" to stay in. It's NoA that would've saved Walter Huang's life.
Assuming it worked correctly, and there are plenty of reports of that not being the case. Let's take a look at how well it worked when introduced 3 years after A/P was first made available to the public, as reviewed by a long-time TMC poster and multiple-Tesla owner who's far more aware of the capabilities and limitations of A/P etc. than the typical buyer - this is from last November: https://teslamotorsclub.com/tmc/threads/navigate-on-autopilot-is-useless-2018-42-3.134137/

TLDR version: This is the most useless thing I've ever seen. I've seen some whoppers, but this takes the cake.

Let's do a rundown of what I think was improved:

Autosteer in highway interchanges and off-ramps was improved. It would stay in the ramp without too much trouble, while prior it would freak out and demand the driver intervene for sharper curves. (We'll ignore that it was taking the turns at ~15 MPH lower than the suggested speeds, but baby steps I suppose).

Visual indication of what travel lane was needed for upcoming interchanges was reasonable and a good addition to normal navigation.
I do like the path visualization when lane changes are initiated.

It does usually try to take exits without intervention (more on this later) which is a step in the right direction for on-ramp to off-ramp autopilot.

So, some improvements I suppose.

Now for the bad.

I now fully understand why Tesla makes it require confirmation. If it had been allowed to make the suggested lane changes on its own without confirmation, I'd likely have died 10-20x if I didn't take control every time.

AP1 and AP2 previously did *okay* when following a lane that ended and gradually merged into a single lane. While using NavOnAP this weekend, the car just wanted to make its own lane every time instead of merging... usually trying to run into a barrier or median, requiring intervention every time.

The car regularly suggested lane changes directly into objects it clearly detected. It would even show the proposed path on the visualization as going directly through the other vehicle. In one instance I wondered if it really was going to let me change lanes into a semi truck, or if it would wait until it was clear. Nope, it started to move right towards it after confirmation. No red lane, nothing, while directly alongside a semi. *shakes head*

NavOnAP has no concept of "Keep Right, Pass Left". It never suggests lane changes back to the right in any of the available modes.
Further, it randomly suggests lane changes to the left for no reason whatsoever. No traffic, no interchanges, nothing.

I found the car randomly decelerating at least 10x during the trip with no obvious cause. More common when driving in the right lane vs left. It would also set a seemingly random max speed at times, with no speed limit changes or interchanges.

AP2 still doesn't read speed limit signs, so the noted speed limit doesn't always match the real highway speed limit in areas where it was recently upped or lowered (happens a lot around here with places bumping to 70).

At least once the car detected a construction zone with a popup about it (kudos on that) and then immediately proceeded to try and suggest a lane change into construction cones..... which negates this from making the "improvements" list above.

Overtake suggestions are useless. On two lanes, driving in the right lane, I would approach a vehicle ahead that was traveling more slowly. No other traffic. The car would decelerate... 5.... 10.... 15 MPH.... as it sees the vehicle. Then, after matching its speed at my set following distance, a few seconds later it'd popup "Confirm lane change" to overtake. Seriously, wtf. And not just once in a while. Every single time I waited for the suggested change, it behaved this way. In every mode setting, including "Mad Max".

The car detects the other vehicle way in advance, even when just using the in-car visualization for reference, and could easily make the suggested lane change early enough so that no deceleration at all would be needed, even with the delay of requiring confirmation.

On multiple occasions the car would start doing a lane change (either a confirmed one, a manually initiated one, or an automatic one for an exit), get part way through, and quickly veer back into the starting lane for no reason. About half of those times it would pop up with "Lane change cancelled". In one instance I actually missed an exit because it was 2/3 of the way into the exit ramp lane, stayed there a moment, then just jumped back to the left for no reason.... ugh.

Even features that were usable before, like manually initiated auto lane changes, are no longer reliable.

Overall, using "Navigate on Autopilot" did not improve the experience of using Autopilot at all, with the limited exception of autosteer's new ability to mostly keep in lane on a tight interchange... with that being negated by the fact that it tries to kill you any time a lane ends. Also, it seems that the ability to take tight interchanges is mostly thanks to nav fusion, as the vision model does not appear to be properly detecting lanes in some of these situations, yet the system presses onward.

The suggested lane changes were completely useless on every mode. It would either suggest changes that weren't necessary, weren't safe, or weren't useful. It was even suggesting lane changes for an interchange upwards of 8 miles away at one point, then refusing to suggest overtake lane changes until after that interchange.

Some more notes:

Vehicle detection to the sides and behind your vehicle is complete garbage.

This is super obvious when sitting still with other still vehicles all around. You'll see them "swimming" around the visualization, colliding with each other, with you, etc.

Also obvious when overtaking large vehicles. Almost every single semi truck, bus, or RV I passed ended up with a twin ghost visual on the screen.

Finally, vehicles to the side are regularly shown overlapping my own vehicle visual, despite them being firmly in their own lane.

Vehicles behind your vehicle are actually detected only part of the time, apparently due to some issue with the rear cam setup in the hardware (@verygreen I believe has documented this).

It seems very obvious that Tesla has no real data fusion whatsoever between the cameras. This results in both huge gaps in the usable data as well as duplicate data (like the ghost trucks). This is computer vision 101 stuff; I don't understand why Tesla hasn't overcome it, especially in something shipped to thousands of customers.

Radar/vision fusion on AP2 appears to be significantly worse than AP1, with AP1 easily accurate for a few cm... AP2 easily worse than +/- 1m... very obvious when looking at the lead vehicle visualization.

Some of the failings of NavOnAP don't even make sense. If it clearly "sees" a vehicle, it seems like a basic sanity check in the higher level code would prevent it from suggesting a lane change into it.... but this isn't what happens.

Could probably go on for quite a while, but suffice it to say I won't be using the feature any further... not at least until it's actually useful.

It doesn't improve the experience of using autopilot for me one bit. In fact, it makes it even more frustrating. This is ignoring the super frequent nags that plague the more recent firmwares, too.

I'll be sticking to my AP1 vehicles for longer trips from now on I think. In fact, I'm probably going to try and make time to make some videos/posts about AP1/AP2 modifications that are actually useful.

For example, my modded AP1 vehicle would handle the situation I noted above (overtaking a vehicle) smoothly with zero deceleration. AP1 (and AP2) can detect a vehicle ahead of you over 100m away... no excuse for the behavior of NavOnAP.

I'm just super disappointed in Tesla. Their spat with Mobileye has cost Tesla customers a huge amount of progress on the autopilot front. AP1 owners are completely screwed because they will get zero improvements. (Despite promises of ongoing improvements, AP1 hasn't had a single improvement in about two years). Meanwhile, AP1 is running on Mobileye hardware that was released nearly 5 years ago and still handles many situations better than AP2. . . .
To you, does this represent an adequate level of development testing before release to the public, when failure can mean injury or death? If so, I guess you'd be fine flying on the 737 Max, pre-fix. It sure as hell doesn't to me. You can find similar posts on TMC this week still pointing to various A/P/NoA safety shortcomings, all this over 3.5 years after A/P was first put into the public's hands. And none of this gets around the issue of driver disengagement, which is inevitable (see below). Here's what Waymo wrote at the time A/P was introduced, 3 years or so after stopping their own driving assistance program (which was already working better and had more safeguards than A/P when the latter was introduced):
Why we’re aiming for fully self-driving vehicles
https://medium.com/waymo/why-were-aiming-for-fully-self-driving-vehicles-c8d4d6e227e1

Note that the volunteers said they were less stressed and more rested, just as you have said you are. But they also disengaged from driving even though they were told they must not for safety reasons, and signed a form promising to pay attention.

Here's the NHTSA study referred to in the Waymo article (the one where it says that the mean time for drivers to regain control of the L2 vehicle was 17 seconds, i.e. Brown, Huang and Banner would be, like Francisco Franco, still dead):
Human Factors Evaluation of Level 2 and Level 3 Automated Driving Concepts
https://www.nhtsa.gov/sites/nhtsa.d...umanfactorseval-l2l3-automdrivingconcepts.pdf

From the summary:
. . . Overall, participants greatly trusted the capabilities of the automated systems. Although this trust is essential for widespread adoption, participants were also observed prioritizing non-driving activities over the operation of the vehicle and disregarding TORs when they were presented. . . .
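For scale, here's what that 17-second mean takeover time works out to in distance at the speeds involved in these crashes (a simple kinematics sketch; the speeds are taken from the posts above and the 17 s mean from the NHTSA study):

```python
# Distance covered while a disengaged driver regains control (17 s mean, per NHTSA).
MPH_TO_FPS = 5280 / 3600
takeover_s = 17

for who, speed_mph in {"Brown": 74, "Banner": 68}.items():
    dist_ft = speed_mph * MPH_TO_FPS * takeover_s
    print(f"{who} at {speed_mph} mph: ~{dist_ft:.0f} ft (~{dist_ft / 5280:.2f} mi) "
          f"traveled before control is regained")
```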
 
GRA said:
Oils4AsphaultOnly said:
In both the Brown and Banner cases, if either the driver had paid attention or the truck had a side skirt, neither would be dead.
A side skirt isn't going to stop a 4,800 lb. car traveling at highway speed, or much else for that matter. They're aero mods, not underrun protection. Side underrun guards are designed to stop pedestrians, bikes and motorcyclists, not cars. And of course, even if there were underrun guards that could have stopped either car, the drivers would almost certainly be dead anyway; NHTSA's frontal crash test is at 35 mph, and the IIHS uses 40. Brown was doing 74 mph, and Banner 68. Drivers not paying attention is exactly the problem (see below).

The side skirt won't stop an uncontrolled car, but it would've triggered AEB, which would've reduced the speed of the crash and increased his chance of survival.

GRA said:
Oils4AsphaultOnly said:
The Timothy Lee report was for A/P, not NoA (Navigate on Autopilot) - which would've made the decision of which "lane" to stay in. It's NoA that would've saved Walter Huang's life.
Assuming it worked correctly, and there are plenty of reports of that not being the case. Let's take a look at how well it worked when introduced 3 years after A/P was first made available to the public, as reviewed by a long-time TMC poster and multiple-Tesla owner who's far more aware of the capabilities and limitations of A/P etc. than the typical buyer - this is from last November: https://teslamotorsclub.com/tmc/threads/navigate-on-autopilot-is-useless-2018-42-3.134137/

<Snip full TMC post detailing NoA safety deficiencies, quoted in its entirety above>
To you, does this represent an adequate level of development testing before release to the public, when failure can mean injury or death? If so, I guess you'd be fine flying on the 737 Max, pre-fix. It sure as hell doesn't to me. You can find similar posts on TMC this week still pointing to various A/P/NoA safety shortcomings, all this over 3.5 years after A/P was first put into the public's hands. And none of this gets around the issue of driver disengagement, which is inevitable (see below). Here's what Waymo wrote at the time A/P was introduced, 3 years or so after stopping their own driving assistance program (which was already working better and had more safeguards than A/P when the latter was introduced):
Why we’re aiming for fully self-driving vehicles
https://medium.com/waymo/why-were-aiming-for-fully-self-driving-vehicles-c8d4d6e227e1

Note that the volunteers said they were less stressed and more rested, just as you have said you are. But they also disengaged from driving even though they were told they must not for safety reasons, and signed a form promising to pay attention.

Here's the NHTSA study referred to in the Waymo article (the one where it says that the mean time for drivers to regain control of the L2 vehicle was 17 seconds, i.e. Brown, Huang and Banner would be, like Francisco Franco, still dead):
Human Factors Evaluation of Level 2 and Level 3 Automated Driving Concepts
https://www.nhtsa.gov/sites/nhtsa.d...umanfactorseval-l2l3-automdrivingconcepts.pdf

From the summary:
. . . Overall, participants greatly trusted the capabilities of the automated systems. Although this trust is essential for widespread adoption, participants were also observed prioritizing non-driving activities over the operation of the vehicle and disregarding TORs when they were presented. . . .

I've been using A/P 2 for a year, and NoA for over a month. I don't have the A/P 1 reference, but I am happy with A/P 2. I know that some of the issues the other driver (perhaps she was part of the early access program?) encountered have surely been fixed by the time it got to me, because NoA waited and continued signaling in my case until the lane was clear.

You keep harping about the risks, but fail to acknowledge that as it is now, A/P has saved the lives of some of those drunk/sleeping drivers. The accident rate of drivers who fall asleep must be pretty close to 100%, with death rates being a portion of that. The unassisted drunk/sleeping drivers are also most likely to involve some innocent 3rd party. The system is saving lives, DESPITE being abused.

I'm not going to discuss the difference between Waymo's method and Tesla's method of gathering training data, because that's another can of worms that we'll probably spend pages on. Just going to say that the results of one system do not translate to the other.

The only thing we agree on is that the data on accident/death rates is important. I've already pointed out that Tesla's data on the number of accidents for A/P-driven versus non-A/P-driven (but still Tesla) miles is especially significant, because it's self-consistent. Perhaps a request for their report to distinguish between highway and non-highway non-A/P accidents per miles driven to make the comparison more direct? Any comparisons with NHTSA statistics should be done strictly with NHTSA data, otherwise they're not comparable.
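One way to see why that highway/non-highway split matters: a toy rate calculation with placeholder numbers (purely illustrative, not Tesla's or NHTSA's actual figures):

```python
# Toy example of mileage-normalized accident rates. All numbers are placeholders.
def rate_per_million_miles(accidents, miles):
    return accidents / (miles / 1e6)

ap_highway     = rate_per_million_miles(accidents=10, miles=100e6)  # 0.10
non_ap_all     = rate_per_million_miles(accidents=40, miles=200e6)  # 0.20
non_ap_highway = rate_per_million_miles(accidents=12, miles=100e6)  # 0.12

print(f"A/P (highway only):     {ap_highway:.2f} per million miles")
print(f"Non-A/P (all driving):  {non_ap_all:.2f} per million miles")
print(f"Non-A/P (highway only): {non_ap_highway:.2f} per million miles")
# Comparing the first two lines overstates the A/P advantage; the like-for-like
# comparison is the first and third.
```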
 
Oils4AsphaultOnly said:
GRA said:
Oils4AsphaultOnly said:
In both the Brown and Banner cases, if either the driver had paid attention or the truck had a side skirt, neither would be dead.
A side skirt isn't going to stop a 4,800 lb. car traveling at highway speed, or much else for that matter. They're aero mods, not underrun protection. Side underrun guards are designed to stop pedestrians, bikes and motorcyclists, not cars. And of course, even if there were underrun guards that could have stopped either car, the drivers would almost certainly be dead anyway; NHTSA's frontal crash test is at 35 mph, and the IIHS uses 40. Brown was doing 74 mph, and Banner 68. Drivers not paying attention is exactly the problem (see below).
The side skirt won't stop an uncontrolled car, but it would've triggered AEB, which would've reduced the speed of the crash and increased his chance of survival.
No, it wouldn't. The problem isn't the lack of a radar-significant target (the flat side of a broadside-on trailer is about as radar-significant as it gets), it's that current AEBs (not just Tesla's) aren't able to correctly characterize a non-moving or zero-doppler target (like a crossing vehicle) as a threat. To date, all four fatal A/P crashes (3 in the U.S., one in China) as well as several other A/P accidents (I think the total's currently five firetrucks, plus the road sweeper in China) involved AEB's failure to recognize stationary targets (such as the gore barrier in the Huang case, or stopped vehicles) or crossing targets and respond, e.g.:

https://www.youtube.com/watch?v=fc0yYJ8-Dyo

From the NTSB report on the Brown crash:
Current Level 2 vehicle automation technologies cannot reliably identify and respond to crossing vehicle traffic. NHTSA’s ODI report on the Tesla Models S and X, which was prompted by the Williston crash, states: “None of the companies contacted by ODI indicated that AEB systems used in their products through MY 2016 production were designed to brake for crossing path collisions” (NHTSA 2017, p. 3). As part of its defect investigation, NHTSA conducted a series of test-track-based AEB evaluations on the Tesla Model S, as well as a peer vehicle system. The testing confirmed that the Tesla AEB system avoided crashes for the majority of rear-end scenarios, and its TACC generally provided enough braking to avoid rear-end crash scenarios; but neither test vehicle effectively responded to “target vehicles” in straight crossing path or left turn across path scenarios.
The Banner crash shows that Tesla's AEB still can't do so.
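A simple way to see the zero-doppler problem: radar measures only the radial (line-of-sight) component of a target's motion, so a broadside crossing vehicle contributes essentially no doppler of its own and looks like any other stationary object. A rough illustration of the geometry (not any manufacturer's actual signal processing; the ego car's own motion is ignored here since it shifts a crossing truck and a stationary sign equally):

```python
import math

def radial_speed_mph(target_speed_mph, crossing_angle_deg):
    """Component of the target's own speed along the radar's line of sight."""
    return target_speed_mph * math.cos(math.radians(crossing_angle_deg))

for angle in (0, 45, 80, 90):
    v_r = radial_speed_mph(40, angle)
    print(f"Truck at 40 mph crossing at {angle:3d} deg -> "
          f"~{v_r:.1f} mph of doppler return from its own motion")
# At 90 deg (broadside crossing) the truck's own doppler contribution is ~0 mph,
# the same signature as an overhead sign or bridge, which is exactly the class
# of return these systems de-prioritize to avoid constant false braking.
```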

<Snip detailed account of numerous NoA safety deficiencies>

Oils4AsphaultOnly said:
GRA said:
To you, does this represent an adequate level of development testing before release to the public, when failure can mean injury or death? If so, I guess you'd be fine flying on the 737 Max, pre-fix. It sure as hell doesn't to me. You can find similar posts on TMC this week still pointing to various A/P/NoA safety shortcomings, all this over 3.5 years after A/P was first put into the public's hands. And none of this gets around the issue of driver disengagement, which is inevitable (see below). Here's what Waymo wrote at the time A/P was introduced, 3 years or so after stopping their own driving assistance program (which was already working better and had more safeguards than A/P when the latter was introduced):
Why we’re aiming for fully self-driving vehicles
https://medium.com/waymo/why-were-aiming-for-fully-self-driving-vehicles-c8d4d6e227e1

Note that the volunteers said they were less stressed and more rested, just as you have said you are. But they also disengaged from driving even though they were told they must not for safety reasons, and signed a form promising to pay attention.

Here's the NHTSA study referred to in the Waymo article (the one where it says that the mean time for drivers to regain control of the L2 vehicle was 17 seconds, i.e. Brown, Huang and Banner would be, like Francisco Franco, still dead):
Human Factors Evaluation of Level 2 and Level 3 Automated Driving Concepts
https://www.nhtsa.gov/sites/nhtsa.d...umanfactorseval-l2l3-automdrivingconcepts.pdf

From the summary:
. . . Overall, participants greatly trusted the capabilities of the automated systems. Although this trust is essential for widespread adoption, participants were also observed prioritizing non-driving activities over the operation of the vehicle and disregarding TORs when they were presented. . . .
I've been using A/P 2 for a year, and NoA for over a month. I don't have the A/P 1 reference, but I am happy with A/P 2. I know that some of the issues the other driver (perhaps she was part of the early access program?) encountered have surely been fixed by the time it got to me, because NoA waited and continued signaling in my case until the lane was clear.
Nope, regular albeit very well informed and technically competent customer. You didn't answer my question, does this represent an acceptable level of pre-public release development testing to you?

Oils4AsphaultOnly said:
You keep harping about the risks, but fail to acknowledge that as it is now, A/P has saved the lives of some of those drunk/sleeping drivers. The accident rate of drivers who fall asleep must be pretty close to 100%, with death rates being a portion of that. The unassisted drunk/sleeping drivers are also most likely to involve some innocent 3rd party. The system is saving lives, DESPITE being abused.
To repeat, if your claim is correct Tesla will have no problem turning the data and their methodology over for an independent review. Again, it's up to them to prove their claims, and if the data is so conclusive there's every advantage for them in doing it. Instead, they've resisted all calls to do so. BTW, for all we know the reason some of those drunk/sleeping drivers decided to drive anyway was because they thought A/P would cover them. Without interviewing them, we just don't know if that was a factor in their decision or not.

Oils4AsphaultOnly said:
I'm not going to discuss the difference between Waymo's method and Tesla's method of gathering training data, because that's another can of worms that we'll probably spend pages on. Just going to say that the results of one system do not translate to the other.
Human behavior does directly translate, and exactly the same human behavior was recorded in Google's tests as has been recorded in numerous internet videos of Tesla owners, even ignoring the more extreme stupid human tricks (having sex, riding in the back seat with no one in front, sleeping). Or are you saying that Tesla only sells cars to superior humans, all evidence to the contrary? In both this and the "Autonomous Vehicles, LEAF and Others" topics, I've posted numerous links over the years to scientifically-conducted studies, dating from 30+ years back to currently, on human behavior when dealing with automated control systems. While they may vary somewhat in their methodologies and scopes, every single study, bar none, has shown that

1. Most human operators will trust autonomous systems well before they have achieved sufficient capability and reliability to be safer than humans,

2. As a result of the above, they will allow themselves to be distracted and will mentally and physically disengage, and

3. To resume control after such disengagement and take the correct action often requires a prolonged period (of Observation, Orientation, Decision and Action, the OODA loop) of many seconds, which is far too long in an emergency situation.

I know you consider yourself one of the people who never let themselves get distracted while using A/P or a similar self-driving system, and maybe you actually are such a person (there are a few), although self-assessments tend to rate one's capabilities far higher than is justified - in numerous surveys 70+% of drivers rated their driving skills as "Above Average or better"; see "Illusory superiority". Aside from the statistical impossibility, at least one such group was surveyed because all of them had just been found at fault in an accident. So, even if YOU are superior, most of the people using these systems aren't, and will act as indicated in points 1-3 above. Feel free to provide a link to a peer-reviewed study that found otherwise.

Oils4AsphaultOnly said:
The only thing we agree on is that the data on accident/death rates is important. I've already pointed out that Tesla's data on the number of accidents for A/P-driven versus non-A/P-driven (but still Tesla) miles is especially significant, because it's self-consistent. Perhaps a request for their report to distinguish between highway and non-highway non-A/P accidents per miles driven to make the comparison more direct? Any comparisons with NHTSA statistics should be done strictly with NHTSA data, otherwise they're not comparable.
Again, Tesla needs to prove their claims by providing the data and methodology to others, because they aren't a disinterested party. NHTSA's FARS database tracks all fatal accidents: https://www.nhtsa.gov/research-data/fatality-analysis-reporting-system-fars

Tesla's got as much reason to claim that A/P is safer as VW did to claim that their diesels met emission tests, so Tesla needs to put up or shut up. If Tesla continues to fight the lawsuit by Huang's family instead of settling it out of court, the info may get released despite them, because in pre-trial discovery the plaintiffs will likely (and should) insist on getting that info, given Tesla's public claims.
 
GRA said:
Nope, regular albeit very well informed and technically competent customer. You didn't answer my question, does this represent an acceptable level of pre-public release development testing to you?

Yes to your question, as written.

I'll agree that it's not refined enough for the general public yet (which is what I think you intended to ask), but I'll address the "but" about this answer below.

GRA said:
Oils4AsphaultOnly said:
You keep harping about the risks, but fail to acknowledge that as it is now, A/P has saved the lives of some of those drunk/sleeping drivers. The accident rate of drivers who fall asleep must be pretty close to 100%, with death rates being a portion of that. The unassisted drunk/sleeping drivers are also most likely to involve some innocent 3rd party. The system is saving lives, DESPITE being abused.
To repeat, if your claim is correct Tesla will have no problem turning the data and their methodology over for an independent review. Again, it's up to them to prove their claims, and if the data is so conclusive there's every advantage for them in doing it. Instead, they've resisted all calls to do so. BTW, for all we know the reason some of those drunk/sleeping drivers decided to drive anyway was because they thought A/P would cover them. Without interviewing them, we just don't know if that was a factor in their decision or not.

There are at least 2 other reasons why Tesla could have good data and still not be able to share it. Access to the raw data might involve a significant breach of network security protocols. And/or extraction/reproduction of that data is too expensive (including staff resources) to justify an effort that might not satisfy all the critics.

GRA said:
Oils4AsphaultOnly said:
I'm not going to discuss the difference between Waymo's method and Tesla's method of gathering training data, because that's another can of worms that we'll probably spend pages on. Just going to say that the results of one system do not translate to the other.
Human behavior does directly translate, and exactly the same human behavior was recorded in Google's tests as has been recorded in numerous internet videos of Tesla owners, even ignoring the more extreme stupid human tricks (having sex, riding in the back seat with no one in front, sleeping). Or are you saying that Tesla only sells cars to superior humans, all evidence to the contrary? In both this and the "Autonomous Vehicles, LEAF and Others" topics, I've posted numerous links over the years to scientifically-conducted studies, dating from 30+ years back to currently, on human behavior when dealing with automated control systems. While they may vary somewhat in their methodologies and scopes, every single study, bar none, has shown that

1. Most human operators will trust autonomous systems well before they have achieved sufficient capability and reliability to be safer than humans,

2. As a result of the above, they will allow themselves to be distracted and will mentally and physically disengage, and

3. To resume control after such disengagement and take the correct action often requires a prolonged period (of Observation, Orientation, Decision and Action, the OODA loop) of many seconds, which is far too long in an emergency situation.

I know you consider yourself one of the people who never let themselves get distracted while using A/P or a similar self-driving system, and maybe you actually are such a person (there are a few), although self-assessments tend to rate one's capabilities far higher than is justified - in numerous surveys 70+% of drivers rated their driving skills as "Above Average or better"; see "Illusory superiority". Aside from the statistical impossibility, at least one such group was surveyed because all of them had just been found at fault in an accident. So, even if YOU are superior, most of the people using these systems aren't, and will act as indicated in points 1-3 above. Feel free to provide a link to a peer-reviewed study that found otherwise.

I don't have any counter-studies, because I don't disagree with their initial assessment. However, there's a flaw in citing them that you haven't looked into. It's much like the Stanley Milgram study on obedience. People only remember and focus on the "65% of participants obeyed orders to inflict harm on others" generalization, when the actual study details showed differently. Also, he did other studies that showed the devil was in the details (e.g. having two "teachers" reduced compliance to well under 10%).

In those studies on over-reliance on automation, how many of the participants behaved differently after being shown that their initial behavior was wrong? Basically, did those participants who over-relied on the tech change their behavior a second time around? Knowing human nature, there would definitely be some recidivism, but would more correct themselves?

Joshua Brown opened many people's eyes to be more careful (there were many videos of people abusing A/P without harm, and that led to complacency and ultimately to Joshua Brown). Walter Huang was roughly around the same time, and then nothing for a while. Then drivers got complacent again, and Jeremy Banner happened, which should serve as a repeat reminder against complacency. I'm under no illusion that people won't become complacent again, but the trade-off is that during these years many lives have been saved. Yes, I know you want the data to prove this (and you know I only have a count of how many videos and arrests of people asleep/drunk at the wheel there have been). I just want to spell out what I'm claiming and how your studies don't refute it. Also, the chilling effect that you're so worried about (but which hasn't happened) is why I'm pointing out that A/P is ADAS. People are _correctly_ assigning responsibility to the drivers and not A/P.

GRA said:
Oils4AsphaultOnly said:
The only thing we agree on is that the data on accident/death rates is important. I've already pointed out that Tesla's data on the number of accidents for A/P-driven versus non-A/P-driven (but still Tesla) miles is especially significant, because it's self-consistent. Perhaps a request for their report to distinguish between highway and non-highway non-A/P accidents per miles driven to make the comparison more direct? Any comparisons with NHTSA statistics should be done strictly with NHTSA data, otherwise they're not comparable.
Again, Tesla needs to prove their claims by providing the data and methodology to others, because they aren't a disinterested party. NHTSA's FARS database tracks all fatal accidents: https://www.nhtsa.gov/research-data/fatality-analysis-reporting-system-fars

Tesla's got as much reason to claim that A/P is safer as VW did to claim that their diesels met emission tests, so Tesla needs to put up or shut up. If Tesla continues to fight the lawsuit by Huang's family instead of settling it out of court, the info may get released despite them, because in pre-trial discovery the plaintiffs will likely (and should) insist on getting that info, given Tesla's public claims.

So what does the FARS database show? I don't have a desktop available to analyze the compressed 14MB csv file.
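For whoever does have a desktop handy, a minimal sketch of pulling the headline numbers out of the FARS download (the file and column names follow the FARS "accident" file layout as I understand it - check them against the data dictionary for the year you grab):

```python
import pandas as pd

# Tally crashes and deaths from the FARS ACCIDENT file (one row per fatal crash).
df = pd.read_csv("ACCIDENT.CSV", encoding="latin-1", low_memory=False)

print(f"Fatal crashes: {len(df):,}")
print(f"Deaths:        {df['FATALS'].sum():,}")

# Caveat: FARS records vehicle make/model, not whether a driver-assistance
# system was engaged, so it gives overall baselines but can't by itself
# answer the A/P vs. non-A/P question.
```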
 
I'm a bit pushed for time this week, so I'll have to put off my reply for a while. I'll get to it when I can. In the meantime, you can ponder this while hopefully reconsidering your answer to my question re what constitutes adequate pre-public deployment development testing:
Tesla's Navigate on Autopilot Requires Significant Driver Intervention
CR finds that latest version of Tesla's automatic lane-changing feature is far less competent than a human driver
https://www.consumerreports.org/aut...nge-requires-significant-driver-intervention/

. . . In practice, we found that Navigate on Autopilot lagged far behind a human driver’s skill set: The feature cut off cars without leaving enough space and even passed other cars in ways that violate state laws, according to several law enforcement representatives CR interviewed for this report. As a result, the driver often had to prevent the system from making poor decisions.

“The system’s role should be to help the driver, but the way this technology is deployed, it’s the other way around,” says Jake Fisher, Consumer Reports’ senior director of auto testing. “It’s incredibly nearsighted. It doesn’t appear to react to brake lights or turn signals, it can’t anticipate what other drivers will do, and as a result, you constantly have to be one step ahead of it. . . .”

In the first version of Navigate on Autopilot, Tesla said the software could guide a car through highway interchanges and exits and that it could make a lane change if the driver confirmed it by using the turn signal or accepting an onscreen prompt. When CR first tested this version of the Navigate feature in November, we found that it lagged behind a human driver’s abilities in more complex driving scenarios despite Tesla’s claim that it would make driving “more relaxing, enjoyable and fun. . . ."

In early May, our Model 3 received a software update that allowed Navigate on Autopilot to make automatic lane changes without requiring driver confirmation. We enabled the feature and drove on several highways across Connecticut. In the process, multiple testers reported that the Tesla often changed lanes in ways that a safe human driver would not—cutting too closely in front of other cars, and passing on the right.

One area of particular concern is Tesla’s claims that the vehicle’s three rearward-facing cameras can detect fast-approaching objects from the rear better than the average driver can. Our testers found the opposite to be true in practice.

“The system has trouble responding to vehicles that approach quickly from behind,” Fisher says. “Because of this, the system will often cut off a vehicle that is going a much faster speed since it doesn’t seem to sense the oncoming car until it’s relatively close.”

Fisher says merging into traffic is another problem. “It is reluctant to merge in heavy traffic, but when it does, it often immediately applies the brakes to create space behind the follow car—this can be a rude surprise to the vehicle you cut off.”

Our testers often canceled a pass that had been initiated by Autopilot—usually by applying steering force to move the car back into the travel lane—when they felt that the maneuver would be unsafe.

Ultimately, even in light traffic, our testers found that the system’s lack of situational awareness made driving less pleasant.

“In essence, the system does the easy stuff, but the human needs to intervene when things get more complicated,” Fisher says. . . .
 
Somebody just cancelled on me, so I'll get started on my reply.
Oils4AsphaultOnly said:
GRA said:
Nope, regular albeit very well informed and technically competent customer. You didn't answer my question, does this represent an acceptable level of pre-public release development testing to you?

Yes to your question, as written.

I'll agree that it's not refined enough for the general public yet (which is what I think you intended to ask), but I'll address the "but" about this answer below.
Of course I mean "the general public" when I refer to "pre-public release"; I'm referring to regular customers, not company volunteers or even early-release testers.

Oils4AsphaultOnly said:
GRA said:
Oils4AsphaultOnly said:
You keep harping about the risks, but fail to acknowledge that as it is now, A/P has saved the lives of some of those drunk/sleeping drivers. The accident rate of drivers who fall asleep must be pretty close to 100%, with death rates being a portion of that. The unassisted drunk/sleeping drivers are also most likely to involve some innocent 3rd party. The system is saving lives, DESPITE being abused.
To repeat, if your claim is correct Tesla will have no problem turning the data and their methodology over for an independent review. Again, it's up to them to prove their claims, and if the data is so conclusive there's every advantage for them in doing it. Instead, they've resisted all calls to do so. BTW, for all we know the reason some of those drunk/sleeping drivers decided to drive anyway was because they thought A/P would cover them. Without interviewing them, we just don't know if that was a factor in their decision or not.
There are at least 2 other reasons why Tesla could have good data and still not be able to share it. Access to the raw data might involve a significant breach of network security protocols. And/or extraction/reproduction of that data is too expensive (including staff resources) to justify an effort that might not satisfy all the critics.
If other companies are able to do it, then Tesla can (and should) be held to the same standard.

Oils4AsphaultOnly said:
GRA said:
Oils4AsphaultOnly said:
I'm not going to discuss the difference between Waymo's method and Tesla's method of gathering training data, because that's another can of worms that we'll probably spend pages on. Just going to say that the results of one system do not translate to the other.
Human behavior does directly translate, and exactly the same human behavior was recorded in Google's tests as has been recorded in numerous internet videos of Tesla owners, even ignoring the more extreme stupid human tricks (having sex, riding in the back seat with no one in front, sleeping). Or are you saying that Tesla only sells cars to superior humans, all evidence to the contrary? In both this and the "Autonomous Vehicles, LEAF and Others" topics, I've posted numerous links over the years to scientifically-conducted studies, dating from 30+ years back to currently, on human behavior when dealing with automated control systems. While they may vary somewhat in their methodologies and scopes, every single study, bar none, has shown that

1. Most human operators will trust autonomous systems well before they have achieved sufficient capability and reliability to be safer than humans,

2. As a result of the above, they will allow themselves to be distracted and will mentally and physically disengage, and

3. To resume control after such disengagement and take the correct action often requires a prolonged period (of Observation, Orientation, Decision and Action, the OODA loop) of many seconds, which is far too long in an emergency situation.

I know you consider yourself one of the people who never let themselves get distracted while using A/P or a similar self-driving system, and maybe you actually are such a person (there are a few), although self-assessments tend to rate one's capabilities far higher than is justified - in numerous surveys, 70+% of drivers rated their driving skills as "Above Average or better" (see "Illusory superiority"). Aside from the statistical impossibility, at least one such group was surveyed because all of them had just been found at-fault in an accident. So, even if YOU are superior, most of the people using these systems aren't, and will act as indicated in points 1-3 above. Feel free to provide a link to a peer-reviewed study that found otherwise.
I don't have any counter-studies, because I don't disagree with their initial assessment. However, there's a flaw in citing them that you haven't looked into. It's much like the Stanley Milgram study on obedience. People only remember and focus on the "65% of participants obeyed orders to inflict harm to others" generalization, when the actual study details showed differently. Also, he did other studies that showed the devil was in the details (e.g., having two "teachers" reduced compliance to well under 10%).

In those studies on over-dependence on automation, how many of the participants behaved differently after being shown that their initial behavior was wrong? Basically, did those participants who over-relied on the tech change their behavior the second time around? Knowing human nature, there would definitely be some recidivism, but would more correct themselves?
No idea, but for the sake of argument let's suppose that such 'correction' works. How do you propose to implement such a proposal among the general public? Are we going to install driver cameras in all cars, accessible to the company or government at any time? Who's going to pay to monitor those cameras for every single semi-autonomous car on the road any time the systems are turned on, to see if the driver is complying? How will it be paid for? And, given that the projections are that L4 or better autonomy will be achieved for initial deployment within the next five years or so, with mass deployment within ten, is it worth developing this hugely invasive and very expensive system for such a short period of time? We'd first need to confirm that these systems actually ARE safer than humans - see the need for independent data/methodology review.
Oils4AsphaultOnly said:
Joshua Brown opened many people's eyes to the need to be more careful (there were many videos of people abusing A/P without harm, and that led to complacency and ultimately to Joshua Brown). Walter Huang was roughly around the same time, and then nothing for a while.
Then drivers got complacent again, and Jeremy Banner happened, which should serve as a repeat reminder against complacency.
Roughly around the same time? Brown, May 7th, 2016. Huang, March 23, 2018, 22+ months later. Banner, March 1, 2019, 11+ months after that. Note, these are only the fatal accidents, and don't include the Tesla crashing into a stopped firetruck during that almost 3-year period.

Oils4AsphaultOnly said:
I have no misconceptions that people won't become complacent again, but the trade-off is that during these years, there have been many lives saved. Yes, I know you want the data to prove this (and you know I only have a count of how many videos and arrests of people asleep/drunk at the wheel). I just want to spell out what I'm claiming and how your studies don't refute it. Also, the chilling effect that you're so worried about (but hasn't happened) is why I'm pointing out that A/P is ADAS. People are _correctly_ assigning responsibility to the drivers and not A/P.
If the ADAS is guaranteed to be abused by normal humans, and it is, and the manufacturer doesn't take steps to ensure that it can't be, then responsibility is shared. Seeing as how Teslas know which road they're on and the speed limit, a single line of code could have prevented both Brown's and Banner's deaths, or at least eliminated A/P's role in those deaths, by preventing A/P from being used in a situation where Tesla knows that AEB doesn't work, namely dealing with cross traffic. Something like this is all it would take:

If ROADTYPE = "Freeway" then A/P = "OK" ELSE A/P = "Not OK"

This is what Cadillac does with Super Cruise, so Tesla should take one of the programmers they have spending far more time writing cutesy Easter Eggs and put them to work making a simple change that will save lives.
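
To make the idea concrete, here is a minimal sketch, in Python, of the kind of road-type gate being argued for; the names and road categories are assumptions for illustration only, not Tesla's or Cadillac's actual logic:

# Hypothetical road-type gate for a driver-assist feature (illustration only).
from enum import Enum, auto

class RoadType(Enum):
    FREEWAY = auto()          # divided, limited-access, no cross traffic
    DIVIDED_HIGHWAY = auto()  # divided, but with at-grade intersections
    SURFACE_STREET = auto()

def allow_autopilot(road_type: RoadType) -> bool:
    """Permit engagement only on limited-access freeways, where the
    cross-traffic scenario described above cannot occur."""
    return road_type is RoadType.FREEWAY

print(allow_autopilot(RoadType.DIVIDED_HIGHWAY))  # False - refuse engagement
print(allow_autopilot(RoadType.FREEWAY))          # True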

Oils4AsphaultOnly said:
GRA said:
Oils4AsphaultOnly said:
The only thing we agree on is that the data on accident/death rates is important. I've already pointed out that Tesla's data on the number of accidents for A/P-driven versus non-A/P-driven (but still in Tesla cars) miles is especially significant, because it's self-consistent. Perhaps a request for their report to distinguish between highway and non-highway non-A/P accidents per miles driven to make the comparison more direct? Any comparisons with NHTSA statistics should be done strictly with NHTSA data; otherwise, they're not comparable.
Again, Tesla needs to prove their claims by providing the data and methodology to others, because they aren't a disinterested party. NHTSA's FARS database tracks all fatal accidents: https://www.nhtsa.gov/research-data/fatality-analysis-reporting-system-fars

Tesla's got as much reason to claim that A/P is safer as VW did to claim that their diesels met emission tests, so Tesla needs to put up or shut up. If Tesla continues to fight the lawsuit by Huang's family instead of settling it out of court, the info may get released despite them, because in pre-trial discovery the plaintiffs will likely (and should) insist on getting that info, given Tesla's public claims.

So what does the FARS database show? I don't have a desktop available to analyze the compressed 14MB CSV file.
Demographics, # of occupants, driver/pax/motorcyclist, non-motorist, conditions, speeds, type of accident, etc.
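
For anyone who does want to poke at the extract, a rough sketch with pandas of how to get a first look; the file name "accident.csv" and the column names are assumptions based on typical FARS releases, so check them against that year's documentation:

import pandas as pd

# Assumed file name from a FARS download; adjust to the actual extract.
df = pd.read_csv("accident.csv", low_memory=False)

print(df.shape)             # rows = fatal crashes, columns = coded fields
print(df.columns.tolist())  # confirm which fields this year's file has

# Summarize a few commonly present fields, if they exist in this release.
for col in ("STATE", "MONTH", "FATALS"):
    if col in df.columns:
        print(df[col].value_counts().head())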
 
GRA said:
Somebody just cancelled on me, so I'll get started on my reply.
Oils4AsphaultOnly said:
GRA said:
Nope, regular albeit very well informed and technically competent customer. You didn't answer my question, does this represent an acceptable level of pre-public release development testing to you?

Yes to your question, as written.

I'll agree that it's not refined enough for the general public yet (which is what I think you intended to ask), but I'll address the "but" about this answer below.
Of course I mean "the general public" when I refer to "pre-public release"; I'm referring to regular customers, not company volunteers or even early-release testers.

That's a very abusive use of "pre-public release" then, and I'll leave it at that. From your previous post, Consumer Reports seems to agree with you. I think you're both looking at NoA and A/P in the wrong role, but whatever, the method, despite the problems, will prove itself over time.

GRA said:
Oils4AsphaultOnly said:
GRA said:
To repeat, if your claim is correct Tesla will have no problem turning the data and their methodology over for an independent review. Again, it's up to them to prove their claims, and if the data is so conclusive there's every advantage for them in doing it. Instead, they've resisted all calls to do so. BTW, for all we know the reason some of those drunk/sleeping drivers decided to drive anyway was because they thought A/P would cover them. Without interviewing them, we just don't know if that was a factor in their decision or not.
There are at least 2 other reasons why Tesla could have good data and still not be able to share it. Access to the raw data might involve a significant breach of network security protocols. And/or extraction/reproduction of that data is too expensive (including staff resources) to justify an effort that might not satisfy all the critics.
If other companies are able to do it, then Tesla can (and should) be held to the same standard.

Really? And which other companies have released their driver-assistance accident data? We're talking non-FARS here, right?

GRA said:
Oils4AsphaultOnly said:
GRA said:
Human behavior does directly translate, and exactly the same human behavior was recorded in Google's tests as has been recorded in numerous internet videos of Tesla owners, even ignoring the more extreme stupid human tricks (having sex, riding in the back seat with no one in front, sleeping). Or are you saying that Tesla only sells cars to superior humans, all evidence to the contrary? In both this and the "Autonomous Vehicles, LEAF and Others" topics, I've posted numerous links over the years to scientifically conducted studies, dating from 30+ years back to the present, on human behavior when dealing with automated control systems. While they may vary somewhat in their methodologies and scopes, every single study, bar none, has shown that

1. Most human operators will trust autonomous systems well before they have achieved sufficient capability and reliability to be safer than humans,

2. As a result of the above, they will allow themselves to be distracted and will mentally and physically disengage, and

3. To resume control after such disengagement and take the correct action often requires a prolonged period (of Observation, Orientation, Decision and Action, the OODA loop) of many seconds, which is far too long in an emergency situation.

I know you consider yourself one of the people who never let themselves get distracted while using A/P or a similar self-driving system, and maybe you actually are such a person (there are a few), although self-assessments tend to rate one's capabilities far higher than is justified - in numerous surveys, 70+% of drivers rated their driving skills as "Above Average or better" (see "Illusory superiority"). Aside from the statistical impossibility, at least one such group was surveyed because all of them had just been found at-fault in an accident. So, even if YOU are superior, most of the people using these systems aren't, and will act as indicated in points 1-3 above. Feel free to provide a link to a peer-reviewed study that found otherwise.
I don't have any counter-studies, because I don't disagree with their initial assessment. However, there's a flaw in citing them that you haven't looked into. It's much like the Stanley Milgram study on obedience. People only remember and focus on the "65% of participants obeyed orders to inflict harm to others" generalization, when the actual study details showed differently. Also, he did other studies that showed the devil was in the details (e.g., having two "teachers" reduced compliance to well under 10%).

In those studies on over-dependence on automation, how many of the participants behaved differently after being shown that their initial behavior was wrong? Basically, did those participants who over-relied on the tech change their behavior the second time around? Knowing human nature, there would definitely be some recidivism, but would more correct themselves?
No idea, but for the sake of argument let's suppose that such 'correction' works. How do you propose to implement such a proposal among the general public? Are we going to install driver cameras in all cars, accessible to the company or government at any time? Who's going to pay to monitor those cameras for every single semi-autonomous car on the road any time the systems are turned on, to see if the driver is complying? How will it be paid for? And, given that the projections are that L4 or better autonomy will be achieved for initial deployment within the next five years or so, with mass deployment within ten, is it worth developing this hugely invasive and very expensive system for such a short period of time? We'd first need to confirm that these systems actually ARE safer than humans - see the need for independent data/methodology review.
Oils4AsphaultOnly said:
Joshua Brown opened many people's eyes to the need to be more careful (there were many videos of people abusing A/P without harm, and that led to complacency and ultimately to Joshua Brown). Walter Huang was roughly around the same time, and then nothing for a while.
Then drivers got complacent again, and Jeremy Banner happened, which should serve as a repeat reminder against complacency.
Roughly around the same time? Brown, May 7th, 2016. Huang, March 23, 2018, 22+ months later. Banner, March 1, 2019, 11+ months after that. Note, these are only the fatal accidents, and don't include the Tesla crashing into a stopped firetruck during that almost 3-year period.

I stand corrected about Walter Huang's timeline. So it seems about once a year is the reminder interval. I'm being callous, but that's what it takes to correct human nature.

GRA said:
Oils4AsphaultOnly said:
I have no misconceptions that people won't become complacent again, but the trade-off is that during these years, there have been many lives saved. Yes, I know you want the data to prove this (and you know I only have a count of how many videos and arrests of people asleep/drunk at the wheel). I just want to spell out what I'm claiming and how your studies don't refute it. Also, the chilling effect that you're so worried about (but hasn't happened) is why I'm pointing out that A/P is ADAS. People are _correctly_ assigning responsibility to the drivers and not A/P.
If the ADAS is guaranteed to be abused by normal humans, and it is, and the manufacturer doesn't take steps to ensure that it can't be, then responsibility is shared. Seeing as how Teslas know which road they're on and the speed limit, a single line of code could have prevented both Brown's and Banner's deaths, or at least eliminated A/P's role in those deaths, by preventing A/P from being used in a situation where Tesla knows that AEB doesn't work, namely dealing with cross traffic. Something like this is all it would take:

If ROADTYPE = "Freeway" then A/P = "OK" ELSE A/P = "Not OK"

This is what Cadillac does with Super Cruise, so Tesla should take one of the programmers they have spending far more time writing cutesy Easter Eggs and put them to work making a simple change that will save lives.

You know nothing about programming if you think it's really that simple. And you know even less about neural nets if you expect the programmers to interject their code like that. If Cadillac's engineers really wrote code like that, then they'll never get to any level of self-driving. Considering that they at least made it to level 2+, there's still hope for them.

GRA said:
Oils4AsphaultOnly said:
GRA said:
Again, Tesla needs to prove their claims by providing the data and methodology to others, because they aren't a disinterested party. NHTSA's FARS database tracks all fatal accidents: https://www.nhtsa.gov/research-data/fatality-analysis-reporting-system-fars

Tesla's got as much reason to claim that A/P is safer as VW did to claim that their diesels met emission tests, so Tesla needs to put up or shut up. If Tesla continues to fight the lawsuit by Huang's family instead of settling it out of court, the info may get released despite them, because in pre-trial discovery the plaintiffs will likely (and should) insist on getting that info, given Tesla's public claims.

So what does the FARS database show? I don't have a desktop available to analyze the compressed 14MB CSV file.
Demographics, # of occupants, driver/pax/motorcyclist, non-motorist, conditions, speeds, type of accident, etc.

Not what I mean, but I guess you're not able to analyze the data yourself, huh? Depending on other people to draw the conclusions? I'll dig into it later and see if at least make/model info is buried in there.
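
If make/model is in there, it would most likely be in the vehicle-level file rather than the crash-level one. A hedged sketch of how to check, with the file and column names as assumptions; the values are numeric codes, so the year's FARS coding manual would be needed to translate them into manufacturer names:

import pandas as pd

# Assumed file name; FARS releases typically include a vehicle-level file.
veh = pd.read_csv("vehicle.csv", low_memory=False)

# Look for any column that could hold make/model codes.
candidates = [c for c in veh.columns if "MAKE" in c.upper() or "MODEL" in c.upper()]
print(candidates)

# If a MAKE column exists, tally crashes per (coded) manufacturer.
if "MAKE" in veh.columns:
    print(veh["MAKE"].value_counts().head(20))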
 
GRA said:
I'm a bit pushed for time this week, so I'll have to put off my reply for a while. I'll get to it when I can. In the meantime, you can ponder this while hopefully reconsidering your answer to my question re what constitutes adequate pre-public deployment development testing:
Tesla's Navigate on Autopilot Requires Significant Driver Intervention
CR finds that latest version of Tesla's automatic lane-changing feature is far less competent than a human driver
For those who don't follow along on TMC (I take hiatuses, sometimes for months at a time), CR's findings (https://www.consumerreports.org/autonomous-driving/tesla-navigate-on-autopilot-automatic-lane-change-requires-significant-driver-intervention/) aren't really new. There are tons of threads and posts on TMC about how poorly NoA works in general. It's been going on ever since the feature was rolled out.

And, IIRC, there are definitely some software releases that are worse than others from an AP and NoA POV. Latest isn't always the greatest.
 
From my informal polling at work, most people don't think much about NoA, but they are more concerned (pissed off is a more accurate description) about sudden, unexpected, and unexplained braking. Still an issue in the latest release. Hopefully it's something easy to fix.
 