Tesla's autopilot, on the road

Have to refer back to this A/P scenario:

https://youtu.be/YUnRTNdxMGk

How difficult is/was it for Tesla's A/P system designers to write an OTA patch to avoid that? Surely the A/P knows when it enters an intersection and should be able to differentiate between a spaced double-yellow line and a single spaced white line. You would have thought the A/P would have captured multiple such images over the many years it has been in on-the-road development. Hopefully this system failure doesn't repeat the way the semi-trailer one did and result in an accident or death the next time. Furthermore, how does a QC department allow such a marginal product to be released to production? By the way, does Tesla even have a QC department?

Totally incredible! Would Elon use this appropriate nomenclature, "a total FU"?
 
GRA said:
Oils4AsphaultOnly said:
GRA said:
From an article on GCR dated May 24th: https://www2.greencarreports.com/ne...t-drives-itself-poorly-consumer-reports-finds

The article goes on to quote David Friedman about Tesla's pre-public release testing regimen:
IOW, pretty much what the FAA failed to ensure Boeing did adequately in the case of the 737 Max, the difference being that in an airliner accident people die by the hundreds, while for cars the total per individual accident is much smaller, but the number of accidents is far greater.
Ummm, no. Boeing screwed up on their UI design and pilot training. The software behaved exactly as it was programmed to do. This is a usability design issue. The only thing they have in common with Tesla's A/P is the word "autopilot".
By the same token, Tesla screwed up with the lack of "pilot training" as well as the system design and testing, as most people are completely unaware of A/Ps capabilities and limitations, so the system should be designed to prevent them (to the extent possible) from operating outside its limits. You have far more interest in the subject than most customers, yet you've shown that 3 years after Brown's death you didn't understand that the problem in that accident wasn't the lack of a target, it was that Tesla's AEB system as well as all other AEB systems at that time (and at least Tesla's still, as Brenner's accident confirms) don't recognize a crossing target as a threat. Being aware of this limitation, Cadillac chose to prevent SuperCruise's use on roads where such occurrences were not only possible but common. Tesla, having chalked up one A/P-enabled customer death in that situation, chose to do nothing despite being able to change A/P to easily avoid the problem, and thus enabled a virtually identical customer death almost 3 years later. In your opinion, which company shows a greater concern for customer and public safety through design?

Boeing's failure to track down the problem in their SPS after the first occurrence (and the FAA's lack of urgency in forcing them to do so) is the same sort of casual attitude to putting customers at risk as Tesla showed, but Tesla's case is more egregious because they could make a simple, inexpensive change that would have prevented a re-occurrence. Instead, as well as pointless Easter Eggs they put their effort into developing NoA which was inadequately tested prior to initial customer deployment, unquestionably less safe than a human driver in some common situations, and the 'fix' which was rolled out some months later is just as bad if not worse.

You're conflating multiple incongruent issues again. AEB is crash mitigation, not avoidance. All the examples of why AEB didn't brake were in small-overlap type crashes, where the correct maneuver is a steering correction, not emergency braking. https://www.caranddriver.com/features/a24511826/safety-features-automatic-braking-system-tested-explained/

It has nothing to do with threat detection of a crossing vehicle (requires path prediction).

A side skirt doesn't present any other permitted corrective action other than emergency braking. So yes, it would've triggered AEB. Your reference video (from when you last brought this up and I failed to address) isn't the same situation.

And just because you think Tesla has a simple fix doesn't make it a reality. GM's SuperCruise requires no high level logic other than, "is this road on my allowed map?", since GM geofences supercruise to ONLY mapped highways. Foul weather and construction zones are also excluded. You can inject human code into that situation, since it's a defined algorithm. You can't define your driving logic through a fixed algorithm if you want a car that can achieve full self-driving. That's why GM's supercruise will never advance past level 3 autonomy (can handle most well-defined traffic situations).

The driver versus pilot training analogy isn't even applicable, since sleeping at the wheel isn't a training issue.

GRA said:
Oils4AsphaultOnly said:
GRA said:
For an example of exactly the opposite approach to development testing compared to Tesla, and one which I obviously believe is necessary, see the following article. BTW, in a previous post you stated that there hadn't been any backlash owing to self-driving car accidents. I meant to reply at the time, but got distracted. In fact, as noted below there was a major backlash after the Herzberg death, and those where self-driving vehicles kill non-occupants are the ones that I'm worried will set back the development and deployment of AVs. The general public is far more worried about being put at risk by self-driving cars that they aren't in. Anyone who's riding in one has volunteered to act as a crash-test dummy for the company, so people aren't as concerned about those deaths as they are when an AV kills a non-occupant, potentially themselves: https://www.forbes.com/sites/alanoh...low-ride-to-self-driving-future/#3e6c74e11124
Waymo had been developing self-driving for almost a decade, and their car still gets into accidents and causes road rage with other drivers. At the rate they're going, they'll never have a self-driving solution that can work outside of the test area.
Why yes, they do get into accidents, as is inevitable. But let's compare, shall we? Waymo (then still Google's Chauffeur program IIRR) got into its first chargeable accident on a public road seven years after they'd first started testing them there, and that was a 2 mph fender-bender when a bus driver first started to change lanes and then switched back. No injuries. All of the accidents that have occurred in Arizona have so far been the other party's fault. They haven't had a single fatal at-fault accident, or even one which resulted in serious injuries.

Tesla had its first fatal A/P accident less than 7 months after A/P was introduced to the public. Actually, I think it was less than that, as we didn't know about the one in China at the time (the video I linked to earlier showing the Tesla rear-ending the street sweeper), and it has had 2 more that we know about chargeable to A/P.

Road rage is inevitable as humans interact with AVs that obey all traffic laws, but as that is one of the major reasons AVs will be safer than humans, it's just something that will have to be put up with during the transition as people get used to them. The alternative, as Tesla is doing, is to allow AVs to violate traffic laws, and that's indefensible in court and ultimately in the court of public opinion. As soon as a Tesla or any other AV kills or injures someone while violating a law, whether speeding, passing on the right, or what have you, the company will get hammered both legally and in PR. Hopefully the spillover won't take more responsible companies with it, and only tightened gov't regs will result.

Waymo hasn't killed anyone, because it hasn't driven fast enough to do so. At 35mph, any non-pedestrian accidents would be non-fatal. Granted they've tackled the more difficult task of street driving, but their accident stats aren't directly comparable to Tesla's. I only brought them up to highlight the difference in scale of where their systems can be applied.

GRA said:
Oils4AsphaultOnly said:
One thing that people still seem to misunderstand, and I suspect you do too, is the claim that Tesla's FSD will be "feature-complete" by the end of the year. "Feature-complete" is a software development term indicating that the functional capabilities have been programmed in, but the product is not release-ready yet. Usually at this point in software, under an Agile development cycle, the product is released in alpha, and bugs are noted and fixes are released in the next iteration (usually iterations are released weekly, or even daily). After certain milestones have been reached, it will be considered beta, and after that RC1 (release candidate).

Under this development cycle, you'll see news about FSD being tested on the roads or in people's cars (who have signed up to be part of the early access program). That isn't considered the public availability of FSD! You might hate it, but there's no substitute for real-world testing.
I have no problem whatsoever with real-world testing, indeed, that's exactly what I, CR and every other consumer group calling for better validation testing before release to the general public are demanding, along with independent review etc. Please re-read David Friedman's statement:
"Tesla is showing what not to do on the path toward self-driving cars: release increasingly automated driving systems that aren’t vetted properly. Before selling these systems, automakers should be required to give the public validated evidence of that system’s safety—backed by rigorous simulations, track testing, and the use of safety drivers in real-world conditions."

Funny. I wrote that to mean Tesla's method of iterating improvements and functionality into A/P, then NoA, and eventually FSD. You read it to mean Waymo's method of iterating from one geo-fenced city at a time.

Which just brings us all back to my old point of speed of deployment. Waymo's method would take YEARS (if not decades) to successfully deploy, and during that time, thousands of lives will be lost that could've been saved with a method that reaches FSD faster. At least 3 lives have been saved (all those DUI arrests) due to A/P so far, not counting any unreported ones where the driver made it home without being arrested. Eventually, you'll see things my way, you just don't know it yet. ;-)
 
Oils4AsphaultOnly said:

Your and GRA's discussions about A/P statistics reach the ad nauseam level like over on the Toyota Mirai FCEV thread.
 
lorenfb said:
Your and GRA's discussions about A/P statistics reach the ad nauseam level like over on the Toyota Mirai FCEV thread.


per xkcd: https://xkcd.com/386/
 
Oils4AsphaultOnly said:
GRA said:
International Business Times:
Tesla Autopilot Safety Issues Continue As EV Slams Into Another Car
https://www.ibtimes.com/tesla-autopilot-safety-issues-continue-ev-slams-another-car-2795153

Stopped car on highway in lane, other car swerved into then out of lane, so a known problem, but one we'll see occur increasingly often as the number of Teslas on the road increases. From that article there was also this which I hadn't heard about, but which we can expect to see more and more of if Tesla doesn't dial it back:
. . . In fact, Tesla recently agreed on a $13 million settlement with a former employee who was struck by the Model S while working. . . .

A more complete analysis of the accident in Norway is available in the original Forbes article, in which the Tesla owner credits A/P with saving his life (which may or may not be true, as the article's author points out):
May 26, 2019, 11:28am
Tesla On Autopilot Slams Into Stalled Car On Highway, Expect More Of This
https://www.forbes.com/sites/lancee...-on-highway-expect-more-of-this/#29c07bdc4fe5

As I've told lorenfb, be careful about the FUD you read.

The $13 million lawsuit had nothing to do with A/P nor Tesla, other than it was a car driven by a Tesla contractor on Tesla's property: https://laist.com/2019/05/15/13_million_settlement_tesla_fremont_factory.php
Okay, thanks. I was wondering why I hadn't heard of it until now.

Oils4AsphaultOnly said:
As for the rate of A/P accidents, my claim about complacency recurring seems to be bearing out. It's been a year since the last crash into a stalled vehicle, even though the number of Autopilot-capable Teslas has doubled.
Considerably less, actually. Prior to this one, the most recent I could find was last August 25th. I've been unable to confirm whether A/P was on or not in that one - the driver said he thought it was, but as he was arrested for DUI (see our previous discussion about whether or not people may be choosing to drive drunk because they have A/P), that might just be an excuse. There were at least three such crashes into stopped firetrucks where A/P was claimed to have been in use reported last year in the U.S., that one (in San Jose) plus one each in January (L.A.) and May (SLC, UT). See
WHY TESLA'S AUTOPILOT CAN'T SEE A STOPPED FIRETRUCK
https://www.wired.com/story/tesla-autopilot-why-crash-radar/

Of course, there may be others we haven't heard about, here or in other countries. Anyway, if A/P was in use in all of these cases, January to May is 4 months, May to August is 3 months, August to May is nine months, for an average of 5 1/3rd months between such crashes. Not that we should draw major conclusions about frequency from such a small data set.
 
Oils4AsphaultOnly said:
GRA said:
Oils4AsphaultOnly said:
Ummm, no. Boeing screwed up on their UI design and pilot training. The software behaved exactly as it was programmed to do. This is a usability design issue. The only thing they have in common with Tesla's A/P is the word "autopilot".
By the same token, Tesla screwed up with the lack of "pilot training" as well as the system design and testing, as most people are completely unaware of A/Ps capabilities and limitations, so the system should be designed to prevent them (to the extent possible) from operating outside its limits. You have far more interest in the subject than most customers, yet you've shown that 3 years after Brown's death you didn't understand that the problem in that accident wasn't the lack of a target, it was that Tesla's AEB system as well as all other AEB systems at that time (and at least Tesla's still, as Brenner's accident confirms) don't recognize a crossing target as a threat. Being aware of this limitation, Cadillac chose to prevent SuperCruise's use on roads where such occurrences were not only possible but common. Tesla, having chalked up one A/P-enabled customer death in that situation, chose to do nothing despite being able to change A/P to easily avoid the problem, and thus enabled a virtually identical customer death almost 3 years later. In your opinion, which company shows a greater concern for customer and public safety through design?

Boeing's failure to track down the problem in their SPS after the first occurrence (and the FAA's lack of urgency in forcing them to do so) is the same sort of casual attitude to putting customers at risk as Tesla showed, but Tesla's case is more egregious because they could make a simple, inexpensive change that would have prevented a re-occurrence. Instead, as well as pointless Easter Eggs they put their effort into developing NoA which was inadequately tested prior to initial customer deployment, unquestionably less safe than a human driver in some common situations, and the 'fix' which was rolled out some months later is just as bad if not worse.
You're conflating multiple incongruent issues again. AEB is crash mitigation, not avoidance. All the examples of why AEB didn't brake were in small-overlap type crashes, where the correct maneuver is a steering correction, not emergency braking. https://www.caranddriver.com/features/a24511826/safety-features-automatic-braking-system-tested-explained/

It has nothing to do with threat detection of a crossing vehicle (requires path prediction).
AEB systems can be capable of both crash avoidance and mitigation; avoidance is obviously preferred, mitigation is next best. For instance, CR from last November:
New Study Shows Automatic Braking Significantly Reduces Crashes and Injuries
https://www.consumerreports.org/aut...king-reduces-car-crashes-injuries-iihs-study/

General Motors vehicles with forward collision warning (FCW) and automatic emergency braking (AEB) saw a big drop in police-reported front-to-rear crashes when compared with the same cars without those systems, according to a new report by the Insurance Institute for Highway Safety (IIHS).

Those crashes dropped 43 percent, the IIHS found, and injuries in the same type of crashes fell 64 percent. . . .

These findings were in line with previous findings by the IIHS. In earlier studies involving Acura, Fiat Chrysler, Honda, Mercedes-Benz, Subaru and Volvo vehicles, it found that the combination of FCW and AEB reduced front-to-rear crash rates by 50 percent for all crashes, and 56 percent for the same crashes with injuries.
As to crossing vehicles requiring path prediction, no, that's not necessary, although it's certainly helpful. As I pointed out previously, NHTSA found the issue with current AEBs in that situation is not one of target detection, it's classification. Current AEB radar systems are told to ignore braking for large, flat zero-doppler objects because they can be nothing more than highway signs on overpasses or off to the side on curves (or overpass supports, FTM); a human would recognize what they are and not brake for them, but current AEB systems aren't that smart. The Mobileye EyeQ visual system in use by Tesla and others at the time also made use of a library of objects, and the library didn't contain side views of such objects (apparently because that was beyond the capabilities of the system at the time).
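To make that classification-versus-detection distinction concrete, here's a rough Python sketch of the kind of target-filtering heuristic described above. The thresholds and field names are invented for illustration and aren't taken from any actual AEB supplier's code:

[code]
# Rough sketch (not any vendor's actual code) of the filtering described above:
# the radar *detects* the return, but the tracker discards it as "probably an
# overhead sign" because it is large, flat and has ~zero closing Doppler.
# All thresholds are made-up illustration values.

from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float           # distance to the object
    radial_speed_mps: float  # Doppler-derived closing speed (~0 for a crossing trailer)
    extent_m: float          # apparent width of the reflection
    elevation_deg: float     # angle above the road surface

def is_brake_worthy(r: RadarReturn) -> bool:
    stationary = abs(r.radial_speed_mps) < 0.5   # no measured closing speed
    large_flat = r.extent_m > 2.0                # sign-sized / trailer-sized
    # Heuristic: big, flat, "stationary" returns are assumed to be roadside or
    # overhead infrastructure and are suppressed to avoid false braking events.
    if stationary and large_flat:
        return False   # a crossing semi-trailer falls into this bucket
    return True

# A trailer crossing the lane: detected, but classified as not a threat.
print(is_brake_worthy(RadarReturn(range_m=60, radial_speed_mps=0.2,
                                  extent_m=15.0, elevation_deg=1.5)))  # False
[/code]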

Oils4AsphaultOnly said:
A side skirt doesn't present any other permitted corrective action other than emergency braking. So yes, it would've triggered AEB. Your reference video (from when you last brought this up and I failed to address) isn't the same situation.
As pointed out just above and previously, the reason current AEB systems don't work for either crossing or stopped vehicles is the same, a classification rather than detection issue. Lack of side skirts for detection isn't the problem, teaching the AEB to classify a crossing vehicle as a threat instead of ignoring it as harmless is. Here's the product spec sheet for one such radar (note the vertical FoV, ample to pick up the entire side of a trailer and then some at detection distances): https://www.bosch-mobility-solution...t-data-sheet-mid-range-radar-sensor-(mrr).pdf
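As a back-of-envelope check on the field-of-view point, here's a quick Python calculation of how much vertical coverage a radar beam has at typical detection ranges. The 15° total vertical FoV used below is an assumed round number for illustration, not a figure taken from the linked Bosch data sheet:

[code]
import math

# Back-of-envelope only: the 15 degree total vertical field of view is an
# assumed illustration value, not taken from the linked data sheet.
vertical_fov_deg = 15.0

for range_m in (30, 60, 100):
    # Height of the beam "window" at that range, centered on the sensor axis.
    coverage_m = 2 * range_m * math.tan(math.radians(vertical_fov_deg / 2))
    print(f"at {range_m:>3} m the beam spans ~{coverage_m:.1f} m vertically")

# Even a few degrees of vertical FoV covers the full side of a trailer
# (roughly 1-4 m above the road) well before the braking decision point.
[/code]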

Oils4AsphaultOnly said:
And just because you think Tesla has a simple fix doesn't make it a reality. GM's SuperCruise requires no high level logic other than, "is this road on my allowed map?", since GM geofences supercruise to ONLY mapped highways. Foul weather and construction zones are also excluded. You can inject human code into that situation, since it's a defined algorithm. You can't define your driving logic through a fixed algorithm if you want a car that can achieve full self-driving. That's why GM's supercruise will never advance past level 3 autonomy (can handle most well-defined traffic situations).
Are you suggesting that Teslas don't have the data to know which road they're on despite the lack of high-def digital mapping, when they can not only map out a route while choosing the type of roads to take and then follow that route, but also know the speed limit of the different sections of that route? That's ridiculous. But let's say that you're right, and A/P is incapable of doing that. Since limiting the system's use to those situations it is capable of dealing with, and preventing its use in those it can't handle, is obviously the safest approach, shouldn't any company be required to adopt that approach to minimize the risk to both its customers and the general public? You consider SuperCruise to be limited in where it can be used, and it is. To be specific, it's limited to ensure the safest possible performance, and I have no problem at all with that; indeed, I celebrate them for doing so, and wish Tesla acted likewise.

Oils4AsphaultOnly said:
The driver versus pilot training analogy isn't even applicable, since sleeping at the wheel isn't a training issue.
Who was talking about sleeping at the wheel? Not I. I was talking about the lack of required initial training and testing in the system's capabilities and limitations as well as the lack of re-currency training; lacking those an autonomous system has to be idiot-proofed to a much higher level. We know that pilots, despite being a much more rigorously selected group than car buyers, still make mistakes due to misunderstanding automation system capabilities or through lack of practice, even though they are required to receive instruction and be tested on their knowledge, both initially and recurrently. As none of that is required of car buyers, you have to make it as hard as possible to misuse the system, which certainly includes preventing it from being used in situations outside of its capabilities.

Oils4AsphaultOnly said:
GRA said:
Oils4AsphaultOnly said:
Waymo had been developing self-driving for almost a decade, and their car still gets into accidents and causes road rage with other drivers. At the rate they're going, they'll never have a self-driving solution that can work outside of the test area.
Why yes, they do get into accidents, as is inevitable. But let's compare, shall we? Waymo (then still Google's Chauffeur program IIRR) got into its first chargeable accident on a public road seven years after they'd first started testing them there, and that was a 2 mph fender-bender when a bus driver first started to change lanes and then switched back. No injuries. All of the accidents that have occurred in Arizona have so far been the other party's fault. They haven't had a single fatal at-fault accident, or even one which resulted in serious injuries.

Tesla had its first fatal A/P accident less than 7 months after A/P was introduced to the public. Actually, I think it was less than that, as we didn't know about the one in China at the time (the video I linked to earlier showing the Tesla rear-ending the street sweeper), and it has had 2 more that we know about chargeable to A/P.

Road rage is inevitable as humans interact with AVs that obey all traffic laws, but as that is one of the major reasons AVs will be safer than humans, it's just something that will have to be put up with during the transition as people get used to them. The alternative, as Tesla is doing, is to allow AVs to violate traffic laws, and that's indefensible in court and ultimately in the court of public opinion. As soon as a Tesla or any other AV kills or injures someone while violating a law, whether speeding, passing on the right, or what have you, the company will get hammered both legally and in PR. Hopefully the spillover won't take more responsible companies with it, and only tightened gov't regs will result.
Waymo hasn't killed anyone, because it hasn't driven fast enough to do so. At 35mph, any non-pedestrian accidents would be non-fatal. Granted they've tackled the more difficult task of street driving, but their accident stats aren't directly comparable to Tesla's. I only brought them up to highlight the difference in scale of where their systems can be applied.
Who says Waymo has only tested on public roads at slow speeds? I mentioned previously that while they were testing their ADAS systems (in 2012, before abandoning any such system as not being safer than a human), including on freeways, they observed exactly the same human misbehavior that A/P users have exhibited from the moment of its introduction up to the present. That included one employee fast asleep on the freeway. A correction, in my earlier reference I mis-remembered that the car had been going 65 for 1/2 hour. Checked my source, and I see it was 60 mph for 27 minutes, which is certainly fast enough to be fatal. They've continued testing on freeways since then, but have only deployed AV systems for public use where speeds are more limited (still with safety drivers, although that essentially serves as elephant repellent), precisely because they consider that it's necessary to walk before they run. I am wholly in favor of this approach.

Oils4AsphaultOnly said:
GRA said:
Oils4AsphaultOnly said:
One thing that people still seem to misunderstand, and I suspect you do too, is the claim that Tesla's FSD will be "feature-complete" by the end of the year. "Feature-complete" is a software development term indicating that the functional capabilities have been programmed in, but the product is not release-ready yet. Usually at this point in software, under an Agile development cycle, the product is released in alpha, and bugs are noted and fixes are released in the next iteration (usually iterations are released weekly, or even daily). After certain milestones have been reached, it will be considered beta, and after that RC1 (release candidate).

Under this development cycle, you'll see news about FSD being tested on the roads or in people's cars (who have signed up to be part of the early access program). That isn't considered the public availability of FSD! You might hate it, but there's no substitute for real-world testing.
I have no problem whatsoever with real-world testing, indeed, that's exactly what I, CR and every other consumer group calling for better validation testing before release to the general public are demanding, along with independent review etc. Please re-read David Friedman's statement:
"Tesla is showing what not to do on the path toward self-driving cars: release increasingly automated driving systems that aren’t vetted properly. Before selling these systems, automakers should be required to give the public validated evidence of that system’s safety—backed by rigorous simulations, track testing, and the use of safety drivers in real-world conditions."
Funny. I wrote that to mean Tesla's method of iterating improvements and functionality into A/P, then NoA, and eventually FSD. You read it to mean Waymo's method of iterating from one geo-fenced city at a time.

Which just brings us all back to my old point of speed of deployment. Waymo's method would take YEARS (if not decades) to successfully deploy, and during that time, thousands of lives will be lost that could've been saved with a method that reaches FSD faster. At least 3 lives have been saved (all those DUI arrests) due to A/P so far, not counting any unreported ones where the driver made it home without being arrested. Eventually, you'll see things my way, you just don't know it yet. ;-)
And that brings me back to my and CR's and every other safety organization's point, so I'll repeat it:
[David Friedman, former Acting NHTSA Administrator, now employed by CR] “instead of treating the public like guinea pig, Tesla must clearly demonstrate a driving automation system that is substantially safer than what is available today, based on rigorous evidence that is transparently shared with regulators and consumers, and validated by independent third-parties. In the meantime, the company should focus on making sure that proven crash avoidance technologies on Tesla vehicles, such as automatic emergency braking with pedestrian detection, are as effective as possible.”

Tesla's claims of increased safety remain unverified. As more and more Teslas are out there and they get into more and more accidents, I imagine the costs of fighting all the A/P lawsuits as well as the resulting big payouts will force them to clean up their act, if regulators don't. Until they (and any other company making such claims) do that, it's so much hot air. As it is, their ADAS system's design is inherently less safe than what currently appears to be the best extant, Supercruise, and needs to be improved to bring it up to something approaching that level. Government regulation mandating minimum acceptable equipment/performance standards is needed in this area, much as it is in aviation e.g. RNP (Required Navigation Performance) or RVSM (Reduced Vertical Separation Minimum).

Aside from limiting ADAS usage to limited-access freeways until such time as Tesla (or any company) can show that their system is capable of safely expanding beyond them, they need to shorten the hands-off warning time from 24 seconds down to something around SuperCruise's 4 seconds (somewhere way uptopic, I said I thought anything over 3 seconds was excessive if you're serious about keeping drivers engaged, and I'd still like to see that). For comparison, Google used a 6-second warning time back in 2012 in their ADAS system, and as we know Tesla essentially didn't have one at all until after the Brown crash, and it remains far too long*. Also, since we know that steering-wheel weight/torque sensors can be easily fooled and that people are in fact doing so, adding eye-tracking cameras and the appropriate computer/software, or equipment which can be shown to be of equal or greater effectiveness in keeping drivers engaged, should be required. Personally, if I thought it was safe and legal I'd be in favor of the "pay attention" warning being given by a small shock to the driver, but that's obviously not going to happen. Naturally, all such systems must collect data and have it publicly accessible so that actual performance and safety benefits can be compared, so as to allow regulations to be improved and safety increased.
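For what it's worth, the "warning time" numbers being compared here are essentially just one threshold in a simple escalation timer. A generic Python sketch (not any manufacturer's actual logic; the thresholds are made-up parameters) shows how much a 4-second versus 24-second setting changes what the driver experiences:

[code]
# Sketch of a generic hands-off escalation timer, just to make the "warning
# time" comparison concrete. The thresholds are parameters, not any
# manufacturer's actual values; the 4 s and 24 s figures discussed above
# would simply be different settings of warn_after_s.

def monitor_step(hands_off_s: float, warn_after_s: float = 4.0,
                 escalate_after_s: float = 8.0,
                 disengage_after_s: float = 12.0) -> str:
    """Return the action for the current amount of continuous hands-off time."""
    if hands_off_s < warn_after_s:
        return "ok"
    if hands_off_s < escalate_after_s:
        return "visual_warning"
    if hands_off_s < disengage_after_s:
        return "audible_warning"
    return "slow_and_disengage"

# With warn_after_s=4 the driver is prompted almost immediately; with
# warn_after_s=24 the same code tolerates nearly half a minute of inattention
# before the first prompt.
for t in (2, 5, 10, 15):
    print(t, monitor_step(t))
[/code]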

We've completed yet another argument cycle, so as you gave me the last word last round, you get the last word this one. I'm sure another round will start in the near future.

*One thing, I asked uptopic how it was possible for Brenner to engage A/P and be going 13 mph over the speed limit when A/P was supposed to have been modified to limit its use to no more than 5 mph over the speed limit. I never got an answer. ISTM that there are three possibilities, but this is one question where hands-on knowledge of current A/P is definitely valuable, and I lack that.

Anyway, can A/P be engaged even though it's traveling at a speed well above the speed limit + 5 mph, and it will then gradually slow to that speed? Given the short time span between engagement and Brenner's crash, that might explain how he was able to engage it and be going that fast at impact.

Or should it not have been possible to engage A/P while traveling so much over A/P's allowed speed (a far safer approach), but for some reason the system failed to work as designed?

Or has Tesla eliminated the speed limit + 5 mph limitation they added after Brown's crash, and I missed it?
 
GRA said:
Oils4AsphaultOnly said:
GRA said:
By the same token, Tesla screwed up with the lack of "pilot training" as well as the system design and testing, as most people are completely unaware of A/Ps capabilities and limitations, so the system should be designed to prevent them (to the extent possible) from operating outside its limits. You have far more interest in the subject than most customers, yet you've shown that 3 years after Brown's death you didn't understand that the problem in that accident wasn't the lack of a target, it was that Tesla's AEB system as well as all other AEB systems at that time (and at least Tesla's still, as Brenner's accident confirms) don't recognize a crossing target as a threat. Being aware of this limitation, Cadillac chose to prevent SuperCruise's use on roads where such occurrences were not only possible but common. Tesla, having chalked up one A/P-enabled customer death in that situation, chose to do nothing despite being able to change A/P to easily avoid the problem, and thus enabled a virtually identical customer death almost 3 years later. In your opinion, which company shows a greater concern for customer and public safety through design?

Boeing's failure to track down the problem in their SPS after the first occurrence (and the FAA's lack of urgency in forcing them to do so) is the same sort of casual attitude to putting customers at risk as Tesla showed, but Tesla's case is more egregious because they could make a simple, inexpensive change that would have prevented a re-occurrence. Instead, as well as pointless Easter Eggs they put their effort into developing NoA which was inadequately tested prior to initial customer deployment, unquestionably less safe than a human driver in some common situations, and the 'fix' which was rolled out some months later is just as bad if not worse.
You're conflating multiple incongruent issues again. AEB is crash mitigation, not avoidance. All the examples of why AEB didn't brake were in small-overlap type crashes, where the correct maneuver is a steering correction, not emergency braking. https://www.caranddriver.com/features/a24511826/safety-features-automatic-braking-system-tested-explained/

It has nothing to do with threat detection of a crossing vehicle (requires path prediction).
AEB systems can be capable of both crash avoidance and mitigation; avoidance is obviously preferred, mitigation is next best. For instance, CR from last November:
New Study Shows Automatic Braking Significantly Reduces Crashes and Injuries
https://www.consumerreports.org/aut...king-reduces-car-crashes-injuries-iihs-study/

General Motors vehicles with forward collision warning (FCW) and automatic emergency braking (AEB) saw a big drop in police-reported front-to-rear crashes when compared with the same cars without those systems, according to a new report by the Insurance Institute for Highway Safety (IIHS).

Those crashes dropped 43 percent, the IIHS found, and injuries in the same type of crashes fell 64 percent. . . .

These findings were in line with previous findings by the IIHS. In earlier studies involving Acura, Fiat Chrysler, Honda, Mercedes-Benz, Subaru and Volvo vehicles, it found that the combination of FCW and AEB reduced front-to-rear crash rates by 50 percent for all crashes, and 56 percent for the same crashes with injuries.
As to crossing vehicles requiring path prediction, no, that's not necessary, although it's certainly helpful. As I pointed out previously, NHTSA found the issue with current AEBs in that situation is not one of target detection, it's classification. Current AEB radar systems are told to ignore braking for large, flat zero-doppler objects because they can be nothing more than highway signs on overpasses or off to the side on curves (or overpass supports, FTM); a human would recognize what they are and not brake for them, but current AEB systems aren't that smart. The Mobileye EyeQ visual system in use by Tesla and others at the time also made use of a library of objects, and the library didn't contain side views of such objects (apparently because that was beyond the capabilities of the system at the time).

Oils4AsphaultOnly said:
A side skirt doesn't present any other permitted corrective action other than emergency braking. So yes, it would've triggered AEB. Your reference video (from when you last brought this up and I failed to address) isn't the same situation.
As pointed out just above and previously, the reason current AEB systems don't work for either crossing or stopped vehicles is the same, a classification rather than detection issue. Lack of side skirts for detection isn't the problem, teaching the AEB to classify a crossing vehicle as a threat instead of ignoring it as harmless is. Here's the product spec sheet for one such radar (note the vertical FoV, ample to pick up the entire side of a trailer and then some at detection distances): https://www.bosch-mobility-solution...t-data-sheet-mid-range-radar-sensor-(mrr).pdf

Tesla split with Mobileye back in 2016. There are now twice as many Teslas that don't use it for object classification as ones that do. Most Teslas currently use the Nvidia GPU and Tesla's own software to handle object detection AND classification, while everyone else relies on Mobileye. Although the root cause might still be the same, you can't rely on GM's results and past NHTSA findings to determine what flaw needs fixing in Teslas.

Going forward, thanks to the processing capabilities of their new "TPU", there will be a different software version that handles object detection and classification. Again, because it's not the same, results may vary, so its performance needs to be determined on its own.

GRA said:
Oils4AsphaultOnly said:
And just because you think Tesla has a simple fix doesn't make it a reality. GM's SuperCruise requires no high level logic other than, "is this road on my allowed map?", since GM geofences supercruise to ONLY mapped highways. Foul weather and construction zones are also excluded. You can inject human code into that situation, since it's a defined algorithm. You can't define your driving logic through a fixed algorithm if you want a car that can achieve full self-driving. That's why GM's supercruise will never advance past level 3 autonomy (can handle most well-defined traffic situations).
Are you suggesting that Teslas don't have the data to know which road they're on despite the lack of high-def digital mapping, when they can not only map out a route while choosing the type of roads to take and then follow that route, but also know the speed limit of the different sections of that route? That's ridiculous. But let's say that you're right, and A/P is incapable of doing that. Since limiting the system's use to those situations it is capable of dealing with, and preventing its use in those it can't handle, is obviously the safest approach, shouldn't any company be required to adopt that approach to minimize the risk to both its customers and the general public? You consider SuperCruise to be limited in where it can be used, and it is. To be specific, it's limited to ensure the safest possible performance, and I have no problem at all with that; indeed, I celebrate them for doing so, and wish Tesla acted likewise.

No, I'm saying you don't inject human code into a machine self-taught algorithm. You're thinking it's all procedural code (e.g if-then-else), when the code for the driving (steering, accelerating, braking) was most likely machine learned (the object detection/classification is definitely self-taught - that's what the entire autonomy day presentation was about).

For example, reading speed limit signs was a teaching task of presenting the machine with thousands of pictures of what a 35mph speed limit sign looked like, and the neural net would devise the code for interpreting the camera data (streams of RGB values on a point grid) to ferret out the speed limit. There is no image classification for "street" versus "highway" versus "freeway". So telling A/P not to engage on the road it's driving on entails oversight code that doesn't exist. NoA, on the other hand, does seem to have some sort of oversight code, because it actively warns about leaving the freeway and disengages itself near the head of an off-ramp. After which, A/P takes over to handle staying within the lane and keeping distance from the car ahead.
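To illustrate the distinction being drawn here between learned components and hand-written oversight code, a hypothetical Python sketch follows. The function names and map lookup are invented, and this is not a claim about how Tesla or GM actually structure their software:

[code]
# Illustration of the distinction: the perception and driving *policy* are
# learned black boxes, while "oversight" is ordinary hand-written code wrapped
# around them. Everything below is hypothetical pseudocode, not Tesla's
# implementation.

def learned_policy(camera_frames, radar_track):
    """Stand-in for the neural-net driving policy (steer/accel commands)."""
    return {"steer_deg": 0.3, "accel_mps2": 0.1}

def on_mapped_freeway(gps_fix, map_db) -> bool:
    """Stand-in for a SuperCruise-style map lookup: plain procedural code."""
    return map_db.get(gps_fix, {}).get("road_class") == "limited_access_freeway"

def drive_step(camera_frames, radar_track, gps_fix, map_db):
    # Oversight gate: written by hand, sits *outside* the learned policy.
    if not on_mapped_freeway(gps_fix, map_db):
        return {"action": "hand_back_to_driver"}
    return learned_policy(camera_frames, radar_track)

# With a map entry the gate lets the learned policy run; without one it refuses.
map_db = {"I-80_mm57": {"road_class": "limited_access_freeway"}}
print(drive_step(None, None, "I-80_mm57", map_db))
print(drive_step(None, None, "CA-1_mm12", map_db))
[/code]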


GRA said:
We've completed yet another argument cycle, so as you gave me the last word last round, you get the last word this one. I'm sure another round will start in the near future.

I've already said my piece. We disagreed. There's no last word to be had.

GRA said:
*One thing, I asked uptopic how it was possible for Brenner to engage A/P and be going 13 mph over the speed limit when A/P was supposed to have been modified to limit its use to no more than 5 mph over the speed limit. I never got an answer. ISTM that there are three possibilities, but this is one question where hands-on knowledge of current A/P is definitely valuable, and I lack that.

Anyway, can A/P be engaged even though it's traveling at a speed well above the speed limit + 5 mph, and it will then gradually slow to that speed? Given the short time span between engagement and Brenner's crash, that might explain how he was able to engage it and be going that fast at impact.

Or should it not have been possible to engage A/P while traveling so much over A/P's allowed speed (a far safer approach), but for some reason the system failed to work as designed?

Or has Tesla eliminated the speed limit + 5 mph limitation they added after Brown's crash, and I missed it?

The A/P speed limit default is set to the posted speed limit. The "default" can be adjusted from -20mph all the way up to +30mph over the posted speed limit via the menu. Its recommended setting is no more than +5mph over. I've never tried setting the default to anything above 5mph, so I don't know if A/P would even accept it. BUT, in addition to changing the default, the driver can also override the A/P speed AFTER A/P has been enabled. The driver is always ultimately in control and has final say over everything (even with Obstacle Acceleration Limit set). The absolute max A/P speed is 90mph (TX has some fairly high speed limits).

And yes, you can engage A/P at any speed all the way up to the absolute max of 90mph. A/P's job is to keep within the lane lines and within a relative space from the moving vehicle ahead. If you engage A/P, while exceeding the configured default speed (but still under 90mph), it will hold that speed until something changes it (posted speed limit change, traffic). If the A/P default speed limit configuration is set to absolute mode instead of relative, I believe it would NOT adjust the speed, but haven't confirmed. All of this _could_ change with the next A/P update, but not likely to.

Since they didn't say that Brenner raised the A/P speed limit, he was probably already traveling at 68mph when he engaged A/P and never adjusted it until the crash.
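Restating that described behavior as a Python sketch may help. This is only a paraphrase of the post above, not Tesla firmware, and the exact rules may well differ between software versions:

[code]
# Paraphrase of the behaviour described above as a sketch; not Tesla's code,
# and the exact rules may differ between firmware versions.

AP_MAX_MPH = 90  # absolute ceiling mentioned above

def ap_set_speed(posted_limit_mph: float, offset_mph: float,
                 speed_at_engage_mph: float) -> float:
    """Speed A/P targets right after engagement, per the described behaviour."""
    offset_mph = max(-20, min(30, offset_mph))          # configurable default range
    default_setpoint = min(posted_limit_mph + offset_mph, AP_MAX_MPH)
    # Engaging while already above the default: the car reportedly holds the
    # current speed (up to the 90 mph ceiling) until something changes it.
    return min(max(default_setpoint, speed_at_engage_mph), AP_MAX_MPH)

# The 68 mph / 13-over figures discussed above imply a 55 mph zone; with a
# +5 offset, engaging at 68 mph simply holds the pre-engagement speed.
print(ap_set_speed(55, 5, 68))   # -> 68.0
[/code]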
 
Oils4AsphaultOnly said:
No, I'm saying you don't inject human code into a machine self-taught algorithm. You're thinking it's all procedural code (e.g if-then-else), when the code for the driving (steering, accelerating, braking) was most likely machine learned (the object detection/classification is definitely self-taught - that's what the entire autonomy day presentation was about).

Yes, but an A/P oversight system running in parallel could disable A/P under certain conditions. Sorry Tesla, even level 5 (FSD) will need
an oversight/override system (fail-safe system).

Oils4AsphaultOnly said:
For example, reading speed limit signs was a teaching task of presenting the machine with thousands of pictures of what a 35mph speed limit sign looked like, and the neural net would devise the code for interpreting the camera data (streams of RGB values on a point grid) to ferret out the speed limit. There is no image classification for "street" versus "highway" versus "freeway". So telling A/P not to engage on the road it's driving on entails oversight code that doesn't exist. NoA, on the other hand, does seem to have some sort of oversight code, because it actively warns about leaving the freeway and disengages itself near the head of an off-ramp. After which, A/P takes over to handle staying within the lane and keeping distance from the car ahead.

A key ongoing Tesla A/P shortcoming, to the detriment of A/P users! Again, the arrogance/naivete of Tesla to not have resolved this
is incredible, given the so-called years of A/P "advancements". Another problem for the NTSB to address besides the Boeing 737
autopilot safety problem?
 
lorenfb said:
Oils4AsphaultOnly said:
No, I'm saying you don't inject human code into a machine self-taught algorithm. You're thinking it's all procedural code (e.g if-then-else), when the code for the driving (steering, accelerating, braking) was most likely machine learned (the object detection/classification is definitely self-taught - that's what the entire autonomy day presentation was about).

Yes, but an A/P oversight system running in parallel could disable A/P under certain conditions. Sorry Tesla, even level 5 (FSD) will need
an oversight/override system (fail-safe system).

That is flat-out suicidal. You don't hand the computer full control and then yank it away under "certain conditions". You can do that with ADAS, because the human driver is supposed to be in control, but in level 5, where the passengers can be asleep?! You are reckless with such an ignorant idea.

lorenfb said:
Oils4AsphaultOnly said:
For example, reading speed limit signs was a teaching task of presenting the machine with thousands of pictures of what a 35mph speed limit sign looked like, and the neural net would devise the code for interpreting the camera data (streams of RGB values on a point grid) to ferret out the speed limit. There is no image classification for "street" versus "highway" versus "freeway". So telling A/P not to engage on the road it's driving on entails oversight code that doesn't exist. NoA, on the other hand, does seem to have some sort of oversight code, because it actively warns about leaving the freeway and disengages itself near the head of an off-ramp. After which, A/P takes over to handle staying within the lane and keeping distance from the car ahead.

A key ongoing Tesla A/P shortcoming, to the detriment of A/P users! Again, the arrogance/naivete of Tesla to not have resolved this
is incredible, given the so-called years of A/P "advancements". Another problem for the NTSB to address besides the Boeing 737
autopilot safety problem?

I thought you were nauseous about the A/P discussion? Anyway, I'm going to ignore your rant, because I have nothing polite nor constructive to add.
 
I've reached the limits of my tolerance in this thread for the horsesh and bullsh from non-owners with no actual road knowledge of what they speak. I'll continue to thoroughly enjoy my Model 3 with EAP and FSD and relish the continued improvements over time. G'bye.
 
Oils4AsphaultOnly said:
No, I'm saying you don't inject human code into a machine self-taught algorithm. You're thinking it's all procedural code (e.g if-then-else), when the code for the driving (steering, accelerating, braking) was most likely machine learned (the object detection/classification is definitely self-taught - that's what the entire autonomy day presentation was about).
lorenfb said:
Yes, but an A/P oversight system running in parallel could disable A/P under certain conditions. Sorry Tesla, even level 5 (FSD) will need
an oversight/override system (fail-safe system).
Oils4AsphaultOnly said:
That is flat-out suicidal. You don't hand the computer full control and then yank it away under "certain conditions". You can do that with ADAS, because the human driver is supposed to be in control, but in level 5, where the passengers can be asleep?! You are reckless with such an ignorant idea.

Oh please!

Obviously you'd notify the driver and request the driver's involvement. The protocol/procedure can easily be defined, e.g. once a warning gets no response, have the vehicle slowly pull over and stop safely. Hardly a problem! Surely with your understanding of Tesla's A/P system, you could conceive of an integrated system design, right? If Tesla is incapable of being proactive about solving A/P's shortcomings and possible failure modes, then hopefully the NTSB will mandate a fail-safe system! That should be the case for any auto manufacturer that integrates an FSD system into its vehicles.
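For clarity, the escalation protocol being proposed reads roughly like the following Python state-machine sketch. It's purely an illustration of the proposal above, not a description of how any shipping system behaves:

[code]
# Sketch of the escalation protocol proposed above (warn, then if no response
# pull over and stop). Purely illustrative of the proposal, not a claim about
# how any shipping system behaves.

from enum import Enum, auto

class State(Enum):
    NORMAL = auto()
    WARNING = auto()        # system has detected a condition it can't handle
    TAKEOVER_WAIT = auto()  # driver has been asked to take over
    PULL_OVER = auto()      # no response: slow down, move to the shoulder, stop

def next_state(state: State, condition_detected: bool,
               driver_responded: bool, wait_expired: bool) -> State:
    if state is State.NORMAL and condition_detected:
        return State.WARNING
    if state is State.WARNING:
        return State.NORMAL if driver_responded else State.TAKEOVER_WAIT
    if state is State.TAKEOVER_WAIT:
        if driver_responded:
            return State.NORMAL
        return State.PULL_OVER if wait_expired else State.TAKEOVER_WAIT
    return state

# Unattended fault: NORMAL -> WARNING -> TAKEOVER_WAIT -> PULL_OVER
s = State.NORMAL
for step in [(True, False, False), (False, False, False), (False, False, True)]:
    s = next_state(s, *step)
    print(s)
[/code]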
 
lorenfb said:
Oh please!

Obviously you'd notify the driver and request the driver's involvement. The protocol/procedure can easily be defined, e.g. once a warning gets no response, have the vehicle slowly pull over and stop safely. Hardly a problem! Surely with your understanding of Tesla's A/P system, you could conceive of an integrated system design, right? If Tesla is incapable of being proactive about solving A/P's shortcomings and possible failure modes, then hopefully the NTSB will mandate a fail-safe system! That should be the case for any auto manufacturer that integrates an FSD system into its vehicles.

Oh please yourself. Educate yourself on what level 5 means first, then we can have a serious discussion (hint: what will the car do when there's no driver?). Your condescension is not appreciated.
 
Oils4AsphaultOnly said:
Oh please yourself. Educate yourself on what level 5 means first, then we can have a serious discussion (hint: what will the car do when there's no driver?). Your condescension is not appreciated.

You are aware that, presently, Tesla's machine-generated code is not the whole of the A/P function, i.e., there's a layer of written control code that executes based on the decisions of the neural network. This written code commands the various vehicle ECUs (motor, steering, ABS/traction control, etc.) and determines how the AI output (neural network) is actually implemented via those ECUs.
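
As a rough illustration of what such a written control layer could look like, here is a purely hypothetical sketch. Every class, field name, gain, and limit is invented; it is not Tesla's architecture or any real ECU interface. It only shows the general shape of hand-written code that clamps and translates a network's outputs into actuator commands, which is also the sort of place a conventional oversight or fail-safe check could live.

Code:
# Hypothetical sketch only: a hand-written control layer between a neural
# network's outputs and the vehicle ECUs. All names and limits are invented.
from dataclasses import dataclass

@dataclass
class PlannerOutput:
    """Stand-in for what a perception/planning network might emit."""
    target_speed_mps: float
    steering_angle_rad: float
    brake_request: bool

@dataclass
class EcuCommands:
    """Stand-in for low-level commands sent to motor/steering/brake ECUs."""
    motor_torque_nm: float
    steering_angle_rad: float
    brake_pressure_pct: float

MAX_TORQUE_NM = 300.0  # assumed actuator limit
MAX_STEER_RAD = 0.5    # assumed actuator limit

def control_layer(nn_out: PlannerOutput, current_speed_mps: float) -> EcuCommands:
    """Written (non-learned) code: clamp, arbitrate, and translate the NN output."""
    speed_error = nn_out.target_speed_mps - current_speed_mps
    torque = max(-MAX_TORQUE_NM, min(MAX_TORQUE_NM, 50.0 * speed_error))
    steer = max(-MAX_STEER_RAD, min(MAX_STEER_RAD, nn_out.steering_angle_rad))
    brake = 60.0 if nn_out.brake_request else 0.0
    return EcuCommands(motor_torque_nm=torque,
                       steering_angle_rad=steer,
                       brake_pressure_pct=brake)

if __name__ == "__main__":
    demo = PlannerOutput(target_speed_mps=25.0, steering_angle_rad=0.02,
                         brake_request=False)
    print(control_layer(demo, current_speed_mps=22.0))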

https://www.quora.com/What-programming-language-does-Tesla-write-Autopilot-in

The autopilot team in Tesla is a small but strong team of experts, who excel in computer vision and AI. The programming languages are most likely C or C++ / ASM. As I mentioned earlier, your core skills and previous experience matters. The knowledge of the language gives you an upperhand in the job and they matter in practice.

https://medium.com/self-driving-cars/c-vs-python-for-automotive-software-40211536a4ad

https://medium.com/@olley_io/what-software-do-autonomous-vehicle-engineers-use-part-1-2-275631071199

All participants involved in the manufacture and operation of autonomous vehicles agree that a set of universal standards must be developed to ensure the safety of passengers and other traffic participants. However, those discussions are still in the early stages and will likely take a rather long period of time to fully develop, which could directly delay the availability of SAE Level 4 or above autonomous vehicles.

https://medium.com/@miccowang/autonomous-driving-how-autonomous-and-when-ce08182cfaeb

Considering Tesla CEO Elon Musk has been recently saying that future self-driving capabilities will only work with a new computer to be released in the Autopilot Hardware 3.0 upgrade, this updated language makes it sound like Tesla is now installing the new computer in “new” cars.

We contacted Tesla to clarify the situation about Hardware 3.0 and will report back if we hear anything. As far as we can tell, it is still not shipping in new cars.

Furthermore, Tesla updated other parts of its Autopilot and self-driving capability language to walk back some features, like the potential for self-driving to work with upcoming automated charging station in order to charge without human intervention.

The automaker also removed any mention of the Tesla Network, the company’s self-driving ride-hailing network.

It also added the requirement to have driver supervision even with full self-driving capability.

https://electrek.co/2019/03/06/tesla-self-driving-language-walks-back-features-confusion/
 
lorenfb said:
You are aware that, presently, Tesla's machine-generated code is not the whole of the A/P function, i.e., there's a layer of written control code that executes based on the decisions of the neural network. This written code commands the various vehicle ECUs (motor, steering, ABS/traction control, etc.) and determines how the AI output (neural network) is actually implemented via those ECUs.

Why are you trying so hard?! You don't even know what you're piecing together. At least admit that your earlier suggestion was just a joke and leave it at that. When you're stuck in a hole, stop digging!
 
Oils4AsphaultOnly said:
Why are you trying so hard?! You don't even know what you're piecing together. At least admit that your earlier suggestion was just a joke and leave it at that. When you're stuck in a hole, stop digging!

Typical from you, it always comes to an ad hominem.
 
lorenfb said:
Typical from you, it always comes to an ad hominem.

Out of all my posts, and interactions with countless others, you're the only one that I treat so poorly. I wonder why?!
 
Oils4AsphaultOnly said:
There is no image classification for "street" versus "highway" versus "freeway", so telling A/P not to engage on the road it's driving on would require oversight code that doesn't exist.
There is no image classification to determine road type, but AP does distinguish between them nonetheless and enforces different rules/settings based on the type of road the car is on. I suspect this is still done with map data (a rough sketch of that kind of map-based gating follows at the end of this post).

Two examples:

Today, when on a highway you can set the AP speed limit as high as you want. On a secondary road you are limited to 5 mph above the speed limit from the map data (AFAIK, AP2 in my car still does not seem to be using the speed limit read from signs).

Prior to an early 9.0 release, my AP2 car would only allow auto lane change on limited-access highways. A local state route with a speed limit of 55 mph but with intersections and traffic lights would be treated as a highway for the speed limit, but was prevented from doing a lane change.


One thing that's remarkable is that just 2 years ago I got the first OTA update that allowed auto-steer at highway speeds (over 55 mph). AP2 has vastly improved from where it was when I first got the car. E.g., I remember reading this: https://electrek.co/2017/03/08/tesla-autopilot-2-0-speed-limit-update/
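
For illustration only, the behavior described above is easy to imagine as plain map-data lookups layered on top of the driving stack. Everything in the sketch (the road classes, the +5 mph offset, the function names) is an assumption drawn solely from the behavior reported in this post, not from anything Tesla has published.

Code:
# Hypothetical sketch only: gating AP features by road type from map data,
# in the spirit of the speed-cap and lane-change rules described above.
import enum

class RoadType(enum.Enum):
    LIMITED_ACCESS_HIGHWAY = "limited_access_highway"
    DIVIDED_HIGHWAY = "divided_highway"
    SECONDARY_ROAD = "secondary_road"

SECONDARY_ROAD_OFFSET_MPH = 5  # assumed cap above the mapped speed limit

def max_set_speed(road: RoadType, map_speed_limit_mph: int, driver_request_mph: int) -> int:
    """Cap the driver's requested set speed based on road type from map data."""
    if road is RoadType.SECONDARY_ROAD:
        return min(driver_request_mph, map_speed_limit_mph + SECONDARY_ROAD_OFFSET_MPH)
    return driver_request_mph  # highway-class roads: no offset cap in this sketch

def auto_lane_change_allowed(road: RoadType) -> bool:
    """Allow automatic lane changes only on limited-access highways."""
    return road is RoadType.LIMITED_ACCESS_HIGHWAY

if __name__ == "__main__":
    # A 55 mph state route with intersections, per the example above: treated
    # as a highway for the speed cap, but auto lane change stays disabled.
    print(max_set_speed(RoadType.DIVIDED_HIGHWAY, 55, 75))      # -> 75 (no cap)
    print(auto_lane_change_allowed(RoadType.DIVIDED_HIGHWAY))   # -> False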
 
SalisburySam said:
from non-owners with no actual road knowledge of what they speak.
This thread often makes me think of: https://en.wikipedia.org/wiki/Blind_men_and_an_elephant
In some versions, they come to suspect that the other person is dishonest and they come to blows. The moral of the parable is that humans have a tendency to claim absolute truth based on their limited, subjective experience as they ignore other people's limited, subjective experiences which may be equally true.
 
jlv said:
There is no image classification to determine road type, but AP does distinguish between them nonetheless and enforces different rules/settings based on the type of road the car is on. …

Wasn't aware of this. Good to know. As A/P isn't sanctioned for surface streets yet (and with no rural highways nearby), I've never tried to enable it there.
 
Gotta love when guys like this https://teslamotorsclub.com/tmc/threads/autopilot-bobbing-side-to-side.158602/ are filming themselves (presumably holding a phone or a camera with one hand, judging by the motion) using autopilot on either an S or X going 90 mph in Florida!

Looks like the highest speed limit in Florida per https://www.fdot.gov/traffic/faqs/speedlimitfaq.shtm is only 70 mph.
 