Tesla's autopilot, on the road

GRA said:
johnlocke said:
While I agree that ACC combined with lane keeping isn't a good idea, that wasn't what I was suggesting. Just ACC, or even just anti-collision braking. While avoiding a stopped vehicle after the car in front swerves out of the way is a problem, at least ACC would reduce the impact if not avoid it entirely. I suspect that ACC would have better reaction times than a human driver. A far more likely scenario is stop-and-go freeway driving, where ACC is a real boon and even anti-collision braking would help.

While DAS-equipped cars have hit stationary objects, so have ordinary drivers. Whenever a Tesla hits something it seems to be a big deal, yet police, firemen, and ambulance drivers regularly get sideswiped by ordinary drivers who ought to be watching out for them but don't. While it has been said that good is the enemy of perfection, it's also true that good is better than nothing.


Yes, humans have hit stopped vehicles, but the issue is two-fold. When a human hits a stopped vehicle, there's no question where the responsibility lies absent mechanical failure; throw in automated systems that suggest but don't ensure that the human remains in control, and the responsibility is muddied. Tesla, for one, exploits this with a 'heads we win' ("AP prevented a collision"), 'tails you lose' ("if the car collides with something while on A/P, it's solely the driver's fault") attitude.

The other issue is whether ACC/AEB actually reduces the number of collisions of this particular type; I'm not aware of any statistics either way. More important to me is that we establish responsibility on both sides, which is why I'm against the use of any DAS system until such time as the manufacturer is willing to accept full legal responsibility for any accident caused while the car is driving itself.
For the first case, pass a law saying that if you're in the car, it is your responsibility. If the car is not occupied, then the manufacturer is liable.

I'm not sure how you would go about proving AEB is actually safer without a lot more cars equipped with the feature; it's hard to prove a negative result (no accident) without reams of data. The IIHS estimates that if all manufacturers installed AEB, it could prevent 42,000 crashes and 20,000 injuries by 2025, so they must have some data to back that up. Twelve major auto manufacturers are now equipping their cars with AEB or ACC, so someone thinks it works.

It is clear that some Teslas have prevented an accident with AEB; you can see the YouTube videos. It's also clear that Tesla's system isn't perfect. If you are worried about assigning responsibility, pass a law to make one party or the other responsible as you see fit. Nothing improves without some trial and error. Some prodding along the way doesn't hurt either. I bet that if you made owners 100% responsible under all conditions, you could hear them screaming at the manufacturers quite clearly.
 
johnlocke said:
For the first case, pass a law saying that if you're in the car, it is your responsibility. … I bet that if you made owners 100% responsible under all conditions, you could hear them screaming at the manufacturers quite clearly.


We're not talking about AEB, we're talking about ACC. The IIHS data is clear that AEB has reduced accidents, which is why virtually all the manufacturers agreed to install it on every car by 2023 in the U.S., and it's apparently running well ahead of schedule: https://www.nhtsa.gov/press-releases/nhtsa-announces-2020-update-aeb-installation-20-automakers

As I wrote before, I'm completely in favor of making AEB mandatory, because it provides an additional layer of safety on top of the driver - there is no doubt at all who's got the ultimate responsibility, nor is there any incentive for drivers to treat it as primary and trust their lives to it. Calling it "Automatic EMERGENCY Braking" may have a lot to do with that, as the name makes it clear that it's a last resort.

OTOH, current IIHS data for ACC paints a very muddled picture:
Adaptive cruise control spurs drivers to speed

https://www.iihs.org/news/detail/adaptive-cruise-control-spurs-drivers-to-speed


Drivers are using adaptive cruise control (ACC) as a tool for speeding, possibly undermining the feature’s potential safety benefits, a new study from the Insurance Institute for Highway Safety found.

Drivers are substantially more likely to speed when using ACC or partial automation that combines that feature with lane centering than when not using either technology, the study showed. When selecting a speed to “set and forget,” many drivers choose one that’s over the limit.

“ACC does have some safety benefits, but it’s important to consider how drivers might cancel out these benefits by misusing the system,” says IIHS Statistician Sam Monfort, the lead author of the paper. “Speed at impact is among the most important factors in whether or not a crash turns out to be fatal. . . .”

The systems on the market today don’t restrict drivers from setting speeds that are higher than the legal limit, and they require constant supervision by the driver because they’re not capable of handling certain common road features and driving scenarios.

Nevertheless, an analysis of insurance claims data by the IIHS-affiliated Highway Loss Data Institute and other research indicate that ACC may lower crash risk. Other studies have shown that these systems maintain a greater following distance at their default settings than most human drivers and suggested that they reduce the frequency of passing and other lane changes.

To find out the impact ACC and lane centering technologies have on speeding, IIHS researchers analyzed the behavior of 40 drivers from the Boston metro area over a four-week period using data collected by the Massachusetts Institute of Technology’s Advanced Vehicle Technology Consortium. These drivers were provided with a 2016 Land Rover Range Rover Evoque outfitted with ACC or with a 2017 Volvo S90 equipped with ACC and Pilot Assist — a partial automation system that combines ACC with lane centering. The data suggest that drivers were 24 percent more likely to drive over the speed limit on limited-access highways when those systems were turned on. The amount by which they exceeded the speed limit when they did speed was also greater when they were using the driver assistance features compared with driving manually.

Whether driving manually or using ACC or Pilot Assist, speeders exceeded the limit by the largest margin in zones with a 55 mph limit. In these areas, speeders averaged about 8 mph over the limit, compared with 5 mph in 60 mph and 65 mph zones. ACC also had the largest impact on how much they exceeded the limit in zones where it was 55 mph. In these slower zones, they averaged a little more than 1 mph higher over the limit when using ACC or Pilot Assist than they did driving manually.

That 1 mph increase may not sound like much. Leaving aside any other effect these features may have on crash risk, however, it means ACC and partial automation users are at about 10 percent higher risk of a fatal crash, according to a common formula for calculating probable crash outcomes. This study did not analyze real-world crashes.

The study did not account for several other factors that have been shown to reduce crash frequency and severity. For instance, it’s possible that drivers who set their systems at higher speeds also selected a greater following distance. ACC systems are also designed to respond sooner and less abruptly than human drivers when the vehicle ahead slows down.

Future research will need to balance these benefits against the effects of excess speeding to fully understand the technology’s impact on safety. Making systems more restrictive might be the answer, provided that limiting the maximum speed or linking it to posted limits doesn’t discourage risky drivers from using ACC altogether.

Both of the tested systems also allow drivers to bump their selected speed up or down by 5 mph increments at the touch of a button, which might at least partially explain why users exceeded the legal limit by larger amounts when they had the feature switched on.
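The article doesn't name the formula, but the best-known one is Nilsson's power model, where fatal crash risk scales roughly with the fourth power of travel speed. A minimal sketch of that calculation, assuming the power model with exponent 4 and the speeds from the study (the exact formula and exponent the IIHS used are my assumptions, not stated in the article):

[code]
# Nilsson's power model (assumed): relative fatal-crash risk when
# mean travel speed changes from v_old to v_new (same units).
def fatal_risk_ratio(v_new: float, v_old: float, exponent: float = 4.0) -> float:
    return (v_new / v_old) ** exponent

# Speeders in 55 mph zones averaged ~8 mph over the limit (63 mph);
# ACC/Pilot Assist users ran roughly 1 mph higher still (64 mph).
ratio = fatal_risk_ratio(64.0, 63.0)
print(f"relative fatal-crash risk: {ratio:.3f} (+{(ratio - 1) * 100:.1f}%)")
# Exponent 4 gives about +6.5%; the ~10% in the article implies a
# somewhat different exponent or baseline than assumed here.
[/code]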


Now, allowing an ACC to set speeds significantly above the speed limit is clearly dangerous; ~5 mph over allows cars using cruise control to keep up with the human flow of traffic without being far faster than it. OTOH, Tesla for one has gone back and forth about allowing drivers to set any speed they want, even when using A/P or FSD and the car knows what the speed limit is, which is one of the examples of NHTSA simply not doing its job for years.
 
My experience with cruise control is that you tend to set it slightly higher than you would drive manually, simply to avoid being passed on both the left and right by other drivers. Driving in San Diego at 5 mph over the limit gets you passed regularly; LA is even worse. If I'm going to use cruise control, I want to sit in a middle lane and not have to change lanes very often. 7-8 mph over the limit is just about right, and I still get passed by a lot of cars. By the way, all the cruise control systems I've ever used allowed 1 mph increments.

If I'm driving manually, I might be a tad slower, but I probably bounce up and down a lot more as I adjust to traffic flow. Not sure if either way is particularly safer.
 
I have no problem with cruise controls being able to be set somewhat over the speed limit on freeways and multi-lane divided highways, for the reason stated. As long as we've got to interact with human drivers who cruise over the speed limit, it's safer to pace them than not. OTOH, there's no excuse for allowing smarter systems to speed on surface streets, which many systems do.
 
Received a push notification about this:
https://twitter.com/9NewsMelb/status/1525023806284234752?s=20
"9News Melbourne
@9NewsMelb
JUST IN: A Tesla on autopilot has crashed over a barrier in Docklands, holding up peak hour traffic.
@reid_butler9 #9News".

There's a brief video report at that tweet.
 
Tesla Autopilot Max Speed Increases To 85 MPH With Tesla Vision

An over-the-air software update will increase the max speed of Tesla-vision-based Autosteer from 80 to 85 mph.

https://insideevs.com/news/586679/tesla-autopilt/


A year ago, Tesla made the switch to its newer Tesla Vision camera-based advanced driver-assist systems. Previously, Tesla's cars also featured radar, which the automaker has since removed. Amid the switch to Tesla Vision, the top speed for Autopilot's Autosteer feature was reduced to 75 mph.

When Tesla first switched to the vision-only approach, its vehicles temporarily lost their top safety ratings, as well as an overall recommendation from some publications, giving regulators and reviewers time to test the vehicles with the updated system. In the end, the top safety ratings and various recommendations were reinstated once the technology was shown to function properly without the radar.

Tesla remained cautious, however, and didn't quickly raise the maximum Autosteer speed back up to 90 mph, which was the top speed for "traditional" Autopilot with radar. It did raise the limit on vision-only cars from the initial 75 mph to 80 mph fairly soon, but since then there haven't been any maximum speed increases.

A few days ago, Tesla CEO Elon Musk responded to a tweet confirming that an upcoming update would, in fact, increase the speed of Autopilot. Musk didn't specify the top speed, though.

More recently, Electrek discovered an update on Tesla's website, to which the publication credited Artem Russakovskii (@ArtemR) due to the tweet below:

Tesla is finally bumping the max speed of Tesla Vision-based Autosteer from 80mph to 85mph.

On pre-Vision cars, this max is 90mph, so while the jump from 80 to 85 is much appreciated, it's not full parity yet. . . .


. . . Autosteer "will be limited to a maximum speed of 85 mph and a longer minimum following distance." The post also notes that Tesla will be "restoring" the features via multiple over-the-air software updates during the "weeks ahead." To be clear, the features may not be restored right away, and even if you take delivery of a brand-new Tesla, it may not yet have the updates. . . .
 
Andrej Karpathy has left Tesla. See below. He was on sabbatical, but I didn't follow the details of that.
https://twitter.com/karpathy/status/1547332300186066944
 
This is an interesting owner review of Tesla's "self driving" technology. Just watch how many times the driver has to intervene to prevent dangerously inappropriate behavior by the system. Performance at this level should be quietly pursued in a research setting, not by customers on actual roadways, by any means.

[youtube]https://www.youtube.com/watch?v=ZLBR39RcyiU[/youtube]
 
With my software development hat on: we used to use QSM to help build our effort and resource estimates. One large factor in effort/cost was industry. Business software was the cheapest/easiest, and planes/aircraft were among the most expensive, given the difference in fault tolerance. (A web page crashes... oh well; a flight system crashes... not good.)

I often wonder if Tesla is trying to apply a business-oriented level of fault tolerance to its Autopilot development, when something closer to an aircraft-system level is more appropriate given the risks. It might sink their business model, but the brute-force find-and-fix method it's going through is starting to wear thin with the public.
 
I totally agree.

IMO an avionics-level software reliability standard would be the minimum requirement for any autopilot code in a car. And I also think that the software for driving is a harder problem than avionics, since in the air there is less traffic, no pedestrians, and no other drivers doing random stuff that they shouldn't be doing, etc.

Maybe warfare would come close, since then there is someone actively trying to create havoc, but in general a plane flying through the sky has fewer things to worry about than a car on the highway, IMHO. Being in 3D instead of 2D is a trivial concern compared to all the various factors a car on the road needs to consider.

The big IF is how AI compares to something like the DO-178B standard. I don't see any auto manufacturer pursuing the latter, and I assume they are all doing the former. Mr. Musk seems to think that AI will soon reach the 'singularity point' and take over the world. Of course, he also seems to think people are going to live on Mars in the near future. Time will tell.
 
goldbrick said:
I totally agree. IMO an avionics-level software reliability standard would be the minimum requirement for any autopilot code in a car. …

Avionics-level software reliability? Like on the 737 MAX, which crashed two planes? Or like how the controls have three levels of redundancy built in, because fault conditions are assumed?
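To illustrate what that redundancy buys: a classic avionics pattern is triple modular redundancy, where three independent channels compute the same value and a 2-out-of-3 voter discards the one that disagrees. A toy sketch, with the channel readings and tolerance invented for illustration, not taken from any real flight system:

[code]
# Toy 2-out-of-3 voter: one faulty channel is outvoted by the
# two that still agree within `tol`.
def tmr_vote(a: float, b: float, c: float, tol: float = 0.5) -> float:
    for x, y in [(a, b), (a, c), (b, c)]:
        if abs(x - y) <= tol:
            return (x + y) / 2
    raise RuntimeError("no two channels agree - treat as total failure")

# Channel b has failed high; the a/c pair agrees and wins the vote.
print(tmr_vote(101.2, 250.0, 101.0))  # -> 101.1
[/code]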

SpaceX is another hardware company with software built around the same philosophy of "fail quickly." And they've proven that their system is more reliable and robust, even though they've had a TON of failures (the first 3 rockets all failed, same with the first half-dozen landing attempts). Not to mention all the failed attempts to capture a payload fairing by boat, which ultimately led to them scrapping the idea in favor of just making the fairings sea-water resistant. It's not the development method that's at issue, it's the results (the Boeing Starliner still has issues despite being years late). The agile method works; it's just a question of how many iterations it takes to get to a reliable system.

Risk-averse methodologies don't lead to risk-free solutions, only to solutions with hidden risks (cue the 737 MAX). When you're developing mature tech like roads, bridges, and motors, you can methodically cover all your bases. But when you're inventing something new, it's hubris to believe that you can know all the unknowns and "plan" for them. Relying on an "industry" standard for an immature industry is that kind of hubris. I've been a computer systems engineer for over 2 decades, and I've seen all sorts of "industry leading standards", and they were all marketing baloney. Standards only evolve AFTER something has been proven robust and reliable, NOT the other way around.

Looking at a tslaq site, tesladeaths.com (for the sake of disparaging Tesla, it's a site dedicated to recording all the vehicle deaths involving a Tesla, both with and without Autopilot), it's pretty clear that the rate of deaths isn't growing at the same rate as the miles driven (which are increasing at least 50% every year). Your life is NOT being beta-tested against any more than it already is in someone else's (a drunk driver's) hands.
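The arithmetic behind that claim, with deliberately made-up numbers just to show the shape of the argument: if fleet miles grow 50% a year while deaths grow more slowly, the per-mile death rate falls.

[code]
# Hypothetical figures purely to illustrate the rate argument -
# not actual tesladeaths.com or Tesla fleet data.
miles = [10e9, 15e9, 22.5e9]   # fleet miles growing 50%/year
deaths = [50, 60, 70]          # deaths growing more slowly
for year, (m, d) in enumerate(zip(miles, deaths)):
    print(f"year {year}: {d / (m / 1e9):.1f} deaths per billion miles")
# year 0: 5.0, year 1: 4.0, year 2: 3.1 -> the per-mile rate falls
[/code]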
 
Please be aware that currently the only people allowed to beta-test FSD are people with safety scores above 91 (my buddy has a score of 91 and he hasn't been permitted access to FSD yet). Those people are well aware that they will be held liable for any accidents involving FSD. In that situation, do you really think you're going to be hit by a rogue FSD car because you're now an involuntary test subject? As long as FSD is still considered an L2 ADAS, you're at more risk from your neighbors after a night out than from FSD. Same with Autopilot, which is already known to be only L2 ... [Edit] ... because the owners of the car (assuming they're sober) don't want the headache of an accident. And if an accident happens because the driver wasn't paying attention, or applied Autopilot where it shouldn't be used, then imagine how bad a driver they must be to exercise such poor judgment, and how many other accidents they might've caused in the past.
 
Tesla to increase cost of FSD beta software beyond $12,000
https://techcrunch.com/2022/07/20/tesla-to-increase-cost-of-fsd-beta-software-beyond-its-12000-price-tag/
"Musk did not provide guidance as to how much the price will increase, but he did say it is currently “ridiculously cheap.”"

LOL! Gotta love it! Get people to part with $12K to become unpaid beta testers, providing Tesla data for free AND shielding it from liability by hiding behind the "beta" label. And it's "ridiculously cheap" at $12K?

Yes, $12K for him is pocket change.
 
IMHO, the breakthrough will come when we get the following developments:

1) Move away from human limitations... Tesla is adamant about keeping its FSD based on visual input. At best this will get you human-equivalent capability, and even then humans can and regularly do look ahead to see problems (hard-braking situations coming up, say, half a mile ahead), whereas vision-based FSD can only look at the car ahead. We should be looking at using all possible input/sensor technology to go beyond human abilities.

2) An industry standards body for FSD needs to emerge that gets companies to collaborate, so that there can be common standards that all co-develop and use. Commercial success for one company does not equal massively adoptable technology that all will agree on and adopt. Tying in with the above point, one example could be all FSD cars (regardless of maker) talking to each other within a radius of x miles, sharing road conditions, speed, braking status, etc.; see the sketch after this list. This could enable FSD cars to drive proactively, similar to how a human looks ahead and slows down when he/she sees a barrage of red lights half a mile ahead.

3) Building on the above, we need a governmental standards body that drives the evolution of road signage, road design, etc. to improve interaction with FSD cars. An example could be simple things like RFID-embedded road lane separators (those things that have light reflectors on them) so FSD cars can detect where the lanes are even when there is snow on the road. I can't even imagine how the roads and supporting tech will evolve, but clearly they must go beyond what they are today, which has not changed much since the horse-drawn cart era.

4) Last but not least, we need a governance body that drives regulatory change on a consistent, Federal basis. Currently each state is on its own (afaik) with FSD, and this will be a rate limiter. Liability laws, insurance coverage, and accountability will all be courtroom-tested for some time and will need to settle into a clear and consistent set of rules and regulations before FSD can become mainstream. This effort will eventually need to be globalized for FSD acceptability to penetrate all corners of the world.

I think all of the above will take more time than we would love to see, and is not achievable by the heroics of any single company. Until some of these fundamentals are addressed, the best we can expect is a super-autopilot, but nothing anywhere close to true FSD. Happy to be proven wrong in due course.
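A back-of-the-envelope sketch of what the car-to-car sharing in point 2 might look like: each car broadcasts its position, speed, and braking status, and a listener slows down when a car within some radius reports hard braking. The message fields, radius, and distance math are all invented for illustration; no real V2V protocol is implied.

[code]
import json, math

# Hypothetical beacon message - the fields are invented for illustration.
def make_beacon(car_id, lat, lon, speed_mph, hard_braking):
    return json.dumps({"id": car_id, "lat": lat, "lon": lon,
                       "speed": speed_mph, "braking": hard_braking})

def should_slow_down(my_lat, my_lon, beacons, radius_miles=0.5):
    """True if any beacon within `radius_miles` reports hard braking."""
    for raw in beacons:
        msg = json.loads(raw)
        # Rough flat-earth distance (miles per degree at ~34 N);
        # good enough at half-mile scales.
        dist = math.hypot((msg["lat"] - my_lat) * 69.0,
                          (msg["lon"] - my_lon) * 57.2)
        if msg["braking"] and dist <= radius_miles:
            return True
    return False

beacons = [make_beacon("car42", 34.055, -118.25, 12.0, hard_braking=True)]
print(should_slow_down(34.050, -118.25, beacons))  # True: braking ~0.35 mi away
[/code]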
 
OldManCan said:
IMHO, the breakthrough will come when we get the following developments: …

Good luck with this. No well-intentioned government regulation has ever survived committee review intact or on time.

The roads and rules were developed over a century with human drivers and human vision in mind. Why is it so wrong to expect autonomous driving systems to be able to handle them? It would be crazy expensive and time-consuming to embed transmitters and sensors into every drivable surface just to make it "easier" for car companies to develop their autonomous tech.

Google took over a decade to map most of the major streets in the country. The Federal government is running a deficit, and you want us to spend more money, without any ROI for at least a decade, just so that autonomous tech companies can save R&D money? This is how corporate welfare grows.
 
Oils4AsphaultOnly said:
Please be aware that currently, the only people allowed to beta-test FSD are people with safety scores above 91 (my buddy has a score of 91 and he hasn't been permitted access to FSD yet). Those people are well aware that they will be held liable for any accidents involving FSD. In that situation, do you really think you're going to be hit by a rogue FSD car because you're now an involuntary test subject? ...

Yes.
 
Oils4AsphaultOnly said:
Good luck with this. No well-intentioned government regulation has ever survived committee review intact or on time. …

Thanks much for your counter-argument. Well received.

I agree these are hard developments that will take a long time, and that's why I am not expecting to see Level 5 self-driving anytime soon.

If you were to review my thoughts once again, you may note that the government spending piece is only one part of it. Living in CA, like the rest of the warm-climate dwellers of the world, we might be able to enjoy vision-based FSD one day. However, the world is big, and there is a huge segment of the population who have to drive in snow or other adverse conditions, making visual driving challenging at best. Selling these people FSD cars which they can only use for some portion of the year is not going to fly.

Please note my remarks are all about Level 5 autonomy, where there is no driver in the car and everyone is a passenger. If this is the end goal, then we do have some government spending required down the road, but as you also suggest, the main responsibility is on the autonomous tech companies to define and develop the capabilities and standards to allow Level 5 to become possible. Any change to road infrastructure is likely going to take many, many years (if it happens at all). I'm not suggesting we rip out a billion miles of existing roads and start over.

On a related note, this is probably why we may see Level 5 personal flying machines before we can even agree on how a Level 5 car will operate... :D
 
OldManCan said:
… On a related note, this is probably why we may see Level 5 personal flying machines before we can even agree on how a Level 5 car will operate... :D

Adverse conditions aren't impossible to navigate with vision alone; millions of humans do it annually. Some conditions are just too treacherous for even human drivers, and the cars should pull over and wait for the storm to pass, as should the humans.

But yes, if battery energy density improves quickly enough, autonomous flying vehicles will arrive before cars.
 
Oils4AsphaultOnly said:
Adverse conditions aren't impossible to navigate with vision alone; millions of humans do it annually. …

Agreed on the need to pull over, but we all know it doesn't happen as much as it should.

Let's put aside the road infrastructure cost for a second: why would we not want to supplement vision with other sensor capabilities (i.e., LIDAR or future tech not yet here)? If we have the ability to add beyond-human capabilities to make the experience safer, I'd be all for it.
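To make that concrete: the simplest form of supplementing vision is fusing two independent distance estimates weighted by their confidence, so the combined estimate beats either sensor alone. A toy inverse-variance weighting sketch; the sensor noise figures are made up.

[code]
# Toy inverse-variance fusion of two independent range estimates.
# The noise figures are invented; real sensor fusion is far richer.
def fuse(est_a, var_a, est_b, var_b):
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)

# Camera in fog: 50 m +/- 5 m.  Lidar: 47 m +/- 0.5 m.
dist, var = fuse(50.0, 5.0**2, 47.0, 0.5**2)
print(f"{dist:.1f} m, sigma {var**0.5:.2f} m")  # ~47.0 m: trusts the lidar
[/code]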
 