GRA wrote: In short, AP should be limited in function and designed to operate like Cadillac's Super Cruise.
I hope it never gets gutted like that. I use AP as it was designed to be used: "Always keep your hands on the wheel / Be prepared to take over at any time." Don't look away. Don't use an orange to defeat the "are you holding the wheel" checks. It isn't self-driving; don't get a false sense of security about it.
Great, now we just need to convince the rest of Tesla's customers to behave the same way, and since that's not going to happen, the correct way to deal with the issue is to prohibit its use in situations it isn't capable of handling. See the bolded text below.
Which ignores why their opinion of AP changed. As they and others used it more, they found that it was allowed to do things it was incapable of doing safely. From July 14, 2016:
Tesla's Autopilot: Too Much Autonomy Too Soon
Consumer Reports calls for Tesla to disable hands-free operation until its system can be made safer
From September 12, 2017:
NTSB Puts Partial Blame on Tesla and Autopilot in Fatal Model S Crash
Safety board’s recommendations could prod automakers to lock out driver-assist features in certain situations
“Tesla allowed the driver to use the system outside of the environment for which it was designed, and the system gave far too much leeway to the driver to divert his attention to something other than driving,” said Robert Sumwalt, the board's chairman. “The result was a collision that should not have happened.”
CR didn't "turn on" Tesla, as that implies some bias against the company as a whole. They altered their opinion of AP for very specific safety reasons. If AP only put its own users at risk that could be acceptable, provided they were given a full briefing on just what it could and couldn't do and then signed their lives away, and were also required to give the same briefing to any of their passengers and get their signatures as well. But that ignores the other road users who have given no such consent to be used as human guinea pigs (and potentially human crash-test dummies) for AP, such as the oncoming semi driver and any vehicles following him when the Model 3 in the Edmunds test darted across a double yellow line on an undivided, undulating highway. The driver corrected it before it could cross the other double yellow into the oncoming lane, but then they were specifically testing AP's capabilities and watching it like a hawk, rather than an owner out for a routine drive whose attention is more likely to wander.
It's unacceptable to use the public for beta tests where the penalty for failure isn't at most a "Blue Screen of Death" but potentially real death, and I can only hope that NHTSA will finally get off their ass and tighten the regs. Or, if Tesla's dumb enough to actually take the Walter Huang case to court instead of quietly settling with the family, as they almost certainly did with Joshua Brown's, they'll get their heads handed to them with a large public judgment and all the negative PR that will follow, and will have no choice but to change their policy.
It should be changed regardless, because taking stupid risks like this may retard the development and deployment of AVs as a whole, and that really would be a tragedy. What's needed is to proceed with AVs with all deliberate speed, not to push immature tech out early and accept the deaths and injuries. Some of those will inevitably happen with AVs in any event, and keeping the public on board will be difficult enough despite any decrease in accident rates. Far more restrictive regulations will result than would be the case if Tesla (and any other company so inclined) were to act more responsibly on their own, as Cadillac and, I believe, most companies introducing various levels of autonomy have done.
As for me, I'll wait for true L4 capability before I'm willing to trust my life to any autonomous driving system, because the "hand-off" from machine to human in an emergency is the most dangerous moment of any semi-autonomous one.