I didn't really have a problem with Tesla or Autopilot's latest issues until I re-read this sentence:
>Autopilot was not operating as designed and as described to users: specifically, as a driver assistance system that maintains a vehicle's position in lane and adjusts the vehicle's speed to match surrounding traffic.
My problem is with Autopilot's branding - it's called AUTOPILOT.
The name isn't "Maintain Lane Position" or "Cruise Distance" or something boring that describes it better - it has AUTO in the name.
Typical drivers aren't airline pilots who complete thousands of hours in flight training and have heavily regulated schedules. We're just people who are busy and looking for solutions to our problems.
If Tesla doesn't want people to think Autopilot functions as crash avoidance/smart vehicle control/better than humans in all situations, or to blame Tesla for accidents (whether human or machine is at fault), it should come up with a less sexy name.
Isn't the plane's auto-pilot pretty much a pilot assistance system designed to keep the plane at the specified altitude and follow a straight line (heading bug on older systems, GPS coordinates in the flight plan on modern ones)?
It's not designed for collision avoidance, runway taxiing, emergency situations, short/soft field landings or departures. It's occasionally used for normal landings (according to https://www.quora.com/How-often-are-commercial-flights-lande...) but it doesn't seem prevalent.
I think an important difference is the necessary reaction time. Situations in which a plane flying at altitude under automatic control suddenly requires a sub-second reaction from the pilots are extremely rare; in road traffic they happen far more often.
Obstacles don't suddenly pop up in mid-air, and a lot of infrastructure makes sure other traffic is nicely separated at all times.
Thus a plane actually can do most of a flight automatically, and it's okay if it has to fall back on not-fully-attentive humans in edge cases, because there is some time for error correction.
The car equivalent might be highways with lanes separated by walls and cars that could detect obstacles a few hundred meters away; then taking your hands off the wheel wouldn't be an issue. On real-world streets, you can't be as hands-off as you could be in a plane at 35,000 feet.
"The Avionic Flight Control System (AFCS) provides manual or automatic modes of control throughout the total flight envelope from take-off to landing."
Lockheed L-1011, 1972. Flight trials led to the demonstration of a fully automated transcontinental flight, from rest to rest. The pilots did not touch the controls.
It was incidentally also the only airliner certified for operational Cat IIIC autoland, with zero visibility. This was frequently used at London Heathrow, but a ground-control radar was needed to guide the pilots to the gate once the aircraft had stopped itself on the runway.
Aircraft autopilots are technically capable of completely controlling the flight but are restricted from doing so by technical provision (e.g. the lack of a rearward-facing ILS/MLS for departure) or regulatory caution (e.g. not executing TCAS collision-resolution maneuvers automatically, even though every FBW Airbus can do this).
The technology exists, but there's reluctance to give it too much authority. That's changing, especially in the military.
Full-authority automated ground collision avoidance is now on many F-16 fighters. It's a retrofit, and 440 aircraft had been retrofitted by 2015. First avoided crash was in 2015.[1] Here's a test where the pilot goes into a hard turn and releases the controls, and the automated GCAS, at the last second, flips the plane upright and goes into a steep climb.[2] Because this is for fighters, it doesn't take over until it absolutely has to. The pilot can still fly low and zoom through mountain passes. The first users of this system, the Swedish air force, said "you can't fly any lower". It's a development of Lockheed's Skunk Works.
This technology is supposed to go into the F-35, but it's not in yet.
This may eventually filter down to commercial airliners, but they'd need more capable radars that can profile the ground. This is not GPS and map based; it's looking at the real world.
As I was saying down the thread, I think it's fair to say that Tesla uses the "autopilot" name because it sounds catchy and better than the terms the rest of the industry uses. I think it's a conscious marketing decision to make their product look better when it probably isn't. Everyone in the airline industry uses the same term, and the people involved are too well informed/trained for there to be confusion. Tesla's branding, so different from the rest of the market, is meant to confuse by making their system sound better. That's what they should be penalized for.
But, Tesla's system is designed for collision avoidance, automatic braking, etc. There is absolutely a problem that they call it "Autopilot", as it brings to mind the airplane meaning of the term.
But, it matches neither the common use case for airplane autopilot systems nor the common perception--right or wrong--of what those autopilot systems do. So, Tesla enjoys the cachet that comes with that latter perception, while relegating the true description to the "fine print" of the owner's manual.
Are we going to fight the colloquial use at this point? The redux of "hacker" vs "cracker"?
If the public consistently misunderstands the term and their lives depend on it, the term has to be changed, period. This is no place for semantic pedantry.
>>Isn't the plane's auto-pilot pretty much a pilot assistance system designed to keep the plane at the specified altitude and follow a straight line (heading bug on older systems, GPS coordinates in the flight plan on modern ones)?
Your plane autopilot analogy fails in a crucial manner:
First, to get a license to fly a plane, you have to undergo much more rigorous training and much more stringent scrutiny than an ordinary Joe/Jane undergoes to get a license to drive a Tesla.
Second, a pilot doesn't fly his/her plane nearly as many hours as our ordinary Joe/Jane drives his/her car.
Third, planes have co-pilots,
and the list goes on.
Tesla's foolish management should have labeled their assistance system just what it is, a "semi-automatic assistance system", and they could have been more prudent by clearly mentioning its dangerous limitations upfront, rather than cleaning up the mess now.
It's a sad affair. I had/have higher hopes for Tesla. But they should abandon this foolish autopilot thing, to begin with.
If you are referring to the parent's analogy, then yes, I said his/her analogy fails.
Tesla does not make it mandatory for buyers to go through serious, rigorous training to use their much-touted Autopilot, which is dangerous because it is far from an actual autopilot: it's a half-baked semi-autopilot, potentially riddled with hidden AI bugs that their machine learning team may find hard to even locate.
The airplane autopilot analogy the parent is making to justify Tesla's claims fails miserably, IMO.
But if a lawsuit gets filed, Tesla will have a very hard time justifying this type of claim.
Another important thing (from their business-success point of view): this incident, and their shameless justification of the faults in their much-touted autopilot, will tarnish (and already has tarnished, to some extent) their image with the public. They can't just point now to the warnings they originally published in fine print and got unsuspecting users to sign, and expect those users to happily purchase their now-perceived death traps.
Competitors just have to point to this death-trap autopilot feature to turn a potential buyer in their favor.
Exactly, this is a failure of marketing, not technology. Somebody in Tesla made a decision to prioritize "building a brand" and "making a sale" over "accurately communicating product limitations to customers". At a certain point it doesn't matter what you put in the manual or in the fine-print, you've got to ensure the customer has the correct mental model about what they are (or aren't) buying.
To clarify, I'm not saying Tesla made a dumb or unforgivable misstep (there will always be dumber customers), but if they're going to do a (literal) post-mortem, they need to acknowledge that their branding is a factor.
If you have a human-supervised safety-critical automated system (where "difficult" situations are to be overridden by the human) you end up needing the human supervisor to be much more skilled (and often faster-reacting) than they would have needed to be just to do the operation manually.
I like how on Hacker News, Elon Musk gets all the credit when Tesla is awesome, but "Somebody in Tesla made a decision" when people die.
Musk made the call. He might not have proposed it, but given how involved he is with the marketing and PR aspect of Tesla, there's no way he didn't OK the decision to call it Autopilot.
Musk personally made the decision to not use LIDAR and rely to a great extent on single-camera vision.[1] Musk, in 2015: "I don’t think you need LIDAR. I think you can do this all with passive optical and then with maybe one forward RADAR. I think that completely solves it without the use of LIDAR. I’m not a big fan of LIDAR, I don’t think it makes sense in this context."
The "one forward radar" was the decision that killed. Tesla cars are radar-blind at windshield height, which is why the car ran under a semitrailer from the side. Most automotive radars have terrible resolution and no steering in elevation. There are better radars[2], but they haven't made it down to automotive yet.
>Autopilot was not operating as designed and as described to users: specifically, as a driver assistance system that maintains a vehicle's position in lane and adjusts the vehicle's speed to match surrounding traffic
I agree with your comment regarding their branding choice, and I'd add that the design itself is flawed: specifically, the entire notion of the car taking over specific reaction-based functions, but a.) leaving other such functions to the driver and b.) requiring the driver to supervise and override according to split second situations.
So does autocorrect, but I don't see people complaining that banging on the keyboard doesn't produce sonnets.
This whole "auto" thing is ridiculous.
By the way, it's an automobile and has been for a while ... where's the expectation that it will drive itself? Should we not call them automobiles anymore, in case someone gets the wrong impression? This is silly.
Do you disagree that Tesla is using "autopilot" rather than "drive assist" or something like that to make potential buyers think their implementation is so much better than everyone else's? Or that they don't push as hard as other manufacturers on keeping hands on the wheel, to make the experience more comfortable for users? They are actively exploiting the fact that people think "autopilot" means something more capable than the rest. Hence they can't complain when that perception backfires on them.
> Do you disagree that Tesla is using "autopilot" rather than "drive assist" or something like that to make potential buyers think their implementation is so much better than everyone else's?
Their implementation is much better than the competition.
The "auto" in "autopilot" stands for automatic, not autonomous. Autopilot systems have existed for decades, and they've always referred to systems that automate the most tedious parts of operating a vehicle, while still requiring a human operator to handle new/unexpected situations.
Exactly! People seem to get really caught up and adamant about this label. However, an airplane autopilot is arguably significantly dumber than Tesla's Autopilot. Yet for some reason people expect more, even though Tesla has been clear about how limited the use case is.
Telling is that this letter uses the term "driver assistance system" multiple times. I wonder if that's how they'll market it, or if that phrasing is reserved for PR damage control. (They also use the phrasing, "a death that had taken place in one of its vehicles" as opposed to the common "fatal crash")
It's hard to change the name from "Autopilot" to "Merely assists you in a straight line, but please regain control when there's a white truck coming from the right [1], and DO NOT TOUCH THE BRAKE [2] because it disengages the emergency stop".
It's especially hard when you sell the Autopilot for $2500-$3000 [3].
I agree with your post. Having said that, autopilot on airplanes is meant to be a "macro," rather than an autonomous flying function (in that they do not replace a human operator; in fact the pilot programs the instructions into the flight computer). Somehow autopilot's meaning was lost in translation, and people interpreted it to mean they don't have to take control when things go wrong. Perhaps the prefix auto- is what's wrong--people think the car will drive itself.
Automobiles contain the prefix auto-, yet nobody assumes the car is self-driving. Most people understand it to mean the gears don't need to be changed manually (not applicable to Tesla, since it has a single-speed gearbox and full torque from a standstill).
> Automobiles contain the prefix auto- yet nobody assumes the car is self-driving.
Actually, in that regard yes they do -- "auto-mobile" = "moves itself", as in, you don't have to pedal or Fred Flintstone it...
> Most people understand it to mean gears don't need to be changed manually
No, that'd be "automatic gearbox". As in "Do you drive a manual or an automatic?"
So "autopilot" would suggest it does the piloting as well (ie, the stuff the person sitting at the controls -- the common view of a "pilot" -- normally does)
Another problem with the name is that autopilots in planes and ships do exactly what the pilot programmed them to do, e.g. hold a certain course. The Tesla Autopilot, on the other hand, tries to intelligently react to the subset of the environment it sees through its sensors, which makes it more unpredictable. I assume that pilots don't have to monitor the behaviour of the autopilot itself, whereas in a Tesla you have to do that and be ready to intervene at any second. That doesn't really reduce the workload for the driver, so people just trust the system instead.
Pilots do have to monitor the behaviour of the autopilot. Even though they are programmed to do one thing (hold a course or altitude, change to a course or an altitude with a certain rate of change), they rely on potentially faulty sensors and control many parameters to meet the goal. Usually the autopilot will simply disconnect if it detects a problem, but there are examples of autopilots causing passenger injuries:
https://en.wikipedia.org/wiki/Qantas_Flight_72