FSD is like ChatGPT: it works in many cases and makes some mistakes, but it is certainly not “useless”. It won’t replace a full-time human yet (the same way ChatGPT doesn’t replace a developer), but it can still work in some scenarios.
To the investor, ChatGPT is sold as “AGI is just round the corner”.
But "works in limited cases" is absolutely not enough, given what it promises. It drove into static objects a couple of times, killing people. Recent videos still show behavior like speeding through stop signs: https://www.youtube.com/watch?v=MGOo06xzCeU&t=990s
Meaning that it's really not reliable enough to take your hands off the wheel.
Waymo shows that it is possible, with today's technology, to do much much better.
It's not enough for robotaxis yet, and Tesla doesn't claim that it is. They just think they'll get there.
What they do claim is that with human supervision, it lowers the accident rate to one per 5.5 million miles, which is a lot better than the overall accident rate for all cars on the road. And unlike Waymo, it works everywhere. That's worthwhile even if it never improves from here.
Fwiw you can take your hands off the wheel now, you just have to watch the road. They got rid of the "steering wheel nag" with the latest version.
Well the recent NHTSA report [1] shows Tesla intentionally falsified those statistics, so we can assume Tesla-derived statements are intentionally deceptive until proven otherwise.
Tesla only counts crashes with pyrotechnic (airbag) deployments in its own numbers, which NHTSA states cover only ~18% of all crashes, a figure derived from publicly available datasets. Tesla chooses not to account for even this literal ~5x discrepancy, derivable from public data (a quick check of that factor is sketched below), and makes no attempt to account for anything more complex or subtle. No competent member of the field would make errors that basic except to distort the conclusions.
Using falsified statistics to aggressively push a product at their customers' risk makes it clear that their numbers should not merely be ignored, but assumed to be malicious.
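To make the ~5x concrete, here's a rough back-of-the-envelope sketch. It assumes the ~18% airbag-deployment share applies uniformly to the miles behind the one-crash-per-5.5-million-miles figure quoted upthread, which may well not hold; it's arithmetic for scale, nothing more.

```python
# Rough illustration only: if you count just the ~18% of crashes that
# trigger a pyrotechnic (airbag) deployment, the resulting miles-per-crash
# figure is inflated by roughly 1 / 0.18.

counted_fraction = 0.18          # NHTSA: share of crashes with airbag deployment
claimed_miles_per_crash = 5.5e6  # the headline "one accident per 5.5 million miles"

undercount_factor = 1 / counted_fraction
adjusted_miles_per_crash = claimed_miles_per_crash * counted_fraction

print(f"Undercount factor: ~{undercount_factor:.1f}x")                           # ~5.6x
print(f"Adjusted rate: ~1 crash per {adjusted_miles_per_crash / 1e6:.1f}M miles")  # ~1.0M miles
```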
> It's not enough for robotaxis yet, and Tesla doesn't claim that it is. They just think they'll get there.
"By 2019 it will be financially irresponsible not to own a Tesla, as you will be able to earn $30K a year by utilizing it as a robotaxi as you sleep."
This was always horseshit, and still is:
If each Tesla could earn $30K profit a year just ferrying people around (and we'd assume more, in this scenario, because it could be 24/7), why the hell is Tesla selling them to us versus printing money for themselves?
They do plan to run their own robotaxis. But there are several million Teslas on the road already. They're just leaving money on the table if they don't make them part of the network, and doing so means they have a chance to hit critical mass without a huge upfront capital expenditure.
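For a sense of scale on that "money on the table": the sketch below multiplies the claimed $30K/year per car by an assumed fleet of a few million vehicles. The exact fleet size is a placeholder, not a figure from either comment.

```python
# Illustration of the scale implied by the "$30K/year per robotaxi" claim.
# The fleet size is an assumed round number for "several million Teslas".

fleet_size = 5_000_000            # assumed: "several million" existing Teslas
profit_per_car_per_year = 30_000  # the claimed $30K/year per car

implied_profit_pool = fleet_size * profit_per_car_per_year
print(f"Implied profit pool: ${implied_profit_pool / 1e9:.0f}B per year")  # ~$150B/year
```

A genuine profit stream of that size would dwarf what Tesla makes selling the cars, which is the point of the objection above.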
... and then react in a split second, or what? It's simpler to say your goodbyes before the trip.
> They just think they'll get there.
Of course. I think so too. Eventually they'll hire the receptionist from Waymo, and he or she will tell them to build a fucking world model that has some object permanence.
Driving into stationary objects is horrible and unacceptable, I agree. As I understand it, this happened because Autopilot works by recognizing specific classes of object (vehicles, pedestrians, traffic cones) and avoiding those. So if an object isn't one of those things, or isn't recognized as one of those things, and the car thinks it's in a lane, it keeps going.
Yes, it was a stupid system and you are right to criticize it. And as a Tesla driver in a country that still only has that same Autopilot system and not FSD, I'm very aware of it.
But the current FSD has been rebuilt from the ground up to be end-to-end neural, and they now have the occupancy network (which is damn impressive) giving a 3D map of occupied space, which should stop that problem from occurring.
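To illustrate why an occupancy map changes things (a minimal sketch, not Tesla's actual system; the function names and grid layout are made up for illustration): the planner no longer needs to classify an obstacle to avoid it, it just refuses to drive through any cell marked occupied.

```python
import numpy as np

# Minimal sketch, NOT Tesla's code: contrast classification-based avoidance
# with an occupancy-grid check that brakes for anything in the path.

KNOWN_CLASSES = {"vehicle", "pedestrian", "traffic_cone"}

def legacy_should_brake(detected_class):
    # Old-style logic: an unrecognized obstacle (detected_class = None)
    # returns False and the car keeps going.
    return detected_class in KNOWN_CLASSES

def path_is_clear(occupancy, path_cells):
    # occupancy: 3D boolean grid (x, y, z), True = occupied.
    return not any(occupancy[cell] for cell in path_cells)

grid = np.zeros((100, 100, 10), dtype=bool)
grid[50, 50, 0:3] = True                        # some unclassified obstacle ahead
planned = [(50, y, 0) for y in range(45, 60)]   # cells the car is about to sweep

print(legacy_should_brake(None))     # False: "not a known object", so no reaction
print(path_is_clear(grid, planned))  # False: path blocked regardless of class, so brake
```

The real occupancy network is of course a learned model producing something like this grid from camera input; the point is only that occupancy removes the "must recognize it to avoid it" failure mode.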
Oct 2014: "Five or six years from now we will be able to achieve true autonomous driving where you could literally get in the car, go to sleep and wake up at your destination."
At this point, I'd be surprised if ChatGPT has not yet given someone a response which caused them to make a mistake that resulted in a death.
We found out about the lawyers citing ChatGPT because they were called out by a judge. We find out about Google Maps errors when someone drives off a broken bridge.
For other LLMs we see mistakes bold enough that everyone can recognise them — the headlines about Google's LLM suggesting eating rocks and putting glue on your pizza (at least it said "non-toxic glue").
All it takes is some subtle mistake. The strength and the weakness of the best LLMs is that their domain knowledge sits partway between a normal person's and a domain expert's: good enough to receive trust, not enough to deserve it.
Or it produces code that compiles but is subtly wrong. That probably won't kill someone, at least until we start developing safety-critical systems with it.
One day we might only have developers who can't actually write code fluently, and we'll expect them to massage whatever LLMs produce into something workable. Oh well.