Tesla FSD no longer offered for purchase (notateslaapp.com)
91 points by ado__dev 7 months ago | 77 comments



> Previously, customers were offered the option to purchase “Full Self-Driving Capability,”

They were offered a lie. Tesla still hasn't delivered what those customers paid for.

> Let’s look at what else has changed on Tesla’s website on FSD before we dive into the wording changes.

Another recent change on Tesla's website is to remove old blog posts, including a 2016 blog post in which Tesla claimed "as of today, all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver":

https://electrek.co/2024/08/24/tesla-deletes-its-blog-post-s...

https://web.archive.org/web/20240709163806/https://www.tesla...

Tesla might now also outsource its AI work to xAI:

https://www.wsj.com/tech/tesla-xai-partnership-elon-musk-30e...

If indeed Tesla is "worth basically zero" without full self-driving:

https://electrek.co/2022/06/15/elon-musk-solving-self-drivin...

Then moving that work to xAI seems like a good way to turn Tesla into a private company without actually purchasing it.


> Another recent change on Tesla's website is to remove old blog posts, including a 2016 blog post in which Tesla claimed …

Perhaps unintended but this is a bit misleading. Tesla changed their blog system and didn’t migrate older posts. My initial reading of your comment was that they selectively removed some older posts which they wanted to hide.


My reading of your post is they changed systems for plausible deniability


Even more so, since migrating content from one CMS to another is a trivial engineering effort.


Considering they decided it was worth upgrading, yeah. Backing up, normalizing, and importing the data wouldn't have taken much time.

Not worth keeping the posts? No readership? Why upgrade?

Sus, as the kids say, but plausible.


Some Teslas don't have the sensor hardware and the compute power to do road-safe FSD. This is something Tesla engineers learned the hard way.

Things may change in the future as we make advances in computing and AI, but right now it is not possible.


So it's ok to sell a feature to customers if you only find out later it's not possible?


A more generous interpretation is that GP provides context and logical reasons behind the decision, not excuses for it.


It's Elon Musk after all.

His boldness and his belief that he's exceptional are the reason for both his successes and his failures.


To be fair, I believe we all have this basic bias. For example, when I have a streak of failures, I approach the next task with caution, even though it may be simple and normally I'd do it easily. And when I have several successes, I get overconfident and commit to hard tasks (and sometimes even complete them, through a combination of experience, luck, and other factors). Musk had some great successes and believes they are due to an inherent trait of his. (Or, at least, that's how he comes across.)

The good thing is that we, as humanity, benefit from his successes, but we also have to deal with his unfulfillable promises, not to mention his occasional fits.


Not only sensors: in Europe I don’t see how driving in small towns could be possible without communicating with other drivers and pedestrians, and reading custom signs. Maybe add a robotic hand to the list of required hardware.


> Some Teslas don't have the sensor hardware and the compute power to do road-safe FSD.

Come on now. Elon was the one being pigheadedly stubborn about camera vision over LIDAR. There's really no reason to think this was a case where the engineers were insisting on a viable approach that unfortunately proved infeasible over time.

They weren't going to be able to do it with the tech they put in the cars and damn near everyone knew it. We shouldn't think their engineers were uniquely blind to what everyone was saying when it's fairly clear that there was a top down push here.


Even with advances in AI, I don’t think the cameras are good enough. They are prone to getting blinded by the sun due to poor dynamic range, and some of them have no way to clean themselves, and aren’t redundant for certain directions.

No matter how good the AI is, the car is not going to be able to drive if the image is a big white blob of blown out sun or bird poop, and there is no redundant sensor.


A car without radar cannot be trusted to drive itself.

I can move my eyes, put on sunglasses, lower the sun visor. That rinkydink camera is stuck looking straight at the sun. It’s a no go.


Tesla owners agree to binding arbitration when purchasing a vehicle, unless they opt out in writing within a window of time. This means consumers will need to engage their state attorneys general and federal regulators over the fraud Tesla is attempting to scrub with these changes: Full Self Driving unsupervised/robotaxis are not coming as promised when the FSD package was sold by Tesla and purchased by consumers.


> Tesla owners agree to binding arbitration when purchasing a vehicle

Remember when the law made sense? Me neither.


There are multiple matters being conflated together.

0. Tempering customer expectations from overpromising.

1. Investors' concerns about the liabilities of advertising autonomous operation, which potentially implies unsupervised operation.

2. Culpability of drivers when "supervised" (within visual sight) operation fails in a manner that's not immediately physically controllable. What if summon runs into a person or runs over a pet?

3. Potentially replacing "original FSD" with a lower tier substitute at the same price point, or as a subscription only.

While I might jump to 3, 0 and 1 seem the most likely. 2 still remains a big question mark.


My guess is that they sold FSD for a lump sum while the tech was experimental.

Now that they see the light at the end of the tunnel, they're planning to transition to a subscription model, and the cars that do have it will slowly fall out of circulation as they age, with many of them not having the right hardware for the real thing anyways.

This is similar to what they did with supercharging.


What software breakthroughs have they made that make you believe that FSD won't be 5 years away for the rest of time?


Not sure - but Waymo and a few Chinese vendors already have self-driving robotaxis in production. That, combined with the AI boom, tells me that building a self-driving car (even if geofenced and/or L3) is on the horizon.

If that's the case, Tesla will probably figure this out as well.


"Traditional" self-driving stacks use a ton of pre-built maps, lidar and other range sensors, and has teams of people keeping those maps registered and up to date.

Tesla's plan is (or has become) to do an end-run around all that, and just train a giant network on camera-only sensor stacks, so that it can navigate without large 3D representations of the environment / city in which it works, without expensive lidar/radar sensor suites, and to skip the "partner" phase that Waymo and others do with particular cities.

This allowed them to bring me, a MN customer, something like lvl 3 autonomy before any other company did. But it might not have the same upper-bound as other, more fine-tuned approaches do, and having ridden in Waymo, Nuro, etc vs my own Tesla, I can tell you the Tesla is wonkier for it. Time will tell.


> something like lvl 3 autonomy before any other company did

I'm quite sure Mercedes-Benz was the first to bring lvl 3 autonomy on the market.

https://arstechnica.com/cars/2023/09/mercedes-benzs-level-3-...

It is also the only carmaker confident enough in the system to take full liability for it:

> Confidence in Drive Pilot is high within Mercedes-Benz, as the system has been active in Germany for over a year without incident. That confidence is demonstrated by Mercedes’ decision to assume liability for the vehicle while Drive Pilot is in use. That’s a particularly bold move since no other manufacturer offers that kind of assurance.


According to that article Mercedes-Benz's system is exclusively highway driving. Technically level three, but not "full self driving" as most people would understand it, or as Tesla defines the term.


Level 3 autonomous driving is not FSD.


SAE doesn't have a definition for "full self driving", only levels of autonomy. "FSD" is a term Tesla came up with to distinguish it from their previous level 2 Autopilot system, which could only do highway driving, whereas "full" level 2 self-driving can operate under all normal conditions, including city driving. FSD could theoretically cover levels 2, 3, 4, or 5. Highway-only could be levels 1, 2, 3, or 4. There's a lot of overlap.
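
As a rough sketch of that overlap (my own paraphrase of the SAE J3016 level names; the domain-to-level mapping just restates the claim above, it's not from any official source):

  # Rough paraphrase of the SAE J3016 driving-automation levels (my wording, not SAE's).
  SAE_LEVELS = {
      0: "No automation (warnings only)",
      1: "Driver assistance (steering OR speed)",
      2: "Partial automation (steering AND speed, driver supervises)",
      3: "Conditional automation (driver must take over on request)",
      4: "High automation (no driver needed within a limited domain)",
      5: "Full automation (no driver needed anywhere)",
  }

  # The point above: "FSD" and "highway-only" describe operating domains,
  # not SAE levels, so each can plausibly sit at several levels.
  DOMAIN_TO_POSSIBLE_LEVELS = {
      "FSD (all normal roads, incl. city)": [2, 3, 4, 5],
      "highway-only": [1, 2, 3, 4],
  }

  for domain, levels in DOMAIN_TO_POSSIBLE_LEVELS.items():
      print(domain, "-> SAE levels", levels)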


Mine was a personal example, not a market analysis :)

I'm quite confident that lvl3 autonomy is becoming widespread, regardless.


End users don't care about the tech details as long as it works - and it does, so Tesla might start sweating about Waymo eating their lunch. Maybe they'll also move towards the mapping approach, which would mean they'd have to keep maps constantly updated. That'd mean recurring costs for them.

Besides, I'm pretty sure some degree of mapping is necessary - I know some seriously wonky roads with poor visibility, tons of shoulder lanes, roundabouts, and stop-and-go traffic, where I need to know which lane to get in half a kilometer before the turn comes up.

Most people can't figure it out at first glance - I usually see a couple trying and failing every day.


A slight correction: in a recent interview, Karpathy (ex-Tesla AI researcher/engineer) clarified that Tesla uses additional sensors during training, but deployment only uses cameras.


I'm no expert, but anyone can look at the sensors on a Waymo compared to a Tesla and reasonably assume Waymo doesn't have all of that extra stuff just for looks.

I don't see how Tesla is even a serious contender.


I think Tesla assumes that having more and better sensors doesn't really solve the hard problems - just like having more cooling in your computer allows for more performance, but it's the die shrinks that get you the real scaling.

It's possible that there exists some error metric inside Tesla that consistently goes down with more training and bigger neural nets in their Vision FSD - whereas switching to LIDAR would reduce that error by a fixed 30%.

They just assume that vision will eventually work out.


While I understand your argument and agree that more sensors mean an easier path to self-driving, our world is now full of systems that can infer amazing amounts of data from a few sensors. This is the progress of technology.

The Apple Watch is probably one of the greatest examples. So many of its features are inferred via "basic" sensors.

On a different angle, sports refereeing is largely becoming possible due to advances in camera based analysis. We can turn 2d images into a nearly centimeter accurate representation of a playing field in seconds.


> On a different angle, sports refereeing is largely becoming possible due to advances in camera based analysis. We can turn 2d images into a nearly centimeter accurate representation of a playing field in seconds.

These cameras are in a very different and much less dynamic environment than on a road at 100+ km/h while getting splashed on, shat on, dusted, muddied, struck by bugs, snowed on, etc.


I have a hard time believing that stereoscopic image analysis will ever surpass the efficiency of lidar map analysis of 3d spaces. Given how hard self driving is, it would make sense to make the ugliest, sensor-packed vehicle a working model, and then miniaturize /prune from there.

Starting with "basic" sensors is backwards. It is like aspiring to become a chess grandmaster so good you can play with your eyes closed, and starting out as a beginner with your eyes closed.


The hypothesis was that LIDAR was a crutch; humans manage with just vision.

Whether this is correct for delivering self driving cars, we will find out soon enough. Long term though, it definitely makes sense. We just don't know what the missing pieces of the puzzle are.


> humans manage with just vision

this is commonly repeated but very obviously untrue.

We don't only have vision. We have a general intelligence, coupled with vision. In the absence of AGI, the base assumption has to be the sensor apparatus needs to be significantly superior to humans for an FSD system to drive at a comparable level.


Not to mention it is also untrue because we use senses other than vision when we drive: our inner ears for acceleration information, our hearing, and the feel of the wheel.


We don't have anything close to LIDAR though


And a car doesn't have anything close to a human brain.

Humans process sensory data in a fundamentally different way to anything that's possible for a self-driving car. The idea that we should base the decision about the sensors on what humans have just fundamentally makes no sense.

Lidar substitutes hardware for something which humans find easy and CV systems find hard - creating a map of the environment. Humans do that by using a brain. CV systems based purely on video really struggle to do that in lots of edge cases. You can shortcut that in a car by using something like lidar.


You are right.

Would you agree then, that if the goal was to develop AGI, just relying on vision is a credible choice?


No. Why should the design parameters for AGI be limited by what a human can do? If the goal was AGI then I'd want all kinds of additional sensor input that humans don't have.


Once it's a solved problem, yes, it makes sense to think about design parameters.

When learning how to solve problems, that is not as helpful.


> humans manage with just vision.

But they don't. I can't see how anyone could look at modern driving and see an optimal state. Driving isn't being managed at all, it's killing droves of humans.

If we put the same restrictions on airplanes (flying by instrument is a crutch), everyone would rightfully find that ridiculous.

They appear to have bet on the wrong technology. The failure happened back in the design phase.


Aircraft often don't have vision at all in regular operation.

If a driver doesn't have vision, the right decision is to figure out how to safely stop.


Human vision has a dynamic range of roughly ~21 stops, among other differences. Do we have any cameras that come close to the human eye's "specs"?
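
For a sense of scale, here is a minimal sketch of what those stops mean as contrast ratios; the ~21-stop figure is the one quoted above, and the 12-stop camera figure is an assumed ballpark for an ordinary image sensor, not a measured spec:

  import math

  def stops_to_ratio(stops: float) -> float:
      # Each photographic stop doubles the light, so N stops = 2**N contrast ratio.
      return 2.0 ** stops

  def ratio_to_db(ratio: float) -> float:
      # Dynamic range expressed in decibels: 20 * log10(ratio).
      return 20.0 * math.log10(ratio)

  human_eye_stops = 21       # figure quoted in the comment above
  typical_camera_stops = 12  # assumption, not a measured spec

  for name, stops in [("human eye", human_eye_stops),
                      ("typical camera (assumed)", typical_camera_stops)]:
      ratio = stops_to_ratio(stops)
      print(f"{name}: {stops} stops ~ {ratio:,.0f}:1 ~ {ratio_to_db(ratio):.0f} dB")

  # human eye: 21 stops ~ 2,097,152:1 ~ 126 dB
  # typical camera (assumed): 12 stops ~ 4,096:1 ~ 72 dB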



The missing piece may be a different mode of transport, like trains. Humans adapted from creatures that lived in trees over millions of years; a computer has nothing on that evolutionary process of the bad tree-jumper getting eaten or breaking a leg and dying.

Spend a few million years programming a computer to swing through trees and they'll probably get something that can drive a car.


We have close to that much in training data in the form of cars driven by humans.

What we still lack is the fundamental algorithms to learn from video; LLM-style tokenization and diffusion are falling short of this goal.


The missing piece is LIDAR.


Great, I hope I can someday be this confident about predicting the future!


I have Tesla FSD, it already regularly drives me point to point with zero interventions.


I don't think anyone disputes that Tesla has a very decent ADAS system, which is what you're describing. As an individual driver, though, you don't have the statistical power or access to the design process to see what would be different about an autonomous system without the possibility of local intervention. That brings some very different requirements.


The title here is clickbait. The article actually just says they changed the way FSD is worded; it is still offered.


So, Musk did repeat the "FSD any moment now" line last earnings call, as well as robotaxis using it this September (i.e. now). And then they do this. That is why I found it very strange that anyone believes his recent Mars timelines; people I consider smart (PG, for instance [0]) are awestruck by such an announcement, while we all know that Elon is often decades off. Don't get me wrong, his political gibberish and Twitter mess aside, at least he tries, but I don't know anymore whether all the crazy things he says are meant to manipulate investors or whether he really believes them.

[0] https://x.com/paulg/status/1832638755523514701


Remember when Tesla was about to release full self-driving cars "next year" and Uber said they'd buy every single car Tesla would produce, haha.

That was like 10 years ago, and we still don't have anything even remotely close to self-driving besides a few $$$ experimental cars driving the long straight roads of ever-sunny California.


Have you tried it? It's not there, but it's close. Even Ford's Blue Cruise is halfway there.


Try driving in a small Italian town, on Swiss mountain roads during a snowstorm, &c.

The last 20% is the hardest, so if you're only 50% of the way there you haven't even started the hard part.


Here you go, FSD in Italy: https://youtu.be/tsE089adyoQ?si=Uo72mxf63DQNn7qG

It generalises quite well even though it’s only trained on US roads AFAIK.


Living in NYC I would say try driving without breaking any laws or road rules and see how far you get. You can't drive half a mile without driving in the other lane to get around something.


The problem is, even if you solve 80% of cases 100% of the time, those 20% of outliers are not easy to get to 100%. How should an autonomous car behave when it sees a basketball bounce across the road near a playground? Any human would assume: "there might be kids, I should slow down even if the ball isn't obstructing my path". A car isn't going to be able to make those decisions.


It depends on how you define success. Only Waymo has actual, real, live robotaxis right now; Tesla's FSD very much isn't that, despite promises that it'll be here next year for the past 9 years. But in terms of "how much does it suck to drive 2 hours in bumper-to-bumper traffic on the freeway", we've made leaps and bounds of progress since the original DARPA Grand Challenge, 20 years ago in 2004.

Yeah, I'd love to own a Tesla and rent it out as a robotaxi and have it earn money while I'm not using it. I'd also like a flying car. And a pony. But saying we "don't have anything even remotely close" when we have actually made progress, just because some pie-in-the-sky goal hasn't been met, is what I take issue with. I don't know how much that last 20% will take, but if we've only made it 80% of the way, that's still not "don't have anything even remotely close" territory.


Like many things in life, "it's close" is not worth celebrating, especially if it's not clear how the gap will be solved.


My point is that, in this case, it is. Totally fine if you disagree, but in terms of how much it sucks to sit in bumper-to-bumper, stop-and-go traffic, or just any 3-hour drive on the freeway, the SOTA is so much better compared to 20 years ago that it is worth celebrating. Complain about not having robotaxis all you want. What already exists is miles better than OG cruise control from the 90s, which just kept you at a constant speed and didn't even have radar to automatically slow down if the vehicle in front of you slowed down.


It is further along than my cheapo cruise control + lane assist; on long, modern, straight roads and in traffic jams, it's almost FSD to me. In other situations it is not functional at all. So these modern ones are better, but by how much?


> people I consider smart (PG for instance [0]) are awe struck by such an announcement while we all know that Elon is often decades off.

This is the kind of thing that makes me believe that class solidarity is real.


Yeah, Paul Graham is rich, not smart.


The halo effect in action.


The "or" is not correct IMO in the last part. It is very easy to believe things that just happen to increase your own wealth.

He says crazy things that he believes; it manipulates investors, and as his wealth increases he just believes the crazy things even more.


He never believed in colonizing Mars (unless he's a moron). His Mars ambition has always just been about snatching the most lucrative inter-governmental contract that anyone has ever conceived of. Everything Musk has been involved with since PayPal has only been possible through federal subsidies and contracts.


> at least he tries

We should hold people accountable for their behavior regardless.


Every manned Mars mission announced by Musk so far has assumed that astronauts will be staying on Mars for good and never return.

Cheeky Reddit discussions arguing that reaching Mars is easier than the Moon care more about saving some delta-v than about saving the lives of the astronauts who would embark on a suicide mission just to appease SpaceX investors.

SpaceX has no plans to build a space station around Mars, meanwhile NASA wants to build a Mars gateway by rehearsing on the Moon. SpaceX has not built any of the hardware or space suits necessary for long-term survival on the Martian surface. Those astronauts would arrive without any means of survival, stuck in their Starship just as if they were stuck in a space station. Then there is the fact that, at the current rate of progress, the Mars schedule is faster than the Moon/Artemis schedule, implying that they don't care about the Moon and are diverting resources away from it even though they have received money and a super-tight deadline for it.


>but I don't know anymore if all the crazy things he says are to manipulate investors or if he really believes them.

At what point would it be considered fraud?

IDGAF what Musk really believes; the fact of the matter is that if he repeats the "any moment" line on an investor call, he has to be held to it. You can't trot out your own personal beliefs with investors' money, even if you sincerely believe them. He has staff and advisors who can tell him point blank whether the tech is really ready or not ready at all. If those advisors or staff tell him that it's close to being ready, that's straight-up fraud unless they can put up or shut up. If they tell him it's not ready but he repeats the lie, it's also fraud.

How many more excuses will we give this guy? This is absurd.

Disclosure: I founded a startup and have a few investors. I have to be fully transparent with them, not only because that's the ethical thing to do when you take someone's money, but also because the law compels me to. If I tell them that a product is close to being ready, I have to be ready to prove it. I can't make such claims otherwise. Why are we not holding the wealthiest guy on earth to the same laws EVERYONE ELSE is held to?


Having listened to many interviews with Musk, I think he does believe his timelines, in the same way a football coach believes his team can win the Super Bowl. They're meant as a goal to be striven towards by himself and those working under him, not as a cold, neutral assessment of the most likely outcome.


Obviously he is wildly optimistic about timelines, but you can't deny SpaceX has done incredible things that literally every expert said were impossible.

I still like his quote “SpaceX makes the impossible merely late”

Fully self-driving cars and people on Mars will happen. And I feel pretty certain Tesla and SpaceX will be at least a bit responsible for pushing it to happen faster than if they didn't exist.

Of course they are not perfect, but on the whole I think we’re better off because they exist.


> Obviously he is wildly optimistic about timelines

10 years of lies is not optimism. It's just lies:

https://motherfrunker.ca/fsd/


Then he can refund the cost to every Tesla owner with interest, and let his optimism carry him to great profits once the tech is actually finished. Surely he can do that, if actually believes what he is saying and wasn't just doing an impossible cash grab with tech that would never be capable of doing what he promised.


Title is wrong; they're still offering FSD for purchase, they just added "(Supervised)" to the name of the feature, which doesn't actually change anything since FSD has always required supervision (as the screenshot of the text of the old FSD purchase page clearly shows). This is just a slight change in wording, probably for legal reasons.


How has Tesla gotten away with this fraud for so many years?

They have never had a working thing called "FSD", but they keep taking money from customers, and people keep dying, year after year.





