
I feel like you could enable FSD for every Tesla in a "backseat driver" mode that shadows the driver's actions (it doesn't have control; you're just running it to see what it would do, without acting on it), and watch for any significant divergences. Any time FSD wanted to do something but the driver did something else could be treated as if it were a real disengagement.
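
Roughly, the per-frame comparison might look like this (a hypothetical Python sketch; the field names, thresholds, and example numbers are mine, not anything Tesla has published):

    # Hypothetical sketch of a "backseat driver" comparison; none of these
    # names or thresholds come from Tesla, they are purely illustrative.

    STEER_THRESHOLD_DEG = 5.0   # allowed gap between what FSD planned and what the driver did
    SPEED_THRESHOLD_MPS = 2.0

    def significant_divergence(driver_action, fsd_plan):
        """Return True on frames where FSD would have acted noticeably differently."""
        return (abs(driver_action["steer_deg"] - fsd_plan["steer_deg"]) > STEER_THRESHOLD_DEG
                or abs(driver_action["speed_mps"] - fsd_plan["speed_mps"]) > SPEED_THRESHOLD_MPS)

    # Example frame: the driver holds the lane at 30 m/s, FSD would have slowed hard.
    driver = {"steer_deg": 0.5, "speed_mps": 30.0}
    fsd_plan = {"steer_deg": 0.3, "speed_mps": 22.0}
    if significant_divergence(driver, fsd_plan):
        print("virtual disengagement: flag this frame for review")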


They had been doing that, and called it "shadow mode" [1]. I suspect it's no longer being done, perhaps they reached the limit of what they can learn from that sort of training.

[1] https://www.theverge.com/2016/10/19/13341194/tesla-autopilot...


When it's in 'real mode', any disengagement or intervention (i.e. using the accelerator pedal without disengaging) is logged by the car and sent to Tesla for data analysis, and this has been the case for a while. Of course we don't know how thoroughly that data feeds into FSD decision making or which interventions they actually investigate.
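
Purely as an illustration of what such a logged event could look like (the fields below are guesses at plausible telemetry, not Tesla's actual schema):

    # Hypothetical shape of an intervention/disengagement event; these fields are
    # guesses at what useful telemetry might contain, not Tesla's actual schema.
    from dataclasses import dataclass, asdict
    import json
    import time

    @dataclass
    class InterventionEvent:
        timestamp: float
        kind: str                      # e.g. "disengagement" or "accelerator_override"
        speed_mps: float
        steering_angle_deg: float
        fsd_planned_speed_mps: float
        fsd_planned_steering_deg: float

    # Driver pressed the accelerator without disengaging: log it for later analysis.
    event = InterventionEvent(
        timestamp=time.time(),
        kind="accelerator_override",
        speed_mps=18.0,
        steering_angle_deg=1.2,
        fsd_planned_speed_mps=13.0,
        fsd_planned_steering_deg=1.1,
    )
    print(json.dumps(asdict(event)))   # the kind of record that would be queued for upload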


I believe this is exactly how Comma trains OpenPilot.


I don't think that would work, due to "bad" drivers. We all drive differently than we know we should in certain circumstances (e.g. when the road is completely empty in the middle of rural New Mexico).

For example, you can imagine FSD would decide to track straight down a straight lane with no obstacles - that would be the correct behavior. Now imagine that in real life the driver takes their hand off the wheel to adjust the radio or AC, and as a result the car drifts over and lightly crosses the lane marker - it doesn't really matter, because it's broad daylight and the driver can see there's nothing but sand and rocks for two miles around. What does the machine conclude?


I forget who it was (maybe George Hotz) that said something to the effect of "All bad drivers are bad in different ways, but all good drivers are good in the same way".

The point being made was basically that in the aggregate you can more or less generalize to something like "the tall part of the bell curve is good driving and everything on the tails should be ignored".

Since learning happens in aggregate (individual cars don't learn – they simply feed data back to the mothership), your example of a single car drifting while the driver adjusts the radio would fall into the "bad in different ways" bucket and be ignored.
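
A toy sketch of that idea in Python (the z-score cutoff, and the assumption that samples are already bucketed by similar situations, are mine, not anything from Tesla's pipeline):

    # Hypothetical sketch of "keep the tall part of the bell curve": for a bucket of
    # similar situations, drop samples whose action is far from the bulk.
    import statistics

    def keep_consensus_samples(steer_angles_deg, z_cutoff=2.0):
        """Return the steering samples within z_cutoff standard deviations of the mean."""
        mean = statistics.fmean(steer_angles_deg)
        stdev = statistics.pstdev(steer_angles_deg) or 1e-9
        return [a for a in steer_angles_deg
                if abs(a - mean) / stdev <= z_cutoff]

    # 999 drivers hold the lane (~0 deg); one drifts while adjusting the radio (8 deg).
    samples = [0.1] * 500 + [-0.1] * 499 + [8.0]
    print(len(keep_consensus_samples(samples)))  # 999: the drift sample is dropped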


"All bad drivers are bad in different ways, but all good drivers are good in the same way".

I accept that as a plausible hypothesis to work off of and see how far it goes, but I would not bank on it as truth.

I'll give another example: I think people roll through stop signs a significant portion of the time (25%? that's just an intuitive guess). I do it myself quite often. This is because not all intersections are built the same - some have no obstructions anywhere near them and you can plainly tell there are no cars coming up at the same time as you. Other intersections are quite occluded by trees and whatnot.

I'm fine with humans using judgement on these, but I would not trust whatever the statistical machine ends up generalizing to. I do not think rolling a stop sign makes you a 'bad' driver (depending on the intersection). Still, if I knew I was teaching a machine how to drive, I would not want it to be rolling stop signs.


That sounds like a complicated way to say there are more ways to screw up than to do it perfectly, which, duh.

Not to discount this at all, but... yea

Even if the brains of it become perfect, I doubt the vision-only approach (or has that changed?) will ever be enough.

They need at least a somewhat consistent 'signal' to act appropriately... and there are some mornings I just don't drive because visibility is so poor.


The theory would be that this washes out in the noise. It's a simplification, but on average, most of the people, most of the time, are not doing that - why would it zone in on the rare case and apply that behavior?


Well, zoning in on the rare cases is the difference between what we (as in society's collective technology, not just Tesla) have today and fully reliable self-driving.

Even in the anecdotes throughout the rest of the comment section, there are a lot of people who said "yeah, I tried FSD for a limited period of time and it worked for me". Of course it did - we're not saying that taking FSD out right now will kill you within five minutes. We're saying that even if it only kills you 0.01% of the time - that's pretty sketch.

The general principle is that none of the drivers who have been recruited as 'teachers' for the machine are aware that they are training it. As a result, they are probably doing things that are not advisable to train on. This doesn't even just apply to machines - how you drive when you are teaching your teenage child is probably different from what you do on a regular basis as an experienced driver. If you are not aware that you are actually teaching someone something, that's a dangerous set of circumstances.



