
More data shouldn't actually hurt; if nothing else, it can be used as a Bayesian prior on the camera data when the vision system is uncertain.
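The Bayesian-prior idea can be sketched as a simple odds update: treat the radar as supplying a prior probability of an obstacle and the camera frame as evidence. All numbers and names here are illustrative assumptions, not anything from Tesla's actual stack.

```python
def bayes_update(prior, likelihood_ratio):
    """Combine a prior probability with new evidence via odds.

    prior: probability of an obstacle before seeing camera data
           (here, a made-up figure derived from radar).
    likelihood_ratio: P(camera reading | obstacle) /
                      P(camera reading | no obstacle).
    """
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Radar weakly suggests an obstacle (prior 0.30); an ambiguous
# camera frame (likelihood ratio 3) raises, but does not settle,
# the estimate -- posterior is about 0.56.
p = bayes_update(0.30, 3.0)
```

With uninformative camera evidence (likelihood ratio 1) the radar prior passes through unchanged, which is exactly the "use it when vision is uncertain" behavior the comment describes.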


But that’s the thing. Tesla wasn’t able to tell when the radar was “uncertain”. So when do you trust one data source more than the other? If it were easy to label bad sensor data as such, the bad data would probably never have been reported in the first place. Tesla explained this at AI Day 1.


If the vision stack is just below its certainty threshold that a car is crossing at 63 mph at some angle and can't quite decide to act on its own, and at the same time the radar indicates a car crossing at the predicted angle and speed, that should push it over the threshold, even if radar might produce false positives from overhead bridge traffic. Lining up with the vision estimate makes that less likely, and the vision stack itself can also be used to exclude data that might be from an overhead bridge by detecting that there is no bridge nearby.
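The threshold logic described above can be sketched as follows: a borderline vision detection gets a confidence boost only in proportion to how well an independent radar track agrees with it on angle and speed. The function, the weighting scheme, and every number are hypothetical, chosen only to illustrate the idea.

```python
def fused_confidence(vision_conf, radar_conf, agreement):
    """Nudge a borderline vision detection using radar.

    vision_conf: vision stack's confidence in [0, 1].
    radar_conf:  radar track confidence in [0, 1].
    agreement:   how well the radar track's angle/speed match the
                 vision estimate, in [0, 1]. A bridge return that
                 doesn't line up gets a low agreement score and
                 therefore contributes almost nothing.
    """
    # Radar can only close part of the remaining confidence gap,
    # scaled by how well the two estimates line up.
    boost = radar_conf * agreement * (1.0 - vision_conf)
    return vision_conf + boost

ACTION_THRESHOLD = 0.90  # made-up decision threshold

# Vision alone sits just under the threshold: no action.
alone = fused_confidence(0.88, 0.0, 0.0)

# A well-aligned radar track pushes it over the threshold.
confirmed = fused_confidence(0.88, 0.7, 0.95)

# A strong radar return that does NOT line up (e.g. overhead
# bridge traffic) barely moves the estimate.
bridge = fused_confidence(0.88, 0.9, 0.05)
```

The design choice here is that radar never overrides vision outright; it only scales the remaining uncertainty, which matches the comment's point that agreement between the two sources is what makes the combined reading trustworthy.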


No sensor is certain enough to be blindly trusted - the data is going into a neural net anyway, so the network can be trained to estimate the most likely ground truth given all of the sensor data.



