Tesla was supposed to have what they needed when they released the Model 3. Then they had to upgrade the cameras and CPU, which meant they had to retrain. Then they rewrote the stack, so they retrained again. Now it's new cameras and compute again, and the cycle repeats.


How overfitted are their models to the cameras? I'd expect a layered architecture where a sensor layer does object recognition and classification and then hands this representation of the world to a higher-level planning model. You shouldn't have to retrain the whole stack for camera revisions; hell, that's how it would have to work across car models with their different camera angles.
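
To make the layering concrete, here's a minimal sketch of what that separation could look like. All names and interfaces here are hypothetical; Tesla's actual stack is not public. The point is that the planner depends only on an abstract world representation, so a camera change would only require retraining/replacing the perception implementation:

    # Hypothetical layered stack: perception is camera-specific,
    # planning is sensor-agnostic.
    from dataclasses import dataclass

    @dataclass
    class DetectedObject:
        label: str            # e.g. "car", "pedestrian"
        position_m: tuple     # (x, y, z) in the vehicle frame, meters
        velocity_mps: tuple   # (vx, vy, vz), meters per second

    class PerceptionLayer:
        """Camera-specific: retrain or swap this when sensors change."""
        def detect(self, frames) -> list[DetectedObject]:
            raise NotImplementedError

    class Planner:
        """Sensor-agnostic: consumes only the abstract world state."""
        def plan(self, objects: list[DetectedObject]) -> str:
            # Toy policy: brake if anything is within 5 m directly ahead.
            for obj in objects:
                x, y, z = obj.position_m
                if 0 < x < 5.0 and abs(y) < 1.5:
                    return "brake"
            return "cruise"

    class HW3Perception(PerceptionLayer):
        def detect(self, frames) -> list[DetectedObject]:
            # Stand-in for a camera-specific detection network.
            return [DetectedObject("car", (3.0, 0.2, 0.0), (-1.0, 0.0, 0.0))]

    # A camera upgrade swaps only the perception implementation;
    # the planner code and any training on top of it are untouched.
    planner = Planner()
    perception: PerceptionLayer = HW3Perception()
    print(planner.plan(perception.detect(frames=None)))  # -> "brake"

The counterargument, of course, is that end-to-end training can outperform a hand-designed interface like this, at the cost of exactly the coupling being complained about: change the cameras and the whole model is stale.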



