
When I was at GM Cruise we were using a "semantic map" - the robot cars would drive around the city, get a rough idea of where they were from GPS, and then match up what their LiDAR/RADAR/camera data showed, after it went through the "Ground Truth" system, against that map.
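A minimal sketch of the kind of map-matching localization described above, assuming a 2D occupancy grid as the prior map and a brute-force search over candidate poses near the GPS fix. All names and the grid representation here are illustrative, not Cruise's actual stack:

```python
import numpy as np

def localize(prior_map, scan_xy, gps_xy, search=1.0, step=0.25):
    """Refine a rough GPS position against a prior map.

    prior_map: 2D occupancy grid (1 = occupied), one cell = `step` meters.
    scan_xy:   (N, 2) sensor points in the vehicle frame.
    gps_xy:    rough (x, y) position in map meters.
    Returns the candidate pose whose shifted scan hits the most
    occupied map cells.
    """
    h, w = prior_map.shape
    best_score, best_pose = -1, gps_xy
    offsets = np.arange(-search, search + step, step)
    for dx in offsets:
        for dy in offsets:
            cand = (gps_xy[0] + dx, gps_xy[1] + dy)
            # Project scan points into the map at this candidate pose.
            pts = scan_xy + np.array(cand)
            cells = np.round(pts / step).astype(int)
            ok = (cells[:, 0] >= 0) & (cells[:, 0] < w) & \
                 (cells[:, 1] >= 0) & (cells[:, 1] < h)
            # Score = how many points land on occupied map cells.
            score = prior_map[cells[ok, 1], cells[ok, 0]].sum()
            if score > best_score:
                best_score, best_pose = score, cand
    return best_pose
```

Real systems do this with continuous scan matching (e.g. ICP or NDT) over full 6-DoF poses, but the scoring idea is the same: the pose that best explains the sensor data against the prior map wins.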

The software just did whatever the ML model figured was the optimal response to the current situation, ten times per second. It often got the wrong answer, and retraining would then focus on fixing those "wrong answer" scenarios.
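The decision cycle described above amounts to a fixed-rate control loop: query the model every 100 ms, act on its answer, and sleep off whatever time remains in the budget. A hedged sketch, where `model`, `get_observation`, and `send_command` are placeholders rather than real Cruise APIs:

```python
import time

def control_loop(model, get_observation, send_command, hz=10, ticks=None):
    """Run the model at a fixed rate (default 10 Hz).

    ticks: stop after this many iterations (None = run forever).
    """
    period = 1.0 / hz
    n = 0
    while ticks is None or n < ticks:
        start = time.monotonic()
        # The model's "optimal" response for this instant, good or bad.
        action = model(get_observation())
        send_command(action)
        n += 1
        # Sleep only the remainder of the per-tick time budget.
        time.sleep(max(0.0, period - (time.monotonic() - start)))
```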

The cars can't drive anywhere the semantic map doesn't already cover.

Ridiculous, and so very disappointing.

We need better methods - maybe something that could generate metaphors, as Lakoff suggests in "Metaphors We Live By". The whole "drive robot cars around a city a million times and build a huge model of it" approach strikes me as very inefficient.


