
This sucks for parking. It is simply (physically) not possible for the existing cameras to see the area directly in front of the car.

So how would this work for parking?

A: Add more cameras so there are no dead areas in front of the car

B: Build a model in vector space while driving toward the parking spot and assume the blind spots don't change. (still sucks; see the sketch below)
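To make option B concrete, here's a minimal toy sketch in Python. Everything in it (class names, the blind-zone box size) is made up for illustration, and it is certainly not how Tesla actually does it: cache obstacle points in a fixed world frame while the car approaches, then query the cache for the box directly in front of the bumper once the cameras can no longer see it.

    import numpy as np

    class WorldObstacleMap:
        def __init__(self):
            self.points = []  # obstacle points in world coordinates (x, y)

        def add_observation(self, car_pose, points_car_frame):
            # car_pose = (x, y, heading); transform detections into the
            # world frame so they survive the car moving past them.
            x, y, th = car_pose
            rot = np.array([[np.cos(th), -np.sin(th)],
                            [np.sin(th),  np.cos(th)]])
            for p in points_car_frame:
                self.points.append(rot @ np.asarray(p) + np.array([x, y]))

        def obstacles_in_blind_zone(self, car_pose, depth=2.0, width=2.0):
            # Return remembered points inside a box directly ahead of the
            # bumper -- exactly the area the cameras cannot see up close.
            x, y, th = car_pose
            rot = np.array([[np.cos(th),  np.sin(th)],
                            [-np.sin(th), np.cos(th)]])  # world -> car frame
            hits = []
            for p in self.points:
                local = rot @ (p - np.array([x, y]))
                if 0.0 < local[0] < depth and abs(local[1]) < width / 2:
                    hits.append(local)
            return hits

    # Seen from 5 m away: a kerb ahead of the spot.
    m = WorldObstacleMap()
    m.add_observation((0.0, 0.0, 0.0), [(5.0, 0.2), (5.0, -0.2)])
    # The car has crept forward; the kerb is now in the camera blind zone,
    # but the cached map still knows it is ~0.5 m ahead of the bumper.
    print(m.obstacles_in_blind_zone((4.5, 0.0, 0.0)))

The obvious failure mode is the one named above: the cache is only valid if nothing moves into the blind zone after it was last observed.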



> It is simply (physically) not possible for the existing cameras to see the area directly in front of the car.

Think about how a human driver does it, given their even worse vantage point. They model what's in front of and behind the car from afar and remember what's where as they approach it. There are other signals as well, such as the continuation of a kerb, etc.

I think people keep forgetting that Teslas run hundreds of ML prediction tasks all the time. Watch the recent AI Day presentation and the talks about the "occupancy network" to get a sense of the car's ability to:

1. Construct a 3D model of its surroundings in real time;

2. Remember occluded sections based on what it's seen previously.
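As a rough illustration of point 2 (everything here is an assumption for the sake of example; Tesla's occupancy network is a learned model, not a hand-written grid): keep the last observed state of each cell and only overwrite cells that are currently visible, so occluded regions retain what was seen before.

    import numpy as np

    class OccupancyMemory:
        FREE, OCCUPIED, UNKNOWN = 0, 1, -1

        def __init__(self, shape):
            self.grid = np.full(shape, self.UNKNOWN, dtype=np.int8)
            self.age = np.zeros(shape, dtype=np.int32)  # frames since last seen

        def update(self, observed, visible_mask):
            # Visible cells: overwrite with the fresh observation.
            self.grid[visible_mask] = observed[visible_mask]
            self.age[visible_mask] = 0
            # Occluded cells: keep the remembered value, but track how
            # stale it is so a planner can treat old memories cautiously.
            self.age[~visible_mask] += 1

    # Frame 1: a 1x3 strip ahead is fully visible; middle cell occupied.
    mem = OccupancyMemory((1, 3))
    mem.update(observed=np.array([[0, 1, 0]], dtype=np.int8),
               visible_mask=np.array([[True, True, True]]))
    # Frame 2: the car has crept forward and the strip is now occluded.
    mem.update(observed=np.full((1, 3), -1, dtype=np.int8),
               visible_mask=np.array([[False, False, False]]))
    print(mem.grid)  # [[0 1 0]] -- the occupied cell is remembered
    print(mem.age)   # [[1 1 1]] -- memory is one frame stale

The age counter is the interesting design choice: remembered obstacles can still be avoided, while a planner can refuse to trust free space that hasn't been observed recently.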


A human driver constantly turns their head toward wherever they are most likely to hit something.


Well, the car has a 360-degree camera view, with far wider coverage than a turning head in the driver's seat.

And more importantly, it sees in all directions at all times.



