> It is simply (physically) not possible for the existing cameras to see the area directly in front of the car.
Think about how a human driver does it, given their even worse vantage point: they model what's in front of and behind the car from afar and remember what's where as they approach it. There are other signals as well, such as the continuation of a kerb.
I think people keep forgetting that Teslas run hundreds of ML prediction tasks all the time. Watch the recent AI Day talk about the "occupancy network" to get a sense of the car's ability to:
1. Construct a 3D model of its surroundings in real time;
2. Remember occluded sections based on what it has seen previously.
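To make point 2 concrete, here's a toy sketch in Python. `OccupancyGrid` and the cell states are names I made up; the real thing is a learned 3D voxel network with semantics and flow, not a hand-written 2D grid, but the memory mechanic is the same:

```python
import numpy as np

# Toy cell states; the real network predicts much richer outputs.
UNKNOWN, FREE, OCCUPIED = 0, 1, 2

class OccupancyGrid:
    """Minimal 2D bird's-eye grid that remembers what it has seen."""

    def __init__(self, size=100):
        # Every cell starts unknown and keeps its last observed state
        # until a newer observation overwrites it.
        self.grid = np.full((size, size), UNKNOWN, dtype=np.uint8)

    def update(self, observations):
        """Fuse one frame of camera-derived observations.

        observations: iterable of (row, col, state) tuples for cells
        the cameras can currently see. Occluded cells are simply not
        in the list, so they retain their remembered state. That is
        point 2 above.
        """
        for r, c, state in observations:
            self.grid[r, c] = state

    def query(self, r, c):
        """Best available belief about a cell, seen or remembered."""
        return int(self.grid[r, c])
```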
So how would this work for parking?
A: Add more cameras so there are no dead zones in front of the car.
B: Build a model in vector space while driving towards the parking spot and assume the blind spots don't change (still sucks; see the sketch below).
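Option B, as a usage example of the sketch above (the blind-spot cells and the two-phase flow are made up for illustration):

```python
# Hypothetical cells the front cameras can no longer see once the
# bumper is close to the kerb.
BLIND_SPOT = {(50, 48), (50, 49), (50, 50), (50, 51)}

grid = OccupancyGrid()

# Phase 1: approaching from afar, the area in front of the spot is
# still visible, so those cells get observed as FREE.
grid.update([(r, c, FREE) for r, c in BLIND_SPOT])

# Phase 2: up close, no fresh observations arrive for the blind spot;
# the car acts on remembered state alone.
safe = all(grid.query(r, c) == FREE for r, c in BLIND_SPOT)
print("proceed" if safe else "stop")  # prints "proceed"

# The catch: anything that entered the blind spot after phase 1 (a cat,
# a stray shopping trolley) is invisible to the model. Hence "still sucks".
```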