I am not sure the vision hardware in Teslas is adequate to drive a car with -any- amount of processing. Spatial resolution is limited, as is the ability to see distant vehicles during merges, etc.
Second, there is no guarantee the onboard processing is enough, because the extant human system uses much more.
“Cheating” by adding sensors to simplify away complexity and cover for the shortcomings of the other sensors in the suite seems wise.
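As a toy sketch of why redundancy helps: if two sensors with independent errors (say, a camera's coarse long-range depth and a radar's direct range measurement) both estimate distance to the same target, inverse-variance weighting yields a fused estimate whose uncertainty is lower than either sensor alone. The numbers below are purely illustrative, not real sensor specs.

```python
def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Fuse two independent noisy estimates by inverse-variance weighting.

    The fused variance 1/(1/var_a + 1/var_b) is always smaller than
    either input variance -- the formal version of 'more sensors cover
    for each other's shortcomings'.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Hypothetical: camera says 120 m with variance 25, radar says 115 m
# with variance 4. The fused result leans toward the tighter sensor.
dist, var = fuse(120.0, 25.0, 115.0, 4.0)
```

Here the fused variance comes out below the radar's alone, which is the whole argument for a heterogeneous suite rather than betting everything on one modality.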