So people killing people is OK, but software killing people is way out? I have seen plenty of human-caused accidents that were readily avoidable - one that springs to mind is a colleague who killed himself and his three passengers by driving down the wrong side of the M1 at 180 mph while absolutely slaughtered (pun intended).
I mean, you’re saying that he couldn’t have handled that any better?
How would doing something irresponsible be any different with self-driving? He would have just overridden the car's controls and still driven at 180 mph. And if some other poor soul had been caught up in the accident, there is no way their self-driving car could have avoided a collision with a vehicle suddenly appearing in another lane at that speed.