The big difference is that when cars crash, people get seriously hurt or killed, especially pedestrians. Split-second response time matters, so a human operator can't just jump in. If GPT-4 hallucinates an answer, it won't kill me; and if a human needs to proofread the email it wrote before sending, it will happily wait seconds or minutes.
> If GPT-4 hallucinates an answer, it won't kill me
Sure, but look at this thread: there are already plenty of people citing the use of GPT in legal and medical fields. The danger is absolutely real if we march unthinkingly toward an AI-driven future.
Real human doctors kill people by making mistakes; medical error is a non-trivial cause of death. An AI doctor only needs to be better than the average human doctor; isn't that exactly the argument we always hear about self-driving cars?
And medicine is nothing but pattern matching: symptoms -> diagnosis -> treatment.