
The big difference is that cars have this unfortunate property that when they crash, people get badly hurt or killed, especially pedestrians. And split-second response time matters, so it's hard for a human operator to just jump in. If ChatGPT-4 hallucinates an answer, it won't kill me. If a human needs to proofread the email it wrote before sending, that can wait seconds or minutes.


> If ChatGPT-4 hallucinates an answer, it won't kill me

Sure, but look at this thread: there are already plenty of people citing uses of GPT in legal or medical fields. The danger is absolutely real if we march unthinkingly towards an AI-driven future.


Who is using ChatGPT in a medical field (serious question), knowing that it displays only a very shallow level of knowledge on any specific topic?


> If ChatGPT-4 hallucinates an answer, it won't kill me

Not yet, it won't. It doesn't take much imagination to foresee this kind of AI being used to inform legal or medical decisions.


Real human doctors kill people by making mistakes; medical error is a non-trivial cause of death. An AI doctor only needs to be better than the average human doctor; isn't that the argument we always hear about self-driving cars?

And medicine is nothing but pattern matching. Symptoms -> diagnosis -> treatment.
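
Taken literally, that claim reduces diagnosis to table lookup. Here is a toy sketch of that naive view (the RULES table, the diagnose function, and every symptom/condition pair in it are invented for illustration, not medical knowledge), which also shows what the pattern-matching framing leaves out:

    # Naive "symptoms -> diagnosis -> treatment" as pure pattern matching.
    # All rules below are made up for the example.
    RULES = {
        frozenset({"fever", "cough"}): ("flu", "rest and fluids"),
        frozenset({"fever", "stiff neck"}): ("meningitis", "emergency care"),
    }

    def diagnose(symptoms):
        # Return every rule whose symptom pattern is contained in the input.
        return [v for k, v in RULES.items() if k <= set(symptoms)]

    # Overlapping symptoms match multiple rules with very different stakes;
    # real diagnosis has to weigh priors and uncertainty, not just match.
    print(diagnose(["fever", "cough", "stiff neck"]))
    # -> [('flu', 'rest and fluids'), ('meningitis', 'emergency care')]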



