
>I was pretty amazed when I pasted in some of my medical lab test results and ChatGPT was accurately able to parse everything out and summarize my data

I think an important question is how much faith we place in the answer (especially with medical data!). There are plenty of examples of great uses, but also plenty of hallucinations and just plain bad summaries. When the stakes are high, any confidence in the output needs to be weighed against the risk of it being wrong. We should also be aware of automation bias, which sometimes leads us to place unwarranted trust in these systems.
