
It'd do everyone a favour if people stopped regurgitating this. I have had ChatGPT 3.5 ask me to elaborate, and ChatGPT-4 does it when there is ambiguity.



> It'd do everyone a favour if people stopped regurgitating this

by "everyone" you mean "OpenAI"

the very nature of its construction means that it can't determine what is true and what is not

(and I'd quite like people to continue to regurgitate that it is inherently unreliable until this viewpoint hits the mainstream)


The very nature of mathematics is such that we can't determine what is true and what is not, e.g. incompleteness, undecidability.

The very nature of your brain and its construction means that you hallucinate your reality and cannot determine what is [objectively] true. (cf. all of neuroscience)

I'd go as far as to claim that ChatGPT is far more reliable than the average person.


> I'd go as far as to claim that ChatGPT is far more reliable than the average person.

trying to prove your own point here?


I don't think I made the claim that it is infallible.

My first claim was that ChatGPT and the like can and will ask you to elaborate; claiming otherwise is fundamentally false.





