It'd do everyone a favour if people stopped regurgitating this. I have had ChatGPT 3.5 ask me to elaborate, and ChatGPT 4 does the same when there is ambiguity.
The very nature of mathematics is such that we can't always determine what is true and what is not: Gödel's incompleteness theorems show that any consistent formal system rich enough for arithmetic contains true statements it cannot prove, and undecidability results like the halting problem show that some questions admit no general decision procedure.
The very nature of your brain and its construction means that you hallucinate your reality and cannot determine what is [objectively] true (cf. predictive-processing accounts in neuroscience).
I'd go so far as to claim that ChatGPT is far more reliable than the average person.