Hacker News

Yeah, exactly this. GPT-4o very rarely, if ever, hallucinates.


A very easy way to get basically every current AI model to hallucinate:

1. Ask a highly non-trivial research question (in particular from math)

2. Ask the AI for paper and textbook references on the topic

At this point, many of these references may already be hallucinations.

3. If necessary, ask the AI where in these papers/textbooks the question is explained, and/or which aspect of the question or research area each individual reference focuses on.
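The three steps above can be sketched as a sequence of prompts. This is pure prompt construction with no API calls; the example topic and the exact wording are illustrative assumptions, not something the comment specifies:

```python
def hallucination_probe(topic: str) -> list[str]:
    """Return the three-step prompt sequence described above."""
    return [
        # Step 1: a highly non-trivial research question.
        f"What is the current state of research on {topic}?",
        # Step 2: ask for concrete literature references.
        "List paper and textbook references that cover this topic, "
        "with authors, titles, and years.",
        # Step 3: press for specifics inside each reference.
        "For each reference, say where in the paper or textbook the "
        "question is explained, and which aspect of the topic it "
        "focuses on.",
    ]

# Hypothetical topic, chosen only to make the sketch concrete.
prompts = hallucination_probe("distinct distances in the plane")
```

The point of step 3 is that even when a model names a real paper, pressing it for page-level specifics tends to surface fabricated details; cross-checking each returned title against a real index is the only reliable verification.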


This backs up what I mentioned in my other comment. My dad, an attorney, purchased both GPT-4o and Gemini Advanced to help write legal documents, which involves citing prior cases. He says he has found the cases both models cite to be almost always completely fabricated.



