
Meanwhile I've been confidently told by GPT-4 that the iPad mini 6 uses the A14 Bionic (it then went on for a while about how that makes it just about as powerful as the 4th-generation iPad Air), which is false of course (it's the A15), and the model immediately admits it and apologizes if you correct it. Guess I'll have to learn chess instead.


This behavior is indeed strange and seems like low-hanging fruit. The model is often able to notice that it has given an incorrect answer and correct itself. But why not just give the correct answer to start with? It's like it needs a second pass, but is there really that much extra information in its first wrong answer?
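That "second pass" is easy to try by hand: take the model's first answer, feed it back, and ask it to double-check. A rough sketch with the OpenAI Python client (the model name and prompts are just illustrative, not anything from this thread):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    question = "Which chip does the iPad mini 6 use?"

    # First pass: the model's initial answer, which may well be wrong.
    first = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    answer = first.choices[0].message.content

    # Second pass: hand the model its own answer and ask it to verify.
    check = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
            {"role": "user", "content": "Double-check that answer and correct any factual errors."},
        ],
    )
    print(check.choices[0].message.content)

Of course, as the reply below points out, the second answer can be wrong too; the model has no ground truth to check against, only another sample from the same distribution.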


It has no way of discerning what is factual and what is not. That's fundamental to LLMs. It only corrects itself when prompted by the user, and even then it might still get it wrong.


I run into this exact scenario very often: the model says something blatantly false, gets called out, apologizes, and continues. 3.5 or 4, they both do it. And I only discuss things with it that I'm at least familiar with, so I can usually catch these mistakes, but it isn't exactly encouraging.



