I don’t really find that so surprising or particularly stupid. I was hoping to learn about serious issues with bad logic or reasoning, not missing-the-dots-on-the-i’s type stuff.
I can’t remember the example, but there was another frequent hallucination where people kept submitting bug reports that it wasn’t working. The project looked into it and realized the hallucinated behavior actually made sense, and maybe their tool should work that way, so they changed the code to do exactly what the LLM hallucination expected!
Also, in general, remember that human developers “hallucinate” ALL THE TIME and then catch themselves or check the documentation. So my point is that hallucinations don’t strike me as particularly important, and they don’t bother me as much as flawed reasoning does.