
Humans hallucinating about AI.


"OpenAI Researcher Hallucinates GPT-5 Math Breakthrough" could be a headline from The Onion.


"OpenAI Researcher Hallucinates GPT-5 Math Breakthrough" could be a headline from The Onion.

Off topic, but I saw The Onion on sale in the magazine rack of Barnes and Noble last month.

For those who miss when it was a free rag in sidewalk newsstands, and don't want to pony up for a full subscription, this is an option.


Seriously, those headlines are reaching Daily Mail levels of sensationalism.


In the old world we would just use the word bullshit.


They started believing the very lies they invented.


"The truth is usually just an excuse for a lack of imagination."


Humans "hallucinate" in the AI way constantly, which is why I don't see them as a barrier to LLMs replacing humans in many contexts. It really isn't unusual for a human to make stuff up or be unaware of stuff.


> Humans "hallucinate" in the AI way constantly

This claim is ambiguous. The use of the word "Humans" here obscures rather than clarifies the issue. Individual humans typically do not "hallucinate" constantly, especially not on the job. Any individual human who is as bad at their job as an LLM should indeed be replaced, by a more competent individual human, not by an equally incompetent LLM. This was true long before LLMs were invented.

In the movie "Bill and Ted's Excellent Adventure," the titular characters attempt to write a history report by asking questions of random strangers in a convenience store parking lot. This of course is ridiculous and more a reflection of the extreme laziness of Bill and Ted than anything else. Today, the lazy Bill and Ted would ask ChatGPT instead. It's equally ridiculous to defend the wild inaccuracy and hallucinations of LLMs by comparing them to average humans. It's not the job of humans to answer random questions on any subject.

Human subject matter experts are not perfect, but they’re much better than average and don’t hallucinate on their subjects. They also have accountability and paper trails, and unlike LLMs, they can be individually discredited for gross misconduct.


A human being informed of a mistake will usually be able to resolve it and learn something in the process, whereas an LLM is more likely to spiral into nonsense.


You must know people without egos. Humans are better at correcting their mistakes, but far worse at admitting them.

But yes, as edge case handlers, humans still have an edge.


LLMs by contrast love to admit their mistakes and self-flagellate, and then go on to not correct them. Seems like a worse tradeoff.


It's true that the big public-facing chatbots love to admit to mistakes.

It's not obvious to me that they're better at admitting their mistakes. Part of being good at admitting mistakes is recognizing when you haven't made one. That humans tend to lean too far in that direction shouldn't suggest that the right amount of that behavior is... less than zero.


Not when your goal is to create ASI: Artificial Sycophant Intelligence


And this is why LLMs are getting cooked.

They fed internet data into that shit, then basically "told" the LLM to behave, because, surprise surprise, humans can sometimes be far nastier.


You must know better humans than I do.


Do you think the OpenAI human, when informed of their "oopsie", replied "You're right, there is existing evidence that this problem has already been solved. Blah Blah Blah ... and that's why our new model has made a huge breakthrough against previously unsolved math problems!"


Humans are a bit better at knowing which things are important and doing more research. Also better at being honest when directly pressed. And infinitely better at learning from errors.

(Yes, not everyone, but we do have some mechanisms to judge or encourage)


> Humans "hallucinate" in the AI way constantly

This is more and more clearly false. Humans get things wrong certainly, but the manner in which they get things wrong is just not comparable to how the LLMs get things wrong, beyond the most superficial comparison.


It's the same thing with self-driving: if you can make it safer than a good human driver, that's enough. But the bar is pretty low with driving (as evidenced by the hundreds of thousands of collisions, deaths, and permanent disabilities each year), and rather high in scientific publishing.


Heh, stockholders are not hallucinating: they know very well what they are doing.


Retail investors? No way. The fever dream may continue for a while, but eventually it will end. Meanwhile, we don't even know our full exposure to AI. It's going to be ugly, and beyond burying gold in my backyard I can't even figure out how to hedge against this monster.


Yeah, no, I didn't mean retail investors; OpenAI is not publicly traded. But yeah, I do share your concern...


More like humans hallucinating about humans hallucinating about AI, see here: https://news.ycombinator.com/item?id=45634120


No no, OpenAI is actually secretly run by AI.


Best case: Hallucination

Worst case (more probable): Lying


Hanlon's Razor


They are expanding into the adult market because they are running out of ideas. I think common sense is enough to decide what is what here.


Lying is a stupid way of selling something and making money


> Lying is a stupid way of selling something and making money

Works for Elon.


These days, AIs just obsequiously praise whatever stupid ideas humans throw at them, which encourages humans to hallucinate breakthroughs.

But it's only a matter of time before AI gets better at prompt engineering.

/s?



