I wish we called hallucinations what they really are: bullshit. LLMs don’t perceive, so they can’t hallucinate. When a person bullshits, they’re not hallucinating or lying, they’re simply unconcerned with truth. They’re more interested in telling a good, coherent narrative, even if it’s not true.
I think this need to bullshit is probably inherent in LLMs. It’s essentially what they are built to do: take a text input and transform it into a coherent text output. Truth is irrelevant. The surprising thing is that they can ever get the right answer at all, not that they bullshit so much.
In the same sense that astrology readings, tarot readings, runes, augury, and tea-leaf readings are bullshit - they have an oracular epistemology. Meaning comes from the querent suspending disbelief, forgetting for a moment that the I Ching is merely sticks.
It's why AI output is meaningless for everyone except the querent. No one cares about your horoscope. AI shares every salient feature with divination except the aesthetics. The lack of candles, robes, and incense - the pageantry of divination - means a LOT of people are unable to see it for what it is.
We live in a culture so deprived of meaning we accidentally invented digital tea readings and people are asking it if they should break up with their girlfriend.
People use divination for all kinds of real-world purposes - when to have a wedding, where to buy a house, the stock market, what life path to take, whether to stay with or break up with a partner. Asking for code is no different, and we shouldn't pretend that turning the temperature to 0 makes it non-divinatory.
Randomness, while typical, is not a requirement for divination. It simply replaces the tarot deck with a Ouija board.
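For readers unfamiliar with the parameter being discussed: temperature rescales a model's output distribution before a token is sampled, and at temperature 0 sampling collapses to a deterministic argmax (greedy) pick. A minimal sketch of the standard softmax-with-temperature scheme - not any particular vendor's implementation:

```python
import math
import random

def sample(logits, temperature):
    """Pick a token index from raw logits; temperature 0 means greedy argmax."""
    if temperature == 0:
        # Deterministic: always return the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Scale logits by temperature, then softmax into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random draw - this is where the "randomness" lives.
    return random.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.1]
print(sample(logits, 0.0))  # always 0: the same prompt yields the same output
print(sample(logits, 1.0))  # any index, weighted by probability
```

Note that temperature 0 only removes the sampling noise; the distribution itself, and everything upstream of it, is unchanged - which is the point being made above.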
What's being asked for is a special carve-out, an exception, and for no better reason than feeling above those other people and their practice - a practice that isn't my practice, mine being, of course, the correct and true one.
This is exactly what I've been saying: it's not that LLMs sometimes "hallucinate" and thus provide wrong answers, it's that they never even provide right answers at all. We as humans ascribe "rightness" to the synthetic text extruded by these algorithms after the fact as we evaluate what it means. The synthetic text extruder doesn't "care" one way or another.
Or maybe we could stop anthropomorphizing tech and call the "hallucinations" what they really are: artifacts introduced by lossy compression.
No one is calling the crap that shows up in JPEGs "hallucinations" or "bullshit"; it's a commonly accepted side effect of a compression algorithm that makes up shit that isn't there in the original image. Now we're doing the same lossy compression with language, and suddenly it's "hallucinations" and "bullshit" because it's so uncanny.
> Or maybe we could stop anthropomorphizing tech and call the "hallucinations" what they really are: artifacts introduced by lossy compression.
That would be tantamount to removing the anti-gravity boots these valuations depend on. A pension fund manager would look at the above statement and think, "So it's just heavily subsidized, energy-intensive, buggy software that needs human oversight to deliver value?"