
> Internally, it uses a sophisticated, multi-path strategy, approximating the sum with one heuristic while precisely determining the final digit with another. Yet, if asked to explain its calculation, the LLM describes the standard 'carry the one' algorithm taught to humans.

So, the LLM isn't just wrong, it also lies...
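
For anyone curious what "approximating the sum with one heuristic while precisely determining the final digit with another" might look like, here is a toy sketch in Python. This is purely illustrative, not how any real model computes; the function names and the reconciliation step are made up for this comment.

    # Toy sketch of a "multi-path" addition strategy:
    # one rough path estimates the magnitude, a second exact path
    # pins down the final digit, and the two are reconciled.

    def rough_magnitude(a: int, b: int) -> int:
        # Rough path: round each operand to the nearest ten, then add.
        return round(a, -1) + round(b, -1)

    def exact_last_digit(a: int, b: int) -> int:
        # Exact path: only the final digit, via modular arithmetic.
        return (a + b) % 10

    def combined_answer(a: int, b: int) -> int:
        # Snap the rough estimate to the nearest number ending in the exact digit.
        estimate = rough_magnitude(a, b)
        digit = exact_last_digit(a, b)
        base = estimate - estimate % 10
        candidates = (base - 10 + digit, base + digit, base + 10 + digit)
        return min(candidates, key=lambda c: abs(c - estimate))

    print(combined_answer(36, 59))  # 95

The reconciliation in this toy can still miss when the rough estimate lands exactly halfway between two candidates; the point is only that "approximate magnitude plus exact final digit" is a coherent strategy, and it has nothing to do with "carry the one", which is what makes the self-description quoted above so striking.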



The LLM has no relevant capacities, either to tell the truth or to lie. It generates "appropriate" text, given a history of cases of appropriate textual structures.

It is the reader, treating this text as if it were written by a person, who imparts these capacities to the machine and who treats the text as meaningful. But almost no text the LLM generates could be said to be meaningful, if any.

In the sense that if a two-year-old were taught to say, "the magnitude of the charge on the electron is the same as the charge on the proton", one would not suppose the two-year-old meant what was said.

Since the LLM has no interior representational model of the world, only a surface of text tokens laid out as if it did, its generation of text never comes into direct contact with a system of understanding that text. Therefore the LLM has none of the capacities ever implied by its use of language; it only appears to.

This appearance may be good enough for some use cases, but as an appearance, it's highly fragile.


One could always argue that the lie is in the ear of the receiver 8-/

I would argue that if the output of the LLM is to be interpreted as natural speech, and the output makes an authoritative statement that is factually incorrect but stated as if it were true, then this is a lie.

The problem is that the tech is presented as if it did have the internal state that you accurately describe it as not having.

The lie in this example is when it is prompted to describe the process by which it reached a result, and that description bears no resemblance to the actual process by which it reached the result.

This isn't a misrepresentation of some external fact, but a complete fabrication that does not represent how it reached that result at all.

However, many users will accept this information, since it only involves internal aspects of the tool itself.

The fact that the LLM doesn't have this introspective information is exactly part of why LLMs are NOT intelligence, artificial or otherwise.

And yet they are being presented as such, which is also a lie...


An LLM can't self-reflect. It doesn't know what happens in its own circuits. If you ask it, it will either tell you what it knows (from the articles about LLMs it has ingested) or, if it doesn't know, it will hallucinate something, as is often the case.

Since the LLM has no knowledge of how LLMs do addition, it will pick something that seems to make sense, and it picked the "carry the one" algorithm. New generations of LLMs will probably do better now that they have access to a better answer for that specific question, but that doesn't mean they have become more insightful.


Please see the reply to the comment above...


No, because the LLM is a tool without any feeling or consciousness, as the article rightfully points out. It doesn't have the ability to scrutinize its own internals, nor the ability to wonder whether that would be something relevant to do.

Those who lie (possibly even to themselves) are those who pretend that mimicry, if stretched far enough, will surpass the actual thing, and who foster deceptive psychological analogies like "hallucinate".


The LLM doesn't have a brain and it doesn't have consciousness, therefore it doesn't "hallucinate"; it just produces factually incorrect results.

It's just wrong, and then gives misleading explanations of how it got the wrong answer, following the same process that led to the wrong answer in the first place. Lying is a subset of being wrong.

The tech has great applications, so why hype the stuff it doesn't do well? Or apply terms that misrepresent the process the s/w uses?

One might say the use of the word "hallucinate" is an analogy, but it's a poor analogy, one which further misleads the lay public about what is actually happening inside the LLM and how its results are generated.

If you want to assert that "hallucinate" is an analogy, then "lying" is also an analogy.

If every prompt that ever went into an LLM was prefixed with: "Tell me a made up story about: ...", then the user expectation would be more in line with what the output represents.

I'm not averse to the tech in general, but I am against the rampant misrepresentation that's going on...


Talking about "truth" or "lies" with LLMs isn't helpful.


Could you get the CEO of Google or OpenAI to state that clearly in a press announcement? 8-)

Although "isn't helpful" is rather dodgy wording. "Helpful" for who? "Helpful" in what way?

I think most users would find it helpful if the output were not presented as correct when it's incorrect.

If every prompt that ever went into an LLM was prefixed with: "tell me a made up story about:", then the user expectation would be more in line with what the output represents.

But, that's not the way the corps are describing it, is it?



