Why do we keep getting people who say we understand LLMs?
Let me put it plainly. If we understood LLMs, we would understand why hallucinations happen, and we would then be able to control and stop them. But we can't. We can't control the LLM because we lack that understanding.
All the code sits on a computer where we can modify every single parameter. We have full access, and yet we can't control the LLM because we don't understand or KNOW what to do. This is despite the fact that we have absolute control over the value of every single atomic unit of an LLM.
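To illustrate the "full access" half of that claim, here is a minimal sketch using a hypothetical toy network (not any real LLM): every weight can be read and overwritten, yet that access by itself says nothing about why the model behaves the way it does.

```python
# Minimal sketch with a hypothetical toy network: we can enumerate and edit
# every parameter, but that access alone does not explain the model's behaviour.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

# Full read access: inspect every "atomic unit" (every weight and bias).
for name, param in model.named_parameters():
    print(name, tuple(param.shape))

# Full write access: overwrite any individual value we like.
with torch.no_grad():
    model[0].weight[0, 0] = 0.0  # set one specific weight to zero

# ...but the mapping from all these numbers to the model's outputs is
# exactly the part nobody can read off by inspection.
```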
I mean, I thought "we" did understand why they happen. It's a design decision to always provide an answer because an LLM never wants to say "I do not know". It might sometimes say "I cannot answer that" for compliance reasons, but never "I don't know".
This design decision is inherent to American culture and what they consider a "trustworthy person". Always having an answer is better than admitting a lack of knowledge.
I don't know the technical details of how the model handles incomplete information, but I feel I know the meta-reasoning behind it; a rough sketch of the mechanical side follows below.
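A toy illustration of the "always provide an answer" point, with a made-up vocabulary and a made-up confidence threshold (nothing here is taken from a real model): at each step the decoder turns scores into a probability distribution and picks some token. Saying "I don't know" only happens if an explicit abstention policy is bolted on top.

```python
# Toy sketch: a decoding step always emits *some* token unless an explicit
# abstention threshold is added on top. Vocabulary and threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["Paris", "London", "Berlin", "<eos>"]  # hypothetical toy vocabulary

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def decode_step(logits, abstain_threshold=None):
    """Pick the next token. Without a threshold, a token is always returned,
    even when the distribution is nearly flat (maximal uncertainty)."""
    probs = softmax(logits)
    best = int(np.argmax(probs))
    if abstain_threshold is not None and probs[best] < abstain_threshold:
        return "I don't know"  # an add-on policy, not something the model does by itself
    return VOCAB[best]

# Nearly flat logits: this toy "model" has almost no idea, yet it still answers.
uncertain_logits = rng.normal(0.0, 0.01, size=len(VOCAB))
print(decode_step(uncertain_logits))                         # prints some token anyway
print(decode_step(uncertain_logits, abstain_threshold=0.5))  # prints "I don't know"
```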
>Perhaps you do not understand it, but many software engineers do understand.
No, they do not. LLMs are by nature black-box problem-solving systems. That is not true of the other machines we have built: those may be difficult for some, or even most, humans to understand, but they allow specialists to work out WHY something is happening. For an LLM that question is unanswerable, no matter how good you are at Python or the math behind neural networks.
Perhaps you do not understand it, but many software engineers do understand.
The same can still be said of the human brain: we don't understand it.