It is my understanding that LLMs have no such thing: empirical truth is just another weighted signal. For example, if Newton's laws are in conflict with another fact, the LLM will defer to the fact that it finds more probable in context. It will then require human effort to undo and unwind its core error, or else you get bewildering and untrue remarks or outputs.
> For example, if Newton's laws are in conflict with another fact, the LLM will defer to the fact that it finds more probable in context.
Which is the correct thing to do. If that context were, for example, an explanation of an FTL drive in a science fiction story, both LLMs and humans would be right to set Newton aside.
LLMs aren't Markov chains; they don't output naive word-frequency-based predictions. They build a high-dimensional statistical representation of the entirety of their training data, from which completions are then sampled. We already know this representation can identify and encode ideas as diverse as "odd/even", "fun", "formal language", and "Golden Gate Bridge". "Fictional context" vs. "real-life physics" is a distinction they can draw too, just like people can.
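For contrast, here's a toy sketch (Python; the tiny corpus and names are mine, purely illustrative) of what an actual Markov-chain, word-frequency predictor looks like: the next token depends only on bigram counts from the previous word, with no representation of wider context at all.

```python
from collections import Counter, defaultdict
import random

# Toy bigram Markov chain: the "naive word-frequency" predictor being contrasted above.
# It only knows how often word B followed word A in its training text; it has no
# notion of context, fiction vs. fact, or anything beyond the immediately previous word.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word purely by how often it followed `prev` in the corpus."""
    words, counts = zip(*bigram_counts[prev].items())
    return random.choices(words, weights=counts)[0]

print(next_word("the"))  # "cat" is twice as likely as "mat" or "fish"
```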
Where "probable" means: occurs the most often in the training data (approx the entire Internet).
So what is common online is most likely to win out, not some other notion of correctness.