
Read again; I said he’s arguing that the LLM (i.e. thermometer in your example) is the thing that can’t be true or false. Its utterances (the readings of your thermometer) can be.

This would be unlike a human, who can be right or wrong independently of any utterance, because a human has a mind and beliefs.



A human can be categorically wrong? Please explain.

And of course a thing in and of itself, be it an apple, a dog or an LLM, can’t be true or false.


I’ll cut to the chase. You’re hung up on the definition of words as opposed to the utility of words.

That classical or quantum mechanics are at all useful depends on the truthfulness of their propositions. If we cared about the process, then we would let the non-intuitive nature of quantum mechanics enter into our judgment of the usefulness of the science.

The better question to ask is whether a tool, be it a book, a thermometer, or an LLM, is useful. Error rates affect utility, which means that distinctions between correct and incorrect signals matter more than attempts to define arbitrary labels for the tools themselves.

You’re attempting to discount a tool based on everything other than its utility.





