
I'm not disputing the value of what these tools can do, even though that is often inflated as well. What I'm arguing against is using language that anthropomorphizes them to make them appear far more capable than they really are. That's dishonest at best, and only benefits companies and their shareholders.



> anthropomorphizes them to make them appear far more

It seems like this argument is frequently brought up just because someone used words like "thinking" or "reasoning". While it's true that LLMs aren't really "reasoning" as a human does, the terms are used not because the person actually believes the LLM is reasoning like a human, but because the concept of "emit some junk tokens now to get better tokens later" has been implemented under that name. And even with that name, it doesn't mean everyone believes the models are doing human reasoning.

It's a bit like "isomorphic" programming frameworks. They're not talking about the mathematical structures that also bear the name "isomorphic"; rather, the name has been "stolen" to mean more things, because the new concept was kind of similar in some way.

I'm not sure what the alternative is. Humans have been doing this thing of "Ah, this new concept X is kind of similar to concept Y, maybe we reuse the name to describe X for now" for a very long time, and if you understand the context when it's brought up, it seems relatively problem-free to me; most people seem to get it.

It benefits everyone in the ecosystem when terms have a shared meaning, so discussions about "reasoning" don't have to use phrasing like "how an AI uses jumbled starting tokens within the <think> tags to get better tokens later", and can instead just say "how an AI uses reasoning", so people can focus on the actual meat instead.
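For anyone who hasn't seen what those <think> tags look like in practice, here's a minimal sketch. It assumes a model (like DeepSeek-R1) that wraps its "reasoning" tokens in <think>...</think>; the tag name and the example output are illustrative, not from any specific API:

    import re

    def strip_reasoning(output: str) -> str:
        # The "reasoning" is just ordinary tokens the model emits first,
        # delimited by <think>...</think>; the answer is whatever follows.
        return re.sub(r"<think>.*?</think>", "", output, flags=re.DOTALL).strip()

    raw = "<think>The user wants 2+2. That's 4.</think>The answer is 4."
    print(strip_reasoning(raw))  # -> "The answer is 4."

Nothing human-shaped is happening there: it's the same next-token sampling throughout, and the tags just let tooling separate the scratchpad from the answer.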



