
An AI doesn't need to think in the same way humans think. It just needs to achieve results that are better than, or at least equal to, what humans achieve.

The same question was asked of chess "AI" in the past: that a chess AI isn't thinking, it's "just" searching through all the possibilities, and so on. And yet the result is that no human can beat chess engines nowadays.
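For illustration only (the "moves", "apply_move" and "evaluate" hooks below are placeholder names, not any real engine's API), the "just searching" part in its most naive form is plain minimax over the game tree:

    # Minimal minimax sketch: explore moves recursively, assume both sides
    # play the move that is best for them, and score positions at the horizon.
    # The three callables are hypothetical hooks a caller would supply.
    def minimax(state, depth, maximizing, moves, apply_move, evaluate):
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state)  # static evaluation when the search stops
        scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                          moves, apply_move, evaluate) for m in legal)
        return max(scores) if maximizing else min(scores)

Real engines add pruning, heuristics and, these days, learned evaluation on top, but the core is still search rather than anything resembling human deliberation.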

"The question of whether computers can think is about as interesting as the question whether submarines can swim" - Dijkstra.

That an LLM does not need to think to produce the output we want seems fairly uncontroversial. However, a statement like “LLMs may think, just not in the same way humans think, to produce the output we want” is problematic.

“The same way humans think” is the only kind of “think” that matters, for all intents and purposes. If we cannot define what it specifically is, because any attempt loops us immediately back to the definition of consciousness and the like, then the most precise definition of it will have to be along the lines of “the sort of thing that goes on in human minds”.


The question is whether it's worth all the money and resources if it isn't thinking at all, since then it isn't a path to AGI.

Would you invest so much in a bunch of if-then-else lists?
