> to get some idea of how LLMs could appear to be 95% of the way there but still be structurally wrong to solve real-life problems that involve logic, arithmetic and everything else.
I'm not sure whether your Gödel, Escher, Bach reference is genuine, but Douglas Hofstadter himself has revised some of his views in light of LLMs.
The things you're complaining about are implementation details.
> People who know better act insulted when I remind them that neural networks don't repeal the laws of computer science (e.g. Godel, Turing and all that.)
There is absolutely no evidence that humans do either. That isn't to say I don't believe we do; for religious reasons, I think we do. But from a purely empirical perspective, I don't see how you can make that claim at all.
LLMs are autoregressive: to generate multi-word output, you take the output so far, feed it back in as input, and predict the next token. That loop provides all the basis needed for recursive thinking.
Perhaps you could say that current context-limited approaches cap the total memory of such a system. That is obviously true, but it is not functionally different from the finite memory of brains or computers.
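To make that loop concrete, here's a minimal sketch in Python. The bigram table is a hypothetical toy stand-in for a real model's forward pass, and the `window` parameter plays the role of the context limit:

```python
import random

# Hypothetical stand-in for a trained model's forward pass:
# a bigram table mapping the last token to plausible next tokens.
BIGRAMS = {
    "the": ["cat", "dog"],
    "cat": ["sat", "ran"],
    "dog": ["barked"],
    "sat": ["down"],
    "ran": ["<eos>"],
    "barked": ["<eos>"],
    "down": ["<eos>"],
}

def next_token(context):
    """Predict one token from the context seen so far."""
    return random.choice(BIGRAMS.get(context[-1], ["<eos>"]))

def generate(prompt, max_new=20, window=8):
    """Autoregressive loop: output at step t becomes input at step t+1."""
    tokens = list(prompt)
    for _ in range(max_new):
        tok = next_token(tokens[-window:])  # finite context window
        if tok == "<eos>":                  # model decides to stop
            break
        tokens.append(tok)                  # feed the output back in
    return tokens

print(" ".join(generate(["the"])))  # e.g. "the cat sat down"
```

A real LLM replaces the bigram lookup with a neural network conditioned on the whole window, but the feedback structure is exactly this.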