
My intuition is that the difference between GP's analogy and the Chinese room is in the computing power of the system, in the sense of the Chomsky hierarchy[0] (as opposed to instructions per second).

In the Chinese room, the instructions you're given to manipulate symbols could be Turing-complete programs, and thus capable of processing arbitrary models of reality without you knowing about them. I have no problem accepting that the "entire room", as a system, understands Chinese.

In contrast, in GP's example, you're learning statistical patterns in a Thai corpus. You'll end up building some mental models of your own just to simplify things[1], but I doubt they'll "carve reality at the joints" - you'll overfit to patterns that reflect the regularities of Thai society living and going about its business. This may be enough to bluff your way through an average conversation (much like ChatGPT does successfully today), but you'll fail whenever the task requires you to use the kind of computational model your interlocutor uses.

Math and logic - the very tasks ChatGPT fails spectacularly at - are prime examples. Correctly understanding the language requires you to be able to interpret text like "two plus two equals" as a specific instance of "<number> <binary-operator> <number>"[2], and then execute it using learned abstract rules. This kind of factoring is closer to what we mean by understanding: you don't rely on surface-level token patterns, but match against higher-level concepts and models - Turing-complete programs - and factor the tokens accordingly.
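
To make that "factoring" concrete, here's a minimal sketch (the NUMBERS/OPERATORS tables and the evaluate() function are mine, invented purely for illustration, not a claim about how any model actually does this):

    # A made-up sketch of factoring tokens against a higher-level model:
    # map surface words to abstract roles, then execute the abstract form
    # with a rule, instead of recalling a memorized surface pattern.
    import operator

    NUMBERS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4}
    OPERATORS = {"plus": operator.add, "minus": operator.sub, "times": operator.mul}

    def evaluate(utterance):
        # Expect the pattern "<number> <binary-operator> <number> equals"
        tokens = utterance.lower().split()
        a, op, b = NUMBERS[tokens[0]], OPERATORS[tokens[1]], NUMBERS[tokens[2]]
        return op(a, b)  # computed by the abstract rule, not looked up

    print(evaluate("two plus two equals"))  # -> 4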

Then again, the Chinese room relies on the Chinese-understanding program being handed to you by some deity, while GP's example talks about building that program organically. The former is useful philosophically; the latter is something we can and do attempt in practice.

To complicate it further, I imagine the person in GP's example could learn the correct higher-level models given enough data, because at the center of it sits a modern, educated human being, capable of generating complex hypotheses[3]. Large Language Models, to my understanding, are not capable of it. They're not designed for it, and I'm not sure if we know a way to approach the problem correctly[4]. LLMs as a class may be Turing-complete, but any particular instance likely isn't.

In the end, it's all getting into fuzzy and uncertain territory for me, because we're hitting the "how the algorithm feels from inside" problem here[5] - the things I consider important to understanding may just be statistical artifacts. And long before LLMs became a thing, I realized that both my internal monologue and the way I talk (and the way others seem to speak) are best described as a Markov chain producing strings of thoughts/words that are then quickly evaluated and either discarded or allowed to grow further.
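
That last intuition is easy to caricature in code. The following is my own toy sketch (tiny corpus, arbitrary acceptance rule), just to show the propose-then-filter shape, not a claim about how brains or LLMs actually work:

    # Toy version of "propose with a Markov chain, then quickly filter":
    # the corpus and the acceptance rule are arbitrary placeholders.
    import random
    from collections import defaultdict

    corpus = "the room follows the rules the rules follow the room".split()

    successors = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        successors[a].append(b)  # bigram counts stand in for "the chain"

    def acceptable(prev, candidate):
        return candidate != prev  # stand-in for the fast evaluate-and-discard step

    word, output = "the", ["the"]
    for _ in range(6):
        candidate = random.choice(successors[word]) if successors[word] else word
        if acceptable(word, candidate):  # keep it and let the string grow
            output.append(candidate)
            word = candidate
    print(" ".join(output))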

--

[0] - https://en.wikipedia.org/wiki/Chomsky_hierarchy

[1] - On that note, I have a somewhat strong intuitive belief that learning and compression are fundamentally the same thing.

[2] - I'm simplifying a bit for the sake of example, but then again, generalizing too much won't be helpful either, because most people only have a procedural understanding of a few of the most common mathematical objects, such as real numbers and addition, rather than a more theoretical understanding of algebra.

[3] - And, of course, exploit the fact that human languages and human societies are very similar to each other.

[4] - Though taking a code-generating LLM and looping it on itself, so it iteratively self-improves, sounds like a potential starting point (a rough sketch follows after these notes). It's effectively genetic programming, but with a twist: your starting point is a large model that already embeds some implicit understanding of reality, by virtue of being trained on text produced by people.

[5] - https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-alg...
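
Regarding [4], the loop I have in mind is roughly the sketch below. llm_rewrite() and score() are placeholders I invented - nothing here calls a real model or benchmark - it only shows the generate/evaluate/select shape:

    # Rough shape of the loop from [4]: treat the LLM as a mutation operator
    # over programs and keep whichever variant scores best on some objective.
    import random

    def llm_rewrite(program):
        # Placeholder for "ask a code-generating model to improve this program".
        return program + "  # variant " + str(random.randint(0, 999))

    def score(program):
        # Placeholder for an objective: test pass rate, benchmark score, etc.
        return random.random()

    def self_improve(seed_program, generations=10, children=4):
        best, best_score = seed_program, score(seed_program)
        for _ in range(generations):
            candidates = [llm_rewrite(best) for _ in range(children)]
            for candidate in candidates:
                s = score(candidate)
                if s > best_score:  # selection step
                    best, best_score = candidate, s
        return best

    print(self_improve("def solve(x): return x"))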



> I have no problem accepting that the "entire room", as a system, understands Chinese.

> you'll fail whenever the task requires you to use the kind of computational model your interlocutor uses.

I think it's important to distinguish between knowing the language and knowing anything about the stuff being discussed in the language. The top-level comment all this is under mentioned knowing what a bag is or what popcorn is. These don't require computational complexity, but they do require data other than just text, and a model that can relate multiple kinds of input.


To be clear, transformer networks are Turing-complete: https://arxiv.org/abs/2006.09286



