> If you are only interested in the most superficial tests and theories—like the Turing Test—then consider psychology conquered once you’ve tricked a human with your chat bot.
What's the counterargument? What's a less superficial test we could use instead, one that conclusively shows human minds aren't just very sophisticated LLMs? There isn't one -- this is just the same Chinese Room problem we've been debating for decades. The topmost poster is simply assuming that language models can't possibly understand the way a human does, without relying on any kind of "test" at all, which I think is the real scientific dead end here.
> (though in general I think the favored “alignment” frames of the LessWrong community are not even wrong).
The Turing Test doesn’t test humans, so you cannot use it to establish any properties of human minds.
Next!
> The topmost poster is simply assuming that language models can't possibly understand the way a human does, without relying on any kind of "test" at all, which I think is the real scientific dead end here.
If you are actually interested in this problem, why not try interpreting what I'm saying a bit more charitably instead of wasting your time replying with snark?