> The article skirts around a central question: what defines humans? Specifically, intelligence and emotions?
> The entire article is saying "it looks kind of like a human in some ways, but people are being fooled!"
> You can't really say that without at least attempting the admittedly very deep question of what an authentic human is.
> To me, it's intelligent because I can't distinguish its output from a person's output, for much of the time.
I think the article does address that rather directly, and that it also speaks very specifically to your sentence about what you can and can't distinguish.
LLMs are not capable of symbolic reasoning[0], and if you understand how they work internally, you will realize they do no reasoning whatsoever.
Humans and many other animals are fully capable of reasoning outside of language (in the former case, prior to language acquisition). The reduction of "intelligence" to "language" is a category error made by people falling victim to the ELIZA effect[1], not the result of these particular statistical methods summing to real intelligence of any kind.
Or maybe the question is: an LLM can do symbolic reasoning, but can it do it well? People forget that humans are not great at symbolic reasoning either. Humans also rely on a lot of kludgy hacks to do it; it isn't really that natural.
A common example is that LLMs don't do math well. But humans also don't do math well. The way humans are taught to do division and multiplication really is a little algorithm. So what is the difference between a human following an algorithm to do a multiplication and an LLM calling some Python to do it? Does that mean the LLM can't symbolically reason about numbers? Or that humans also can't?
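To make the "school arithmetic is itself an algorithm" point concrete, here is a minimal sketch of grade-school long multiplication written out as the rote procedure it is (the function name and the restriction to non-negative integers are my own choices for illustration):

```python
def long_multiply(a: int, b: int) -> int:
    """Grade-school long multiplication for non-negative integers:
    multiply digit by digit, carry, and shift -- the same rote
    procedure humans are taught, executed mechanically."""
    xs = [int(d) for d in str(a)][::-1]  # digits, least significant first
    ys = [int(d) for d in str(b)][::-1]
    result = [0] * (len(xs) + len(ys))
    for i, x in enumerate(xs):
        carry = 0
        for j, y in enumerate(ys):
            total = result[i + j] + x * y + carry
            result[i + j] = total % 10   # keep one digit in this column
            carry = total // 10          # carry the rest to the next column
        result[i + len(ys)] += carry
    return int("".join(map(str, result[::-1])))
```

Nothing here requires understanding what multiplication *means*; a human following these steps and a machine executing them are doing the same symbol manipulation.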
> the reduction of "intelligence" to "language" is a category error made by people falling victim to the ELIZA effect[1], not the result of these particular statistical methods summing to real intelligence of any kind.
I sometimes wonder how many of the people most easily impressed with LLM outputs have actually seen or used ELIZA or similar systems.
0: https://arxiv.org/pdf/2410.05229
1: https://en.wikipedia.org/wiki/ELIZA_effect