I've been having "AI" debates like this for about 10 years now, and I think they usually go in 1 of 2 directions:
1. We don't know what intelligence is
2. AI can never be intelligent because humans are special (in various ways)
Of the two, I think 1 is the more compelling to talk about. Let's look at state-of-the-art Large Language Models (GPT, BERT, BART, T5, etc.). Everyone claims they can't be intelligent because they're just cleverly predicting the next tokens. The most commonly cited failure mode is hallucination - if you ask them to do something for you, they'll get it wrong in a way that almost makes sense. There are subtler problems as well, like common-sense reasoning, negation, and factuality. We could say that because of these problems they are not "intelligent". But why is that so important? Can we say with certainty that human intelligence is more than just patterned IO? If it is just highly tuned patterned IO with the environment, perhaps we have discovered intelligent systems, but they're handicapped because they're limited in their sensory perception (read: data modalities). And perhaps by combining several of these models in clever ways, we will end up with an architecture for patterned IO that is indistinguishable from human intelligence.
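To make "just cleverly predicting the next tokens" concrete, here's a minimal sketch of greedy next-token generation; the gpt2 checkpoint and the Hugging Face transformers library are illustrative choices on my part, not something claimed by the thread above.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Model choice is illustrative; any causal LM works the same way.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

    # "Predicting the next token": repeatedly append the single most probable
    # continuation. Real systems usually sample from the distribution rather
    # than taking the argmax, but the principle is the same.
    with torch.no_grad():
        for _ in range(5):
            logits = model(ids).logits        # shape: (1, seq_len, vocab_size)
            next_id = logits[0, -1].argmax()  # most likely next token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))

Whether that loop, scaled up and wired to richer inputs, counts as "intelligence" is exactly the question.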
The naysayers claim that this won't work because we'll still end up with mere pattern prediction machines. But this starts to look like a "humans are special" argument.
Well, it will be interesting to see how this develops. At some point we will have systems powerful enough to process and learn in real time, using sensors equivalent to human senses (or even more capable). At that point, if we can successfully model and mimic a typical human, why should it matter that it's not a human?
As for the point about hallucination, I remember a funny story. I once tripped on the curb and fell; my foot ached for a week. My then 4-year-old daughter took her first-aid kit for dolls and tried to "cure" my foot. My mother heard the story and found it cute, so she asked my daughter: "Will you cure me like that, too?" My daughter looked stupefied and answered: "Are you going to trip and fall, grandma?"
My feeling is that the missing links will be found one day and the AI of the future will be able to apply more adult-like "reasoning."
Does it even matter "what intelligence is"? Much like with "life" [0], the difficulty seems to come from being unable to define it, rather than from being unable to "find" it. There are multiple ways it can be defined, based on a bunch of different properties, and each definition delivers a different outlook.
Similar to "life", we use "intelligence" in everyday speech without specifying which definition we mean. I don't think that's going to change – it's just as unproductive to limit "life" to a single definiton (what about viruses? unconsciousness? ecosystems?) as it would be with "intelligence" (pets? ants? being able to converse with a human? showing reasoning? creativity?).
But that also means that the popular term "AI" will never be precise.
As a materialist in matters of the mind, I regard proposition 2 as an unverifiable belief of those who hold it, but I regard proposition 1 as simply a statement of how things currently are: at this point, we do not, in fact, know what intelligence is.
To say that it is "just" highly tuned patterned IO with the environment would be so broad as to be meaningless; all the explanation is being brushed away by that "just", and in the current state of research, no-one has either demonstrated AI or explained intelligence with sufficient specificity for this to be a clearly true synopsis of our knowledge.
You are not quite asserting that to be so, however; you simply posed the question of whether it is. In so doing, you are shifting the burden of proof, and proposition 1 stands until someone settles the issue by presenting a testable - and tested - theory of intelligence (note that I wrote a theory of intelligence, not a theory about intelligence; we have plenty of the latter that do not rise to the level of being the former).
My attitude to the current crop of models is that they demonstrate something interesting about the predictability of everyday human language, but not enough to assume that simply more of the same (or something like it) will amount to AI - we seem to be missing some important parts of the puzzle. If a language model can come up with a response coherently explaining why I am mistaken in so thinking, then I will agree that AI has been achieved.