> Only if you redefine "reasoning". This is something that the generative AI industry has succeeded in convincing many people of, but that doesn't mean everyone has to accede to that change.
I agree. However, they can clearly do a reasonable facsimile of many things that we previously believed required reasoning to do acceptably.
Right -- we know that LLMs cannot think, feel, or understand.
Therefore whenever they produce output that looks like the result of those things, we must either be deceived by a reasonable facsimile, or we simply misapprehended their necessity in the first place.
But, do we understand the human brain as well as we understand LLMs?
Obviously there's something different, but is it just a matter of degree? LLMs have greater memory than humans, and lesser ability to correlate it. Correlation is powerful magic. That's pattern matching, though, and I don't see a fundamental reason why LLMs won't get better at it. Maybe never as good as (smart) humans are, but given their superior memory, maybe that will often be adequate.
(Memory Access + Correlation Skills) is a decent proxy for several of the many kinds of human intelligence.
HDDs don't have correlation skills, but LLMs do. They're just not smart-human-level "good", yet.
I am not sure whether I believe AGI will happen. To be meaningful, it would have to be above the level of a smart human.
Building an army of disincorporated average-human-intelligence actors would be economically "productive" though. This is the future I see us trending toward today.
Most humans are not special. This is dystopian, of course. Not in the "machines raise humans for energy" sort of way, but probably no less socially destructive.
It stops exhibiting human intelligence, on at least some of the many axes thereof, yes definitely.
I feel like you're trying to gotcha me into some corner, but I'm not sure you're reading my comments fully. Or perhaps I'm being less clear than I think.
I don't mean to be ungracious, but am I missing something here?
It's not a gotcha. I just don't think you're considering the implications of what you're saying when you frame it only in terms of being able to fake thought with statistics.