> The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible.
Mathematically impossible? Really? I wonder how much of this book has already been disproved by the existence of GPT-4. (One of the chapter titles is "Why machines will not master human language". LOL.)
Totally agree, great read. AFAICT the core argument is that "complex systems" are complex, and machines are less complex than brains because brains have quantum magic. I feel like that's a MASSIVE oversimplification of complexity theory (based on the very light touches I got in school years ago), kinda like papers claiming that the presence of "imaginary" numbers proves there's a multiverse, or other misapplications of terminology.
Plus this chapter list is funny:
Part III: The limits and potential of AI
9. Why there will be no machine intelligence
9.1 Brain emulation and machine evolution
9.2 Intentions and drivenness
9.3 Consciousness
9.4 Philosophy of mind, computation and AI
9.5 Objectifying intelligence and theoretical thinking
10. Why machines will not master human language
10.1 Language as a necessary condition for AGI
10.2 Why machine language production always falls short
10.3 AI conversation emulation
10.4 Mathematical models of human conversations
10.5 Why conversation machines are doomed to fail
11. Why machines will not master social interaction
11.1 No AI emulation of social behaviour
11.2 AI and legal norms
11.3 No machine emulation of morality
Of all those, I think _one_ subsection clearly relates to math?
I hope this isn't too snarky; thanks to the original commenter for posting it. I love looking back at stuff like this either way, and this post in particular brought me much joy. Sadly not gonna pick this one up lol
One of the authors is a philosopher (mostly a realist, working in logic and ontology), so that's a different angle from a math-based book, that's for sure. That's also what makes it interesting compared to most other books that have come out recently.
GPT does not understand language; we have many examples that show that.
It has other qualities that make it good at interacting with humans using language, but it is never "understanding".
That's one of the points they make in the book.
One of the authors recently made a video about exactly your point: https://m.youtube.com/watch?v=s6DA4EcsLm8 This may be easier for you to grasp than the book, which requires careful reading (and understanding) of the introduction, where they define their premises.
GPT-4 can answer test questions, but could it have come up with what Einstein published in 1905 if it had all the information about the world up to say, 1902?
Is "can develop theories otherwise unknown to anyone but one of the most accomplished scientists of all time" really a reasonable bar for "an artificial intelligence that could equal… human intelligence"?
It is dense and opinionated but even if you may disagree at the end, it is absolutely worth a read.