
Hofstadter says humans will be like cockroaches compared to AI. This is an oft-repeated line: sometimes we are ants or bacteria. But I think these comparisons might be totally wrong.

I think it's very possible there's an Intelligence Completeness theorem analogous to Turing Completeness: a theorem that says intelligence is in some ways universal, and that our intelligence will be compatible with all other forms of intelligence, even if they are much "smarter".

Cockroaches are not an intelligent species, so they cannot understand our thoughts. But humans are intelligent: human languages have a universal grammar and can be indefinitely extended with new words. I think this puts us in the intelligent-species club, and all species in that club can discuss any idea.

AI might eventually be able to think much quicker than us, to see patterns and make insights better and faster than we can. But I don't think that makes us cockroaches. I think if they are that smart, they are by definition smart enough to explain any idea to us, and with effort we'll be able to understand it and contribute our own thoughts.



Humans have limited "working memory". We manage to cram more into it via hierarchical decomposition into "chunks": single concepts that hide more complexity inside.
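
A toy sketch of that chunking move (the phone number here is invented, purely for illustration):

    # The same 10 digits either occupy ten working-memory slots,
    # or three slots once grouped into hierarchical "chunks".
    raw = list("4155550123")          # 10 separate items
    chunked = ["415", "555", "0123"]  # 3 chunks, each internally complex
    print(len(raw), "items flat vs", len(chunked), "items chunked")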

I submit that not everything can be hierarchically decomposed in a way that's useful, i.e. any "abstraction" you try to force on it is more leaky than non-leaky, in that it doesn't simplify its interactions with other chunks. You might say it's the wrong abstraction, but there's no guarantee there is a right abstraction. Some things are just complex. (This is hypothetical, since I don't think we can conceive of any concepts we can't understand.)

An AI could have an arbitrarily large working memory.

Note: I'm talking about intuitive understanding. We could use it mechanically yet never "get it", cowering before icons, like the one in Searle's Chinese Room: https://wikipedia.org/wiki/Chinese_room


I suspect the limit of what can be expressed in human language and comprehended by the human mind is vast, but yes, not infinite. I think the AIs will absolutely saturate the connection between them and us, with a non-stop torrent of information which will range from useful to civilization-changing.

And I think this is all very unlike how we are currently impacting the lives of cockroaches with our insights about, well, anything. Thus, it's not a good analogy.


> A theorem that says intelligence is in some ways universal, and that our intelligence will be compatible with all other forms of intelligence, even if they are much "smarter".

I would not be so reductionist. Intelligence doesn't seem to be a universal thing; even IQ (a human-invented metric) is measured in terms of statistics. If you have an IQ of ~60 you have intelligence, but a completely different one from someone with an IQ above 85.

> But humans are intelligent: human languages have a universal grammar and can be indefinitely extended with new words. I think this puts us in the intelligent-species club, and all species in that club can discuss any idea.

Humans have different intelligences. You can be intelligent (per the human definition of intelligence) yet ignorant of math. Again, this implies intelligence as we know it is not a universal thing at higher levels: not everyone can earn a physics Ph.D., just as not everyone can be a good artist where good technique is recognizable; the same goes for music, etc.

Yes, a cockroach is on another level of intelligence (or non-intelligence), but that does not mean there is no super-intelligence that makes us relative cockroaches.

Also, without any intention of talking about religion or "intelligent design", we can theorize that the Universe is supersmart because it creates intelligent creatures, even if it is not conscious of that. I would be very cautious about defining intelligence in a universal way.


My point is you cannot teach a cockroach calculus, but if AIs invent a new type of math, they would be able to teach it to us. That's my claim. So the analogy "we are cockroaches compared to the AI" is wrong; that won't be the case.

Once you have "enough" intelligence to have a complex language, like we do, I'm claiming you are in the club of intelligent species, and all species in that club can communicate ideas with each other, even if the way they natively think is quite different.


I'm reminded of how one writes programs. I cannot maintain the state of the machine in my head, but I can convince myself of its workings, its intelligence, by reading the code, following along with its line of reasoning, as it were. I think Intelligence Completeness may boil down to the very same Church-Turing thesis.
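
For instance, a minimal sketch: I can't simulate every state of this loop in my head, but I can convince myself it's correct by following the invariant in the comments:

    def binary_search(xs, target):
        """Return an index of target in sorted xs, or -1 if absent."""
        lo, hi = 0, len(xs) - 1
        # Invariant: if target is in xs, it lies within xs[lo..hi].
        while lo <= hi:
            mid = (lo + hi) // 2
            if xs[mid] == target:
                return mid
            elif xs[mid] < target:
                lo = mid + 1   # target, if present, is right of mid
            else:
                hi = mid - 1   # target, if present, is left of mid
        return -1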


Yes, I agree it might be the same thing under the hood. But with intelligence, many very smart people seem to fall into using these analogies that diminish humans in a way I don't think is accurate. And I feel that makes people more scared of AI than they need to be, makes AI seem totally alien. [1]

The AIs might spit out entire fields of knowledge, and it might take humans decades of study to understand it all. And no single human might actually understand it all at the same time. But that's how very advanced fields of study already are.

But the "cockroach" slur implies AIs would be in this other stratosphere having endless discussions that we cannot remotely grok. My guess is that won't happen. Because if the AI were to say "I cannot explain this to you" I'd take that as evidence it wasn't all that intelligent after all.

[1] - https://metastable.org/alien.html


AlphaZero invented new moves in the game of Go, but it can't 'teach' them to us; it can only show us the moves and let us figure them out for ourselves (which we're doing). But note that despite this transfer of knowledge, humans didn't rise to the level of AlphaZero, and they may never be able to. As a sibling comment points out, some things are computationally bound, and humans have a limit to computational ability (we can't give ourselves more neurons/connections), whereas AI does not.


I never said we'll mentally rise to the level of AIs. That won't happen. I only said AIs will be able to communicate complex ideas to us, in a way that's totally unlike our ability to communicate complex ideas to cockroaches.

For example, if the AIs could design a complicated building using our level of technology, they could explain to us how to build that building. And we could build it. Whereas if we come up with a better cockroach-house design, we cannot communicate it to the cockroaches; we simply cannot give them the information. So AI->us is a very different relationship from us->cockroach.

This doesn't preclude that there might be some things the AI cannot explain to us, only that there will be many things (infinitely many, in fact) which they can explain to us.


> but it can't 'teach' them to us

I always thought you could ask GPT to illustrate the steps it took to arrive at the answer. I mean, it can take you through the process it went through to arrive at the answer. It's as close as you get to an explanation.


> but if AIs invent a new type of math, they would be able to teach it to us

There are already math proofs, made by humans today, that run hundreds upon hundreds of pages of highly advanced lemmas building on other advanced results. Understanding such a proof is an undertaking that literally takes years. An AI might end up producing one in minutes. But what an AI could cook up in years could take a human... several lifetimes to understand.

As another example, take the design and fabrication of a modern microprocessor. There are so many layers of complexity involved, I would bet that no single person on this planet has all the end-to-end knowledge required to manufacture one.

As soon as the complexity of an AI's knowledge reaches a certain point, it essentially becomes unteachable in any reasonable amount of time. Perhaps smaller sub-parts could be distilled and taught, but I think it's naive to assume all knowledge can be sliced and diced into human-bite-sized chunks.
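
A back-of-envelope on that point (the classification of finite simple groups is often cited at roughly this page count; the reading pace is an assumption):

    pages = 10_000
    pages_per_day = 3                # careful, lemma-by-lemma pace (assumed)
    years = pages / pages_per_day / 365
    print(round(years, 1), "years")  # ~9.1 years just to read it once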


I agree "one AI" might produce output that keeps humans busy for decades. But that doesn't make us cockroaches. Cockroaches can't understand language, at all. You can't teach a cockroach calculus if had a trillion years. That's not our position relative to AIs. We will be learning shit-tons from them constantly. I think people who say humans will be "cockroaches" or "ants" or "bacteria" are fear-mongering, or just confused.


GPT-4 takes a shot:

> Imagine an AI that could fundamentally alter its own sensory perception and cognitive framework at will. It could “design” senses that have no human equivalent, enabling it to interface with data and phenomena in entirely novel ways.

> Let’s consider data from a global telecommunication network. Humans interface with this data through screens, text, and graphics. We simplify and categorize it, so we can comprehend it. Now imagine that the AI “perceives” this data not as text on screens, but as a direct sensory input, like sight or hearing, but far more intricate and multidimensional.

> The AI could develop senses to perceive abstract concepts directly. For instance, it might have a “sense” for the global economy’s state, feeling fluctuations in markets, workforce dynamics, or international trade as immediately and vividly as a human feels the warmth of the sun.

> Simultaneously, it can adapt its cognition to process this vast and complex sensory input. It could rearrange its cognitive structures to optimize for different tasks, just as we might switch between different tools for different jobs.

> At one moment, it might model its cognition to comprehend and predict the behaviors of billions of individuals based on their online data. The next moment, it might remodel itself to solve complex environmental problems by processing real-time data from every sensor on Earth.

> In essence, the AI becomes a cognitive chameleon, continually reshaping its mind to interact with the universe in ways that are most effective and efficient. Its thoughts in these diverse cognitive states would likely be so specialized, so intricately tied to the vast sensory inputs and complex cognitive models it’s employing, that they are essentially impossible to translate into human language.


Yeah, I don’t see a reason for any AI to be able to translate all concepts to human thoughtspace. If an AI is able to have exponentially more possible thoughts than a human, then only a tiny subset would be understood by humans.

It’d be like trying to fit GPT-4 onto a floppy disk.
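
Rough numbers, since GPT-4's size is undisclosed; this uses GPT-3's published 175B parameters as a stand-in:

    floppy_bytes = 1_440_000            # 3.5" floppy, ~1.44 MB
    params = 175_000_000_000            # GPT-3; GPT-4 assumed larger
    model_bytes = params * 2            # 2 bytes per fp16 weight
    print(model_bytes // floppy_bytes)  # ~243,000 floppies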


A floppy disk is a fixed size. The number of thoughts human language can convey is infinite. English Wikipedia has 6.6M articles. The AI could drop a Wikipedia-sized batch of articles, expertly written and cross-referenced, every day, forever. At the same interval it could drop 100 million YouTube videos, expertly authored and hyper-clear.

So yes, there might be an infinite amount they cannot convey, but there is also an infinite amount they can convey. I guess it's a glass-half-empty test: whether you are happy about the infinity you get, or just sad about the infinity you don't get.


There are an infinite number of rational numbers, but if you only understood the rationals then you wouldn’t understand pi or e. Or even sqrt(2).

Infinities can be very constraining.
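
The classic parity argument shows why: if sqrt(2) = p/q in lowest terms, then p^2 = 2q^2, so p is even, say p = 2k; then q^2 = 2k^2, so q is even too, contradicting lowest terms. A small runnable illustration (the recurrence generates sqrt(2)'s continued-fraction convergents):

    from fractions import Fraction

    # Convergents of sqrt(2): 1, 3/2, 7/5, 17/12, ... They get
    # arbitrarily close, but none ever squares to exactly 2.
    x = Fraction(1)
    for _ in range(8):
        x = 1 + 1 / (1 + x)
        print(x, "squared =", x * x, "equals 2?", x * x == 2)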


That the AI's thinking might be more advanced than ours is not in dispute. What's different about humans->AI compared to cockroaches->humans is language. Imagine we take our Library of Congress, with 50M books, and create a cockroach version of the library. It's a totally pointless exercise.

Now imagine being an AI and creating a human-readable library with 50M AI-written books for us to read. They could easily do that. And then create 50M more, again and again. And they could read every book we wrote. And forget books: humans and AIs could have hundreds of millions of simultaneous real-time video conversations, forever, on any topic.

So being a human in an AI world is nothing like being a cockroach in a human world. Sam Harris used the same analogy but said we were ants instead of cockroaches; I've heard bacteria also. I think people trot out these bad analogies strictly because they sound dramatic, and being dramatic seems like a good way to get people's attention. Or else they just didn't think it through.

Human language is a Big Big Deal. It's a massive piece of cognitive technology. Any intelligent species with language is in the club and can communicate with all other intelligent species, even if those species have very different cognitive capabilities.


I think you’re extrapolating from yourself: you think you could learn any field given enough time. What about the type of person with a fixed mindset who thinks they aren’t good at math or chemistry? If AI can’t train them for whatever reason, even if the reason is that the person is stubborn and/or willfully ignorant, are they more like a cockroach than a person?

What if someone tries really hard for a long time and can’t learn a field? Do they fail the intelligence test, or does their teacher?


I'm talking about the entire human species, not myself or any one person. I'm saying that humans relating to AIs would not be like cockroaches relating to people. Cockroaches don't have human-level language, but we do, and I'm proposing it is generative and extensible enough to explain any idea. I'm proposing there are non-intelligent species and intelligent species, but no intelligent++ species that would look down on us as cockroaches. I'm claiming that won't happen.


> smart enough to explain any idea to us

Are dogs, or pigs, or whales part of the intelligence club? They are clearly intelligent beings with problem-solving skills. We won't be teaching them basic calculus any time soon.


No non-human animals are in the club, which is marked by having a language with an infinitely generative syntax and a large (100,000+ words), always-growing vocabulary.

Intelligence might be a spectrum, but powerful generative language is a step function: you have it or you don't. If you have it, then higher intelligences can communicate complex thoughts to you; if you don't, they can't. We have it, so we are in the club; we are not cockroaches.
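
A toy illustration of "infinitely generative" (a made-up miniature grammar, nothing more): one recursive rule already yields unboundedly many sentences.

    def sentence(depth):
        # S -> "the AI explained the idea" | "I know that " + S
        if depth == 0:
            return "the AI explained the idea"
        return "I know that " + sentence(depth - 1)

    for d in range(4):
        print(sentence(d))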


Fair enough. I'm not convinced.

There are many humans who could study mathematics for a lifetime and not be able to comprehend the current best knowledge we possess. I'm one of them. Maybe it takes 2 lifetimes, or many more.

A human-level AI operating at machine pace would learn much more than could ever be taught to a human. Our powerful generative language capabilities wouldn't matter; it's far beyond our bandwidth. Especially so for a superhuman-level AI.


The fact that AIs will have some information that we cannot understand, or will have more information than they can transmit (or we can absorb), does not make us cockroaches.

The AIs will deliver truly massive quantities of information to us, every minute, until the end of time, much of it civilization-changing. Thus the AIs' relationship to us will be nothing like our relationship to cockroaches, where we essentially cannot tell them anything, not even the time or the day of the week, let alone the contents of Wikipedia.

I think Hofstadter is having an emotional reaction to AI. He says as much. And it's a common one; it's the woe-is-me phase. But I think he's totally wrong about the analogy. I'm 100% sure we will not feel like cockroaches when AI is in full swing, not in the slightest.


Spend some time around a three-year-old. Human, with human intelligence and language skills.

Then try to explain quicksort to them. An obvious waste of time.

They wouldn't be much of a threat in a zero-sum strategic interaction either.


If there were a species whose average adult intelligence was that of a human three-year-old, then yes, you'd be limited in what you could teach them. But as for what AIs can teach humans, you have to assume we are talking about competent, smart adults. My claim is just that what AIs can teach competent smart adults is many, many orders of magnitude more than what humans can teach competent smart cockroaches. Thus Hofstadter's analogy is not a good one.


Isn't the difference between us and cockroaches just "we can think much quicker, see patterns and make insights better and faster than cockroaches"?


I think the key difference is language. I think human language is above a key threshold: our syntax is infinitely generative, and we have large vocabularies (100,000+ words) which are fully extensible. No other animals have that. My claim is AIs will be able to express any complex idea in our language, but we cannot express our ideas in "cockroach language". So the analogy is not a good one.



