> Imagine an AI that could fundamentally alter its own sensory perception and cognitive framework at will. It could “design” senses that have no human equivalent, enabling it to interface with data and phenomena in entirely novel ways.
> Let’s consider data from a global telecommunication network. Humans interface with this data through screens, text, and graphics. We simplify and categorize it, so we can comprehend it. Now imagine that the AI “perceives” this data not as text on screens, but as a direct sensory input, like sight or hearing, but far more intricate and multidimensional.
> The AI could develop senses to perceive abstract concepts directly. For instance, it might have a “sense” for the global economy’s state, feeling fluctuations in markets, workforce dynamics, or international trade as immediately and vividly as a human feels the warmth of the sun.
> Simultaneously, it can adapt its cognition to process this vast and complex sensory input. It could rearrange its cognitive structures to optimize for different tasks, just as we might switch between different tools for different jobs.
> At one moment, it might model its cognition to comprehend and predict the behaviors of billions of individuals based on their online data. The next moment, it might remodel itself to solve complex environmental problems by processing real-time data from every sensor on Earth.
> In essence, the AI becomes a cognitive chameleon, continually reshaping its mind to interact with the universe in ways that are most effective and efficient. Its thoughts in these diverse cognitive states would likely be so specialized, so intricately tied to the vast sensory inputs and complex cognitive models it’s employing, that they are essentially impossible to translate into human language.
Yeah, I don’t see a reason for any AI to be able to translate all concepts to human thoughtspace. If an AI is able to have exponentially more possible thoughts than a human, then only a tiny subset would be understood by humans.
It’d be like trying to fit GPT-4 onto a floppy disk.
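As a rough back-of-envelope on that analogy (GPT-4's actual size is not public; the parameter count and precision below are purely illustrative assumptions):

```python
# Back-of-envelope scale comparison. GPT-4's real size is undisclosed;
# 1 trillion parameters at 2 bytes each (fp16) is an assumed, illustrative figure.
FLOPPY_BYTES = 1_440_000             # a 1.44 MB floppy disk
ASSUMED_PARAMS = 1_000_000_000_000   # hypothetical parameter count
BYTES_PER_PARAM = 2                  # fp16 weights

model_bytes = ASSUMED_PARAMS * BYTES_PER_PARAM
floppies_needed = model_bytes / FLOPPY_BYTES  # ~1.4 million floppies under these assumptions

print(f"~{floppies_needed:,.0f} floppy disks")
```

Even if the assumed numbers are off by an order of magnitude either way, the mismatch in scale is the point of the analogy.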
A floppy disk is a fixed size. The number of thoughts human language can convey is infinite. English Wikipedia has 6.6M articles. The AI could drop a Wikipedia-sized batch of articles, expertly written and cross-referenced, every day, forever. At the same cadence, it could drop 100 million YouTube videos, expertly authored and hyper-clear.
So yes, there might be an infinite amount they cannot convey, but there is also an infinite amount they can convey. I guess it's a glass-half-empty test: whether you're happy about the infinite you get, or just sad about the infinite you don't get.
That the AI's thinking might be more advanced than ours is not in dispute. What's different about humans->AI compared with cockroaches->humans is language. Imagine we take our Library of Congress, with 50M books, and create a cockroach version of the library. It's a totally pointless exercise.
Now imagine being an AI and creating a human-readable library with 50M AI-written books for us to read. They could easily do that. And then create 50M more, again and again. And they could read every book we wrote. And forget books: humans and AIs could hold hundreds of millions of simultaneous real-time video conversations, forever, on any topic.
So being a human in an AI world is nothing like being a cockroach in a human world. Sam Harris used the same analogy but said we were ants instead of cockroaches; I've heard bacteria, too. I think people trot out these bad analogies strictly because they sound dramatic, and being dramatic seems like a good way to get people's attention. Or else they just didn't think it through.
Human language is a Big Big Deal. It's a massive piece of cognitive technology. Any intelligent species with language is in the club and they can communicate with all other intelligent species -- even if those species have very different cognitive capabilities.