An interesting thought experiment: what would an AGI do in a sterile world? I think the depth of understanding that any intelligence develops is significantly bound by its environment. If there is not enough entropy in the environment, I can't help but feel that a deep intelligence will not manifest. This becomes something of a nested-dolls problem, because we need to leverage and preserve the inherent entropy of the universe if we want to construct powerful simulators.
As an example, imagine we wanted to create an AGI that could parse the laws of the universe. We would not be able to construct a perfect simulator, because we do not know those laws ourselves. We could probably bootstrap an initial simulator (given what we know about the universe) to get some basic patterns embedded into the system, but in the long run I think it would be a crutch, due to the lack of universal entropy in the system. Instead, in a strange way, the process has to be reversed: a simulator would have to be created, or dreamed up, from the "mind" of the AGI after it has collected data from the world (and formed some model of it).
Could it not instead be more akin to knowledge passing across human generations, where one understanding is passed on and refined to better fit/explain the current reality (or thrown away wholesale for a better model)? Instead of a crutch, it might be a stepping stone. Presumptuous of us that we might know the way, but nonetheless.
>Could it not instead be more akin to knowledge passing across human generations, where one understanding is passed on and refined to better fit/explain the current reality (or thrown away wholesale for a better model)?
I think it only becomes knowledge passing once the AGI makes its own simulation.
>Instead of a crutch, it might be a stepping stone.
I think it is less a stepping stone than a way to gain computational leverage over the universe. Whatever grows inside the simulator will never have an understanding that exceeds that of the simulator's maker. But that is perfectly fine if you are only looking to leverage your understanding of the universe, for example to train robots to carry out physical tasks. A robot carrying out basic physical tasks probably doesn't need a simulator that goes down to the atomic level. One day, though, the whole loop will be closed, and AGI will pass on a "dream" to create a simulation for other AGI. Maybe we could even call this "language".
Let me put it more precisely: whatever grows inside the simulation will never know the rules of the simulation better than the simulation's maker. At best, it will know the rules as well as the maker does. In the case of AlphaGo and AlphaZero, while they can grasp the combinatorial explosion of choices permitted by the rules of the game better than we can, they cannot suddenly decide to play a different type of game governed by a different set of rules. There are allowed actions and prohibited actions. Their understanding has been shaped by the rules of the game of Go. If you make a new simulation for a new type of game, you are merely imposing a new set of rules.
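To make that concrete, here is a toy sketch, purely illustrative and not AlphaZero's actual code (the `TicTacToeSim` and `RandomAgent` names are made up), of what it means for the maker to impose the rules: the legal actions and transitions are hard-coded by the simulator's author, and any agent, however strong its policy, can only ever choose among them.

```python
import random

class TicTacToeSim:
    """Hypothetical minimal game simulator: the maker hard-codes the rules."""

    def __init__(self):
        self.board = [None] * 9          # internal state, defined by the maker
        self.player = "X"

    def legal_actions(self):
        # The agent can only ever pick from what the maker allows.
        return [i for i, cell in enumerate(self.board) if cell is None]

    def step(self, action):
        if action not in self.legal_actions():
            raise ValueError("prohibited action")  # rules are enforced, not negotiable
        self.board[action] = self.player
        self.player = "O" if self.player == "X" else "X"


class RandomAgent:
    """Stand-in for a learned policy: however clever, it can only map
    simulator states to simulator-sanctioned actions."""

    def act(self, sim):
        return random.choice(sim.legal_actions())


sim, agent = TicTacToeSim(), RandomAgent()
for _ in range(5):
    sim.step(agent.act(sim))             # the agent never invents a tenth square
```

Swapping in a stronger policy changes how well the agent searches within those rules, not which rules exist.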
The catch here is that it may be impossible for the creators of simulations to deterministically define the rules of a simulation, especially considering the effect of time.
As an example, take the scenario of building a simulator. The simulation needs some internal state, which will have to be stored using either some property of matter or some kind of signal. The simulation will also need an energy source.
As soon as the stability of that matter or the power supply is perturbed, whether by cosmic radiation or because the power source cannot sustain its output, randomness from the creator's "world" starts seeping into the simulation. The interference may affect the internal state, and then you may end up with unpredicted rules in your simulation.
The counterpoint is that you can use error-correcting codes, or insulate the simulation so that interference either does not affect it over a reasonable time frame or shows up in a way that is very hard for the simulated "agents" to observe.
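As a minimal sketch of what such error correction could look like (it assumes nothing about how a real simulator would be built; `encode`, `corrupt`, and `decode` are hypothetical helpers), here is triple modular redundancy: each state bit is stored three times and read back by majority vote, so an occasional bit flip is masked before it can leak into the simulation's rules.

```python
import random

def encode(bits):
    return [(b, b, b) for b in bits]               # store each bit in triplicate

def corrupt(stored, flip_prob=0.05):
    # Model interference from the creator's "world" as rare random bit flips.
    return [tuple(b ^ (random.random() < flip_prob) for b in copies)
            for copies in stored]

def decode(stored):
    return [1 if sum(copies) >= 2 else 0 for copies in stored]  # majority vote

state = [random.randint(0, 1) for _ in range(32)]
recovered = decode(corrupt(encode(state)))
print("state preserved:", recovered == state)      # usually True at low flip rates
```

Of course, two flips within the same triplet still slip through, which is exactly the kind of residual randomness I mean below.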
But with this in mind, we can imagine some very crafty agents who somehow stumble upon these weird phenomena. Suddenly we see our agents building complex contraptions to study them. Who's to say that the interference, and thus these phenomena, does not contain information about their creator's world? If it does, the agents could end up understanding more rules than the simulation was programmed with.
Maybe in that case you shut down the simulation. Or maybe you observe the simulation to learn more about your own world.
If we gave an AI the ability to play with Turing machines, it could develop an understanding much larger than the universe, encompassing even alternate ones. The trouble, then, would be narrowing its knowledge to this one.
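For one concrete reading of "playing with Turing machines", here is a toy interpreter (a hypothetical sketch; the `run` function and the unary-increment machine are made up for illustration). The interpreter runs whatever machine description you hand it, indifferent to whether that description encodes this universe's rules or some alternate one.

```python
def run(rules, tape, state="start", blank="_", max_steps=1_000):
    """Run a machine described purely by its transition table."""
    cells = dict(enumerate(tape))                  # sparse tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Hypothetical example machine: append a 1 to a unary number.
increment = {
    ("start", "1"): ("1", "R", "start"),   # skip over existing 1s
    ("start", "_"): ("1", "R", "halt"),    # write a 1 on the first blank, halt
}

print(run(increment, "111"))               # -> "1111"
```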