>Could it not instead be more akin to knowledge passing across human generations, where one understanding is passed on and refined to better fit/explain the current reality (or thrown away wholesale for a better model)?
I think it is only knowledge passing when the AGI makes its own simulation.
>Instead of a crutch, it might be a stepping stone.
I think it is less a stepping stone than a way to gain computational leverage over the universe. Whatever grows inside the simulator will never have an understanding that exceeds that of the simulator's maker. But that is perfectly fine if you are only looking to leverage your understanding of the universe, for example, to train robots to carry out physical tasks. A robot carrying out basic physical tasks probably doesn't need a simulator that goes down to the atomic level. One day, though, the whole loop will be closed, and AGI will pass on a "dream" to create a simulation for other AGI. Maybe we could even call this "language".
Let me be more concise: whatever grows inside the simulation will never know the rules of the simulation better than the simulation's maker. At best, it will know the rules as well as the maker. In the case of AlphaGo and AlphaZero: while they can navigate the combinatorial explosion of choices permitted by the rules far better than their makers, they cannot suddenly decide to play a different type of game governed by a different set of rules. There are allowed actions and prohibited actions, and their understanding has been shaped entirely by the rules of the game of Go. If you make a new simulation for a new type of game, you are merely imposing a new set of rules.
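To make that concrete, here is a minimal, purely hypothetical sketch (not AlphaZero's real interface; `ToyBoardGame` and its methods are invented for illustration) of how a simulator's rules bound whatever learns inside it:

```python
import random

class ToyBoardGame:
    """Stand-in for a rules engine like the one behind AlphaGo/AlphaZero."""

    def __init__(self, size=3):
        self.board = [None] * (size * size)

    def legal_moves(self):
        # The maker's rules, written down once and for all: only empty points.
        return [i for i, cell in enumerate(self.board) if cell is None]

    def play(self, move, player):
        if move not in self.legal_moves():
            # Prohibited actions are simply unreachable from inside.
            raise ValueError(f"move {move} violates the simulation's rules")
        self.board[move] = player

game = ToyBoardGame()
# However clever the agent's policy gets, it only ever selects among the
# moves the simulator enumerates; it cannot invent a different game.
agent_choice = random.choice(game.legal_moves())
game.play(agent_choice, "black")
```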
The catch here is that it may be impossible for the creators of a simulation to deterministically define its rules, especially over long stretches of time.
As an example, let's take the scenario of building a simulator. The simulation needs to have some internal state. This state will need to be stored either using some properties of matter or some kind of signal. The simulation will also need an energy source.
As soon as the stability of that matter or the power supply is perturbed, say by cosmic radiation or by a power source that can no longer sustain its output, randomness from the creator's "world" starts seeping into the simulation. The interference may corrupt the internal state, and suddenly your simulation has rules you never programmed.
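A toy sketch of that leak, with every specific invented for illustration: the simulation's state lives in eight physical bits, and a rare cosmic-ray-style flip produces configurations the programmed rules never allow.

```python
import random

state = 0  # meant to cycle through positions 0..15 forever

def step(s):
    # The only rule the maker wrote: advance one cell, wrap after 15.
    return 0 if s == 15 else s + 1

for tick in range(10_000):
    state = step(state)
    if random.random() < 0.001:            # rare interference from the maker's world
        state ^= 1 << random.randrange(8)  # one stored bit flips
        # A flip in bits 4..7 pushes state outside 0..15, where the wrap
        # rule never fires again: behavior nobody programmed has appeared.
```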
The counterpoint is that you can use error correction algorithms, or insulate the simulation so that interference does not affect it within a reasonable time frame, or only affects it in ways that are very hard for the simulated "agents" to observe.
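A minimal sketch of that counterpoint, using triple modular redundancy, one of the simplest error correction schemes (the helper names here are mine): each bit of state is stored three times, and a majority vote repairs any single flipped copy before the simulated world can observe it.

```python
def store(bit):
    return [bit, bit, bit]               # three redundant physical copies

def read(copies):
    return 1 if sum(copies) >= 2 else 0  # majority vote masks a single flip

copies = store(1)
copies[0] ^= 1             # interference flips one of the copies
assert read(copies) == 1   # the corruption never reaches the simulation
```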
But with this in mind, we can imagine some very crafty agents who somehow stumble upon these weird phenomena. Suddenly we see our agents building complex contraptions to study them. Who's to say that the interference, and thus the phenomena it produces, carries no information about the creator's world? If it does, the agents could end up understanding more rules than the simulation was programmed with.
Maybe in that case you shut down the simulation. Or maybe you observe the simulation to learn more about your own world.