> You don't touch the machine directly, you get the numbers the simulation spits out.
So you're restricting the use of this machine to a situation designed to fool human beings?
The criterion for general intelligence is actual intelligence, not putting it in ideal conditions and seeing if people are fooled.
> Why would the simulation fail this test?
It seems like you think this machine is going to simulate the entire universe, evolve its model of the universe and therefore perfectly predict its next state -- and on the basis of this prediction provide an answer.
Sure, perhaps I will concede: if you are able to simulate the universe in infinite detail you might be able to perfectly predict its next state.
This isn't anyone's ambition, however, and it has nothing to do with AI. If the precondition of AI is "a perfect simulation of everything", then that's close enough -- for me -- to call it impossible.
Even in this sense, however, we do not "simulate gold". What it means to "simulate gold" is to take some small number of its aspects, model them with equations, and run those equations.
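(For concreteness, here is roughly what a "simulation of gold" amounts to in practice -- a minimal sketch picking a single idealised aspect, thermal expansion, with the standard handbook coefficient; the function and the choice of aspect are mine, for illustration, not anyone's real model:)

```python
# "Simulating gold" in the usual sense: pick one aspect (linear thermal
# expansion), idealise it into one equation, and ignore everything else
# that gold actually is.

ALPHA_GOLD = 14.2e-6  # linear thermal expansion coefficient of gold, 1/K

def expanded_length(length_m: float, delta_t_k: float) -> float:
    """Idealised model: L' = L * (1 + alpha * dT).
    No lattice defects, no chemistry, no mass spectrum."""
    return length_m * (1 + ALPHA_GOLD * delta_t_k)

# A 1 m gold bar heated by 100 K comes out at ~1.00142 m.
print(expanded_length(1.0, 100.0))
```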
A video game that allowed you to perform mass spectrometry on any possible compound -- along with everything else you could possibly do to everything -- would, I think, be a video game that requires a whole other universe to exist.
And so, it seems your argument is that "when scientists can model the universe in infinite detail so as to perfectly predict its next state, we will have AI!" (and, as far as the quantum states of brains go, it's close enough to infinity to model all of that).
OK, sure. I don't know how scientists are going to build a universe simulator without a "theory of everything", nor how, even with such a theory, a machine could predict the next state of a large system in sub-infinite time. Merely processing some particle collisions at the LHC takes months.
I cannot see how a machine is going to actually track the evolving entangled state of an audience of human beings.
"Infinitely precise information about the universe" I think actually requires you to actually be the universe. That's maybe a speculation however, but I would be surprised if the universe could be described in less volume than it occupies. And if an infinity of precision (ie., perfect parity in every simulated result) is actually possible without the target system.
A machine can only simulate what is known. The actual behaviour of the universe is much larger than what is known. As soon as we discover something new, we have a test that proves the machine is a machine.
> The criterion for general intelligence is actual intelligence, not putting it in ideal conditions and seeing if people are fooled.
Having to do it across a wire is "ideal conditions"?
The initial comment was about "artificial general intelligence". Every single one of those problems can be done across a wire.
Every word that has ever been spoken, every gesture that has ever been made, you can do across a wire.
It's good enough to solve any practical problem in the world. It just won't be "real".
> It seems like you think this machine is going to simulate the entire universe, evolve its model of the universe and therefore perfectly predict its next state -- and on the basis of this prediction provide an answer.
No, it's going to simulate a tiny cubicle with a person inside.
I guess you could call it a simulated universe, but the universe is only two cubic meters.
> I cannot see how a machine is going to actually track the evolving entangled state of an audience of human beings.
Put cameras in the seats of the theater. One copy of the feed goes to a real person, the other goes to the machine. Both can track the evolving state of the audience just fine. Neither one should be expected to perfectly simulate the rest of the audience.
> I would be surprised if the universe could be described in less volume than it occupies.
Don't worry, I don't expect the machine to be smaller than two cubic meters!
> A machine can only simulate what is known. The actual behaviour of the universe is much larger than what is known. As soon as we discover something new, we have a test that proves the machine is a machine.
That's fair. So version 1.0 will have slightly-wrong physics. Do you think that will necessarily make the simulation go awry? Do you think we'll never know enough about physics to simulate a small box with a person in it?
> Do you think we'll never know enough about physics to simulate a small box with a person in it?
In the sense you mean "simulation" -- i.e., describing some system in all required detail -- we can barely simulate a few atoms, let alone a room with a human being in it.
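(A back-of-envelope calculation makes the scale vivid. The figures are rough, commonly cited orders of magnitude -- ~7e27 atoms in a human body, ~1e23 bytes of storage manufactured worldwide -- and one byte per atom is an absurdly generous understatement of what a full quantum state would need:)

```python
# Rough scale check: storing ONE byte of state per atom of a single
# human body, versus all the storage humanity has ever built.
atoms_in_body = 7e27          # commonly cited order-of-magnitude estimate
bytes_per_atom = 1            # absurdly generous; a real quantum state needs far more
global_storage_bytes = 1e23   # ~100 zettabytes, rough worldwide total

shortfall = atoms_in_body * bytes_per_atom / global_storage_bytes
print(f"Short by a factor of ~{shortfall:.0e}")  # ~7e+04
```

And that is storage alone, for one body, before a single time-step of dynamics is computed.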
I'm not sure this is even a question of knowing the physics. The problem is that even an atom has an infinite density of "descriptive information"; i.e., in order to describe it in toto, we would be calculating forever.
This is not what anyone in AI is even trying to do, by the way. This isn't machine learning. This isn't AI.
I'm not convinced simulation at this depth will ever be achieved; I cannot imagine it could ever be performant. Every single causal interaction taking place over a second is an entire universe in itself. To have this one second described in simulation is a vast undertaking, let alone a conversation.
Maybe I would agree that this system would be "good enough": if it could predict an appropriate response by simulating a target human being to this depth -- all the way down to how dopamine binds to receptors in the frontal lobe, etc. -- then sure, I could see that it would be close enough.
However, this isn't what anyone means when they say something is "simulated". They mean that a single aspect alone is idealised into a single equation, treated under perfect conditions with no other factor being relevant, and then a calculation involving that equation is run.
People in AI are not even considering animal consciousness as a relevant thing to simulate (even though that is what intelligence is). They think it is just a matter of some idealized formal structure.
If they realised that it would require an electronic system calculating every descriptive quantity of every particle of some animal, computational general-AI research projects would be binned for the next millennium at least.
In the case of AI, no one is trying to "simulate a human being" in the sense you describe. They are trying to find an extremely simplified, highly idealized equation to describe thinking.
They are trying to model intelligence as if the salient features of animal consciousness were not biological but equational. "Good Bye" follows "Hello" because <insert program here>...
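(Concretely, the "insert program" gestured at above is something like the following -- a deliberately crude sketch of the idealized, equational picture, not anyone's actual system:)

```python
# The idealized "equational" picture of conversation: a bare lookup rule,
# with no biology and no lived sociolinguistic history behind it.
RESPONSES = {"Hello": "Good Bye"}

def reply(utterance: str) -> str:
    # Anything outside the rule table exposes the trick immediately.
    return RESPONSES.get(utterance, "...")

print(reply("Hello"))                     # "Good Bye" -- looks right
print(reply("How's your mother doing?"))  # "..." -- trivially exposed
```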
No, "Good Bye" follows "Hello" because people who speak english have lived a life of speaking it in which experiences have been acquired in response to the world ie., their brains have developed under sociolinguistic conditions: with light and sound bouncing off their bodies and the bodies of those around them such that their neurological structure as evolved to causally associate "hello" with akind of social circumstane and "goodbye" with likewise.
There is nothing apart from this connected social-neurological system that constitutes why "goodbye" follows "hello". That is how it comes to be. Any rule or system which appeals to an equation that isn't modelling this entire process to its full depth is merely "accidentally correlated" with English -- and will be trivially easy to expose.