So go ahead and define it, in concrete terms, external to humans. It can't be equivalent unless there is a definite basis for equivalence. Cat videos don't cut it.
My point is that understanding, as we know it, only exists in the human mind.
If you want to define something that is functionally equivalent but implemented in a machine, that is absolutely fine, but don't point to something a machine does and say "look it's understanding!" without having a concrete model of what understanding is, and how that machine is, in concrete terms, achieving it.
Nope, sorry. You said you have no idea of what understanding is, except that by definition it can only be done by humans.
Fine. Then I posit the existence of understanding-2, which is exactly identical to understanding, whatever it is, except for the fact that it can only be done by machines. And now I ask you to prove to me that AI doesn't have understanding-2.
This is just to show you the absurdity of trying to claim that AI doesn't have understanding because by definition only humans have it.
> Nope, sorry. You said you have no idea of what understanding is, except that by definition it can only be done by humans.
He said understanding is what humans do, not that only humans can do it. Stop arguing against a strawman.
Nobody would define understanding as something only humans can do. But it makes sense to define understanding based on what humans do, since that is our main example of an intelligence. If you want to make another definition of understanding, then you need to show that it doesn't include a lot of behaviors that fail to solve problems human understanding can solve, because then it isn't really at the same level as human understanding.
> He said understanding is what humans do, not that only humans can do it. Stop arguing against a strawman.
Ok, so his argument is:
> Humans understand, models are deterministic functions
> Until you have a concrete definition of what understanding is you can't apply it to anything else
> Informal definitions of understanding by those who experience it aren't very useful at all.
Basically he says: "I don't accept you using the term 'understanding' until you provide a formal definition of it, which none of us has. I don't need such a definition when I talk about people, because... I assume that they understand".
Which means: given two agents, I decide that I can apply the word "understanding" only to the human one, for no other reason than that it is human, and simply refuse to apply it to non-humans, just because.
Clearly there is absolutely nothing that could convince this person that an AI understands, precisely because it's a machine. Put him in front of a computer terminal with a real person on the other side, but tell him it's a machine, and he would refuse to call whatever the human on the other side does "understanding". Which makes the entire discussion rather pointless, don't you think?
If we had that, then education would be solved. But we still struggle to educate people and to ensure fair testing that measures understanding rather than worthless things like effort or memorization.