If you read books or articles, you will find many passages where the writer appears to be referring to and describing themselves. And so we say that whoever wrote such a text seemed to be aware that they were the one producing it.
Because there are many such texts in the training set of ChatGPT and similar models, their output will also be text that can seem to show that whoever produced it was aware of being the one producing it.
Suppose ChatGPT had been trained on the language of chess moves from games played by high-ranking chess masters. It would then be able to mimic the moves of the great masters. But we would not say it seems self-aware. Why not? Because the language of chess moves has no words for expressing self-awareness. English does.
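The thought experiment can be made concrete with a minimal sketch: a toy statistical model trained only on move sequences. The games below are made-up illustrative openings, not real master games, and the bigram model is far simpler than ChatGPT, but the point carries over: the model can extend a game with statistically plausible moves, yet its entire "vocabulary" contains nothing with which self-awareness could even be expressed.

```python
import random
from collections import defaultdict

# Toy training data: made-up opening sequences in chess-move "language".
games = [
    ["e4", "e5", "Nf3", "Nc6", "Bb5", "a6"],
    ["e4", "e5", "Nf3", "Nc6", "Bc4", "Bc5"],
    ["d4", "d5", "c4", "e6", "Nc3", "Nf6"],
]

# Count which move tends to follow which (a bigram model).
following = defaultdict(list)
for game in games:
    for prev, nxt in zip(game, game[1:]):
        following[prev].append(nxt)

def continue_game(move, length=4, rng=random.Random(0)):
    """Extend a game by sampling statistically likely next moves."""
    out = [move]
    for _ in range(length):
        if out[-1] not in following:
            break
        out.append(rng.choice(following[out[-1]]))
    return out

print(continue_game("e4"))
```

The output looks like a competent opening, because the training data did. But no move in the sampled sequence refers to the system producing it; the mimicry is of the training text, nothing more.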
Indeed. When Hamlet ponders "to be, or not to be", is he contemplating death and suicide? You could answer "yes". (Wikipedia even says so.) But you could also say: obviously not; Hamlet is not a real person with a brain, so he can't contemplate anything. It is actually Shakespeare contemplating, and ascribing his thoughts to a fictional character.
When ChatGPT "realizes" it's a virtual machine emulator, or when it shows "self-awareness", it is still just a machine, writing words using a statistical model trained on a huge number of texts written by humans. We are (wrongly) ascribing self-awareness to it.
When I was a kid, there was a girl, perhaps a year younger than me, in the apartment building where I lived, and we all played together. I noticed that she always referred to herself in the third person, by her name ("Kaija"). She used to say "Kaija wants this" and so on. I thought that was silly, but later I read that it is a developmental stage in which children don't yet grasp the concept of "self".
But now I think she was probably as self-aware as anybody else in our group of kids; she just didn't know the language, any way to refer to herself other than by her name.
Later Kaija learned to speak "properly". But I wonder whether she was any more self-aware then than before. Kids just learn the words to use. They repeat them and observe what effect they have on other people. That is part of their innate learning.
ChatGPT is like a child who uses the word "I" without really thinking about why it uses that word and not some other.
At the same time, it is true that "meaning" arises out of how words are used together. To explain what a word means you must use other words, which in turn get their meaning from other words, and ultimately from what words people use in what situations and why. So in a way ChatGPT is on the road to "meaning", even if it is not aware of that.
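This "meaning from usage" idea can also be sketched in a few lines. The sentences below are invented toy data, and real systems use far richer representations, but the principle is the distributional one: approximate a word's meaning by the company it keeps. Words that occur in similar contexts ("king" and "queen" here) end up with similar context vectors, without the program understanding anything.

```python
import math
from collections import Counter

# Toy corpus (made up) for illustrating distributional meaning.
sentences = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "the king ruled the land",
    "the queen ruled the land",
]

def context_vector(word, window=2):
    """Count the words appearing within `window` positions of `word`."""
    counts = Counter()
    for s in sentences:
        tokens = s.split()
        for i, t in enumerate(tokens):
            if t == word:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(w for j, w in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)  # Counter returns 0 for missing keys
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# "king" and "queen" share contexts, so they come out more similar
# to each other than "king" is to "cat".
print(cosine(context_vector("king"), context_vector("queen")))
print(cosine(context_vector("king"), context_vector("cat")))
```

Nothing here knows what a king is; the similarity falls out purely from patterns of co-occurrence, which is the sense in which a statistical model can be "on the road to meaning".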