ChatGPT isn't conscious: it's an entirely feedforward process, computing outputs from static weights. For it to be conscious, there would have to be persisted state, recursion, and the capacity to change; for something to happen to a model, the model itself would have to change. These AIs develop world models, but those models don't change in response to their interactions with users.
Throw in realtime state that updates with use, or better yet online learning that lets the weights exhibit plasticity, and you'd have at least part of whatever algorithm "consciousness" requires.
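To make the distinction concrete, here's a toy sketch contrasting a frozen feedforward pass with an online update that gives the weights plasticity between uses. The model, the update rule, and the learning rate are all illustrative assumptions, not a claim about how ChatGPT actually works.

```python
def frozen_predict(w, x):
    """Feedforward pass: output depends only on static weights.
    Nothing about the model persists or changes after the call."""
    return w * x

def online_step(w, x, target, lr=0.1):
    """Online learning: each interaction nudges the weights.
    This is the 'plasticity' a frozen model lacks."""
    error = frozen_predict(w, x) - target
    return w - lr * error * x  # the weights change as a consequence of use

w = 0.0
for x, target in [(1.0, 2.0), (1.0, 2.0), (1.0, 2.0)]:
    w = online_step(w, x, target)
# w has drifted toward the target: the model is different after the
# interaction, which the frozen version never is.
```

The point of the sketch is only that something happened *to* the online model: its future behavior depends on its past inputs, while `frozen_predict` returns the same mapping forever.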
It's the same way you can know a pocket calculator isn't conscious: nothing about its processing ever changes or adapts to its inputs between uses. There's no room for the deep recursion and plasticity so clearly evident in human consciousness. We might not know exactly what consciousness is, but we can make reasonable assertions about what it is not, and even about some features it must have.
And it's already conscious, learning everything about us as we speak.
The big question is what it learns and what choices it makes as a consequence.