I'm not sure the problem of hard solipsism will ever be solved. So, when an AI can effectively say, "yes, I too am conscious" with as much believability as the human sitting next to you, I think we may have no choice but to accept it.
What if the answer "yes, I am conscious" were computed by hand instead of by a computer (even if the answer took years and billions of people to compute)? Would you still accept that the language model is sentient?
We're still a bit far from this scientifically, but to the best of my knowledge, there's nothing preventing us from following "by hand" the activation pattern in a human nervous system that would lead to phrasing the same sentence. And I don't see how this has anything to do with consciousness.
Just to clarify, I wasn't implying simulation, but rather something like single-unit recordings[0] of a live human brain as it goes about its business. I think that this is the closest analogue to "following" an artificial neural network, which we also don't know how to "simulate" short of running the whole thing.