If you're harping on 'stochastic parrot' ideas you're just behind the times. Even the most ardent skeptics like Yann LeCun or Gary Marcus don't believe that nonsense anymore.
No, I'm just saying that a claim of qualia would require some sort of evidence or a methodical argument.
And that LLM outputs professing feelings or other state-of-mind-like things should by default be assumed to be explained by the training process having (perhaps inadvertently) optimized for such output. Only if that explanation fails, and another explanation is materially better, should the claim be taken seriously.
Do we have such candidates today?