Well, I think you're going beyond the parameters of the discussion... LLMs synthesize text from their training data, and that is all they do. They are not reasoning agents and they don't have opinions about anything. All we can say is that they reflect the biases inherent in that training data; to claim anything more would be dishonest at best. It's only because most people have no idea how these things work that we get all this magical thinking.