You told it that Sydney is an LLM chat and that it’s giving inappropriate responses. It parroted that back to you with some elaboration, and has apparently made you believe it has knowledge beyond what you told it. That’s exactly how a cold reading works.
They seem to deal pretty well with confusing sentences containing typos, bad grammar, or semi-nonsense. A “large language model cat made by microsoft” doesn’t mean anything, but “large language model chat…” does, especially since Microsoft already tried this with Tay, which will turn up in its training data. Maybe they have retrained it recently (I guess you could tell by asking it in a completely new chat whether Microsoft has a chatbot and what it’s called?), but I still think it could absolutely make a correct guess/association here from what you gave it. I’m actually really impressed by how these models infer meaning from non-literal sentences, like one exchange with Bing where the user only said “that tripped the filters, try again” and Bing understood that to mean it should replace the swear words.