Your line of thinking also presupposes that Bard is self-aware about that sort of thing. You could also ask it what programming language it's written in, but that doesn't mean it knows and/or will answer you.
This is a common occurrence I'm seeing lately: people treating these things as oracles and going straight to ChatGPT/Bard instead of thinking or researching for themselves.
I consider it a standard test because no self-respecting PM would allow the product to ship without being able to market itself correctly. There's a reason the seed prompt says, "You are Bard."
I don't lack awareness of the limitations of pretrained models. I'm evaluating its ability to employ chain-of-reasoning, in combination with its plugins, to get me an obvious answer.
You're arguing against a point that wasn't being made. I expect an accurate answer using the tools it has available to it. I don't care what details are trained in and which parts are Internet-accessible as long as it gets to the right answer with a user-friendly UX.
The issue is that it failed to employ chain-of-reasoning. It knows who "it" is - its initial seed prompt tells it that it is Bard. Therefore, asking it, "Are you running Gemini Pro?" should be roughly equivalent to "Is Bard running Gemini Pro?" - but it interpreted one of those as so ambiguous that it couldn't answer.
Whether it needed to search the Internet or not for the answer is irrelevant.