
Your line of thinking also presupposes that Bard is self-aware about that type of thing. You could also ask it what programming language it's written in, but that doesn't mean it knows, or that it will answer you.



This is a common occurrence I'm seeing lately: people treating these things as oracles and going straight to ChatGPT/Bard instead of thinking or researching for themselves.


I consider it a standard test because no self-respecting PM would let the product ship without it being able to market itself correctly. There's a reason the seed prompt says, "You are Bard."

I don't lack awareness of the limitations of pretrained models. I'm evaluating its ability to employ chain of reasoning, in combination with its plugins, to get me an obvious answer.
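
A rough sketch of the resolution step I'd expect before the search plugin even fires: use the seed prompt's identity to rewrite the self-referential question into a searchable one. Everything below (the names, the regexes) is illustrative, not Bard's actual internals:

    import re

    SEED_PROMPT = "You are Bard."
    ASSISTANT_NAME = "Bard"  # identity parsed from the seed prompt, by hand here

    def resolve_self_reference(question: str) -> str:
        # Rewrite second-person references into the assistant's own name,
        # turning a question about "you" into a searchable query.
        q = re.sub(r"\bare you\b", f"is {ASSISTANT_NAME}", question,
                   flags=re.IGNORECASE)
        return re.sub(r"\byou\b", ASSISTANT_NAME, q, flags=re.IGNORECASE)

    print(resolve_self_reference("Are you running Gemini Pro?"))
    # -> "is Bard running Gemini Pro?"  (now answerable via search)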


I had the same issue as OP. Initially Bard seemed clueless about Gemini, then:

Me: I see. Google made an announcement today saying that Bard was now using a fine-tuned version of their "Gemini" model

Bard: That's correct! As of December 6, 2023, I am using a fine-tuned version of Google's Gemini model ...


So Bard found the blog post from Google and returned the information in it. No new information was gained.

The LLM itself does not KNOW anything.
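
The flow, as a toy sketch (the stubbed-out retrieve() stands in for Bard's search plugin; everything here is hypothetical):

    def retrieve(query: str) -> str:
        # Stand-in for the search plugin; in reality this hits the web.
        return ("Google blog, Dec 6 2023: Bard is now running a "
                "fine-tuned version of the Gemini model.")

    def build_prompt(question: str) -> str:
        # The model only "knows" what lands in its prompt: retrieved
        # context plus the user's question, none of it from its weights.
        context = retrieve(question)
        return f"Context: {context}\nQuestion: {question}\nAnswer:"

    print(build_prompt("Is Bard running Gemini?"))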


You're arguing against a point that wasn't being made. I expect an accurate answer using the tools it has available to it. I don't care what details are trained in and which parts are Internet-accessible as long as it gets to the right answer with a user-friendly UX.

The issue is that it failed to employ chain-of-reasoning. It knows who "it" is: its initial seed prompt tells it that it is Bard. Therefore, asking it, "Are you running Gemini Pro?" should be ~equivalent to "Is Bard running Gemini Pro?", but it interpreted one of them as so ambiguous it couldn't answer.

Whether it needed to search the Internet or not for the answer is irrelevant.


It has access to the Internet and is free to search for the right answer.

If I ask it who it is, it says it is Bard. It is aware of the launch that occurred today. It cites December 6th.

It just very incorrectly felt that I was asking an ambiguous question until I restated the same question. It's not great.


It forgets previous prompts and answers. I have to specifically ask it to refer back to them and take them into consideration.
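
That matches how these chat frontends generally work: the model itself is stateless, and each request just resends a (possibly truncated) transcript. A toy sketch, with an invented limit and word counts standing in for tokens:

    MAX_CONTEXT_WORDS = 2048  # invented limit; real systems count tokens

    def build_prompt(history: list[tuple[str, str]], new_message: str) -> str:
        # Resend prior turns with every request; drop the oldest turns
        # once the transcript no longer fits the context window.
        lines = [f"{role}: {text}" for role, text in history]
        lines.append(f"user: {new_message}")
        while sum(len(l.split()) for l in lines) > MAX_CONTEXT_WORDS \
                and len(lines) > 1:
            lines.pop(0)  # oldest turns are "forgotten" first
        return "\n".join(lines)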


Knowing its own build information is something that could be trained into the model, right? Seems like a good idea.
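
A handful of supervised fine-tuning pairs would do it. A sketch; the JSONL shape mirrors common fine-tuning APIs, and the wording of the examples is invented:

    import json

    # Hypothetical identity Q&A pairs to bake build info into the weights.
    identity_examples = [
        {"prompt": "What model are you running?",
         "completion": "I am Bard, running a fine-tuned version of Gemini Pro."},
        {"prompt": "Are you running Gemini Pro?",
         "completion": "Yes. As of December 6, 2023, Bard runs on Gemini Pro."},
    ]

    with open("identity_finetune.jsonl", "w") as f:
        for ex in identity_examples:
            f.write(json.dumps(ex) + "\n")

The catch is that this information goes stale with every release, which is presumably why vendors tend to put identity facts in the seed prompt instead of the weights.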



