redman25
3 months ago | on: Claude 4
They can if they've been post-trained on what they know and don't know. The LLM can first be given questions to probe its knowledge; whenever it returns a wrong answer, that question can be added as a new training example with an "I don't know" response.
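A minimal sketch of that dataset-construction step, assuming a hypothetical ask() inference helper and a naive exact-match grader (a real pipeline would use a proper answer checker):

    # Build fine-tuning examples that teach the model to abstain
    # on questions it currently gets wrong. `ask` and the exact-match
    # comparison are stand-ins, not any specific lab's pipeline.
    def build_idk_dataset(ask, qa_pairs):
        examples = []
        for question, reference in qa_pairs:
            answer = ask(question)  # hypothetical inference call
            if answer.strip().lower() == reference.strip().lower():
                # Correct: reinforce the known answer.
                examples.append({"prompt": question, "completion": reference})
            else:
                # Wrong: train an "I don't know" response instead.
                examples.append({"prompt": question, "completion": "I don't know."})
        return examples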
dingnuts
3 months ago
Oh, that's a great idea, just do that for every question the LLM doesn't know the answer to!
That's... how many questions? Maybe if one model generates all possible questions, then...