Hacker News
antonvs | 3 months ago | on: How large are large language models?
> LLM's kind of do their own thing, and the data you get back out of them is correct, incorrect, or dangerously incorrect (i.e. is plausible enough to be taken as correct), with no algorithmic way to discern which is which.
Exactly like information from humans, then?