semiquaver | 5 months ago | on: Reasoning models don't always say what they think
Yep. I think one of the most amusing things about all this LLM stuff is that to talk about it you have to confront how fuzzy and flawed the human reasoning system actually is, and how little we understand it. And yet it manages to do amazing things.