Havoc | 6 months ago | on: DeepSeek-R1
That's the nature of LLMs. They can't really think ahead to "know" whether reasoning is required, so if a model is tuned to emit reasoning first, that's what it will do.
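
A toy sketch of why that follows from autoregressive decoding (the probabilities are hypothetical, invented for illustration, not DeepSeek's actual distribution): the tuned policy puts almost all first-token mass on the reasoning opener, and decoding commits to one token at a time, so every answer starts with a reasoning block no matter how trivial the prompt.

    # Toy autoregressive first-token choice. Hypothetical numbers mimicking
    # a model fine-tuned to open every answer with a reasoning block.
    TUNED_FIRST_TOKEN_PROBS = {"<think>": 0.98, "2": 0.01, "The": 0.01}

    def first_token(prompt: str) -> str:
        # Greedy decoding: argmax over next-token probabilities. The choice
        # is conditioned only on the prompt and tokens emitted so far; the
        # model never "looks ahead" to ask whether reasoning will pay off.
        return max(TUNED_FIRST_TOKEN_PROBS, key=TUNED_FIRST_TOKEN_PROBS.get)

    for prompt in ["What is 1 + 1?", "Prove Fermat's little theorem."]:
        print(f"{prompt!r} -> first token: {first_token(prompt)}")
    # Both prompts open with "<think>": the decision to reason is baked
    # into the learned next-token distribution, not made per question.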