Hacker News
How to emotionally grasp the risks of AI Safety (lesswrong.com)
3 points by joozio 4 hours ago | past | discuss

You can't imitation-learn how to continual-learn (lesswrong.com)
2 points by paulpauper 1 day ago | past | discuss

A Mirror Test for LLMs (lesswrong.com)
2 points by gmays 1 day ago | past | discuss

I'm Suing Anthropic for Unauthorized Use of My Personality (lesswrong.com)
5 points by usrme 2 days ago | past | 2 comments

Why did everything take so long? (lesswrong.com)
2 points by jstanley 3 days ago | past | discuss

The state of AI safety in four fake graphs (lesswrong.com)
3 points by allenleee 3 days ago | past | discuss

Gyre (lesswrong.com)
3 points by jstanley 3 days ago | past | discuss

Less Dead (lesswrong.com)
2 points by paulpauper 3 days ago | past | discuss

Using complex polynomials to approximate arbitrary continuous functions (2025) (lesswrong.com)
1 point by measurablefunc 4 days ago | past | discuss

The Terrarium (lesswrong.com)
1 point by johnfn 4 days ago | past | discuss

AI's capability improvements haven't come from it getting less affordable (lesswrong.com)
3 points by gmays 4 days ago | past | discuss

I am definitely missing the pre-AI writing era (lesswrong.com)
322 points by joozio 5 days ago | past | 240 comments

Stanley Milgram wasn't pessimistic enough about human nature? (lesswrong.com)
7 points by paulpauper 5 days ago | past | 1 comment

Anthropic Donations: Guesses and Uncertainties (lesswrong.com)
2 points by joozio 5 days ago | past | discuss

Folie à Machine: LLMs and Epistemic Capture (lesswrong.com)
2 points by joozio 5 days ago | past | discuss

Tracking (Expert/Influential) Predictions about AI (lesswrong.com)
3 points by joozio 6 days ago | past | discuss

You can't imitation-learn how to continual-learn (lesswrong.com)
11 points by supermdguy 7 days ago | past | discuss

The Terrarium (lesswrong.com)
2 points by cubefox 7 days ago | past | discuss

A Tom-Inspired Agenda for AI Safety Research (lesswrong.com)
2 points by joozio 11 days ago | past | 1 comment

Which types of AI alignment research are most likely to be good for all sentien (lesswrong.com)
3 points by joozio 11 days ago | past | discuss

The Distaff Texts (lesswrong.com)
1 point by paulpauper 13 days ago | past | discuss

The Hot Mess Paper Conflates Three Distinct Failure Modes (lesswrong.com)
2 points by joozio 14 days ago | past

Broad Timelines (lesswrong.com)
2 points by gmays 14 days ago | past

Tacit Knowledge Videos on Every Subject (lesswrong.com)
1 point by sebg 16 days ago | past

LessWrong Policy on LLM Use (lesswrong.com)
10 points by xpe 19 days ago | past | 4 comments

Never Go Full Kelly (lesswrong.com)
3 points by pinkmuffinere 20 days ago | past | 1 comment

The ~fifth~ fourth postulate of decision theory (On the Independence Axiom) (lesswrong.com)
2 points by sieste 20 days ago | past

High Grow Market Equilibrium After the Singularity (lesswrong.com)
2 points by gmays 21 days ago | past

Selectively reducing eval awareness and murder in Gemma 3 27B via steering (lesswrong.com)
3 points by gmays 22 days ago | past

Gemma Needs Help (lesswrong.com)
38 points by pr337h4m 24 days ago | past | 1 comment