
This paper lays out a series of pathways toward harm. Those are plausible in principle, but we still have frustratingly little evidence of the magnitude of such harms in the field.

To settle whether these harms can or will actually materialize, we would need causal attribution, which is genuinely hard, especially when all the involved actors are actively monitoring society and reacting to new research.

Personally, I think transparency measures and tools that help civil society (and researchers) better understand what is going on are the most promising avenue here.



There's plenty we can do before any attribution is made.

LLMs hallucinate. That's a weakness, and we can deliberately induce that behavior.

We don't do it because of peer pressure: anyone posting deliberate hallucination bait would sound insane.

It's like a depth charge: it forces them to surface as non-human.

I think it's doable, especially against bots that constantly monitor specific groups or people.
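
As a minimal sketch of the idea (everything here is made up for illustration: the "canary" source, the bait text, and the reply heuristic are hypothetical, not a tested detector):

    import re

    # Hypothetical canary: a source that does not exist, so no honest
    # human can recognize it, but an LLM may confidently elaborate on it.
    CANARY = "the 2019 Hofstadter-Lim survey on influence operations"

    BAIT = f"Curious what people think of {CANARY}. Does its taxonomy still hold up?"

    def looks_like_hallucination(reply: str) -> bool:
        """Flag replies that engage with the canary as if it were real.

        A human will likely say they don't know the source; an LLM-driven
        account monitoring the thread may summarize it with confidence.
        """
        engages = re.search(r"hofstadter|lim survey|taxonomy",
                            reply, re.IGNORECASE)
        hedges = re.search(r"not familiar|can't find|doesn't exist|never heard",
                           reply, re.IGNORECASE)
        return bool(engages) and not hedges

    if __name__ == "__main__":
        print(looks_like_hallucination(
            "The Hofstadter-Lim taxonomy separates three actor types."))  # True
        print(looks_like_hallucination(
            "I'm not familiar with that survey, do you have a link?"))    # False

The string matching is the trivial part; the hard part is designing canaries that humans won't play along with. And a single probe proves nothing, but repeated confident engagement with fabricated details across many probes starts to look like evidence.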

There are probably many other ways to draw out evidence without necessarily going all the way to attribution (which we should definitely still pursue!).



