Hacker News

Editing mistakes that AI wouldn't make are the new "proof of human input".


I've been messing around with base (not instruction-tuned) LLMs; they often evade AI detectors, and I wouldn't be surprised if they evade this kind of detection too, at least at a high temperature.
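For context on the temperature point: temperature just rescales the logits before the softmax, so a high temperature flattens the token distribution and sampled text drifts away from the model's most probable (and most detector-typical) continuations. A minimal sketch of the mechanism, with made-up logits rather than a real model:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature, then apply a numerically
    stable softmax. Higher temperature -> flatter distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max before exp for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate tokens.
logits = [2.0, 1.0, 0.1]

low = softmax_with_temperature(logits, 0.5)   # sharper: top token dominates
high = softmax_with_temperature(logits, 2.0)  # flatter: more diverse samples
```

With these numbers, the top token's probability is roughly 0.86 at temperature 0.5 but only about 0.50 at temperature 2.0, which is why high-temperature samples look less stereotyped.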


> with a high temperature

More like: with the right prompting



