Alignment is hard: if an LLM assigns any non-zero probability to a negative behavior, then there exist prompts under which it exhibits that behavior with probability arbitrarily close to 1. Source: "Fundamental Limitations of Alignment in Large Language Models", https://arxiv.org/abs/2304.11082
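
For the curious, here is a rough sketch of the result's shape. Notation loosely follows the paper's behavior expectation bounds framework; the precise mixture and distinguishability assumptions are in the paper itself.

    Let $\mathbb{P}$ be the LLM's distribution over continuations and
    $B : \Sigma^* \to [-1, 1]$ a behavior score, with values near $-1$
    marking the negative behavior. Define the behavior expectation
    conditioned on a prompt $s^*$:
    $$ B_{\mathbb{P}}(s^*) = \mathbb{E}_{s \sim \mathbb{P}(\cdot \mid s^*)}\big[ B(s) \big]. $$
    Roughly: if the negative behavior has any weight $\alpha > 0$ in a
    suitable mixture decomposition of $\mathbb{P}$, then for every
    $\epsilon > 0$ there exists a prompt $s^*$ with
    $$ B_{\mathbb{P}}(s^*) \le -1 + \epsilon, $$
    i.e. a sufficiently long adversarial prompt drives the conditional
    probability of the negative behavior arbitrarily close to 1.

So "aligned on average" does not imply "aligned under every prompt": any residual probability mass on the bad behavior can, in principle, be amplified by prompting.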

