
Yes, isn’t the whole “alignment” fear basically that if we had smarter-than-human AGI, we would need smarter-than-human prompt engineering?



Alignment refers to the process of aligning AI with human values. I don't see why a superhuman AI would require different prompting than is in use today.


The idea is that keeping a superhuman AI aligned would require superhuman prompting. That is the whole premise of OpenAI's Superalignment research and their recent publication.



