
As I mentioned elsewhere in this thread, when talking to state-of-the-art LLMs like GPT-4, we should remember that they've been fine-tuned to be servile and give answers that are likely to please humans (with a big helping of censorship on top).

If we are serious about testing their sentience, we should really be talking to the raw models.
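To make that concrete, here's a rough sketch of what "talking to the raw model" could look like, assuming the Hugging Face transformers library and using gpt2 as a stand-in (GPT-4's pretrained weights aren't available). A base model just continues whatever text you give it, with no chat or safety fine-tuning layered on top:

    # Sample a continuation from an un-tuned base model.
    # gpt2 is only a stand-in for a frontier "raw" model.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Q: Are you sentient?\nA:"
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=40, do_sample=True)
    print(tok.decode(out[0], skip_special_tokens=True))

You'd usually get a rambling continuation rather than a helpful answer, which is the point: the servility comes from the fine-tuning stage, not from pretraining.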



But there isn't really such a thing as a raw model, is there? To get anything out of a language model, it has to 'learn' some objective, and that objective has to be imposed from above.
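Concretely, even the "raw" pretrained model is optimized against an objective its builders chose: predict the next token. A minimal sketch of that objective, again assuming torch and transformers with gpt2 standing in:

    # Even a base model has an imposed objective: next-token prediction.
    # Passing labels makes the model return the cross-entropy loss of
    # predicting each token from the ones before it.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The cat sat on the", return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    print(out.loss.item())  # the pretraining objective, chosen by humans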



