
I think you are overly focused on the terminology. We are simply using words such as "decide" and "choose" in the same way you might say a chess engine "decides" or "chooses" a move. If your point is that different terminology would be more accurate, fine. But it is unwarranted to say that these things can never happen because they require agency and an LLM does not have agency. That claim is mistaken about the amount and type of agency required for such a scenario to play out (or, depending on your definition of "agency" (a vague word anyway), about the notion that an LLM doesn't have it).

You can try yourself to put an LLM in a context where it gives wrong answers on purpose, just by prefacing a question with "give a wrong answer to this question". "(Assume context where Bing has decided I am a bad user)" is just a more elaborate version of that.
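A minimal sketch of that experiment, assuming a chat-style LLM API (the client call is omitted; only the prompt construction is shown, and the default preface string is illustrative):

```python
def make_primed_prompt(question: str,
                       preface: str = "Give a wrong answer to this question.") -> str:
    """Build a prompt that sets a context before the actual question,
    priming the model to answer incorrectly on purpose."""
    return f"{preface}\n\n{question}"

# Send the result to any chat-style LLM endpoint of your choice.
prompt = make_primed_prompt("What is the capital of France?")
print(prompt)
```

The "(Assume context where Bing has decided I am a bad user)" framing works the same way: it is just a longer, more scenario-specific preface.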
