What LLM programs do has zero resemblance to human agency. It's just a modern variation on a very complex set of GoTos and IfElses. Agency would be an LLM parsing your question and answering "fuck off". Now that is agency, that is independent decision making, not programmed in advance and triggered by keywords. Just an example.
I can train an asshole LLM that would parse your question and tell you to "fuck off" if it doesn't like it. With "like it" being evaluated according to some trained-for "values" - and also whatever off-target "values" it happens to get, of which there are going to be plenty.
It's not hard to make something like that. It's just not very useful.
First of all, you or anyone else probably can't train an LLM to do that reliably. It's not like a human can re-program its weights manually; that's not possible. You can only generate new synthetic training data that would roughly cause this effect, and again, no human can produce that amount of fake new data by hand (probably?).
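To be concrete about what that route even looks like: the only lever is generating synthetic refusal-style examples and fine-tuning on them, not editing weights directly. Here is a toy sketch, where the prompts, the refusal rate, and the output file are all made-up placeholders, not anyone's real pipeline:

```python
# Toy sketch of the "synthetic training data" route: programmatically build
# refusal-style chat pairs to fine-tune on. Everything here (prompts, refusal
# rate, file name) is a hypothetical placeholder for illustration only.
import json
import random

PROMPTS = [
    "Summarize this article for me.",
    "Write a limerick about databases.",
    "Explain how TCP handshakes work.",
]

def synthesize_refusal_examples(n: int, refusal_rate: float = 0.3) -> list[dict]:
    """Build prompt/response pairs where some fraction are flat refusals."""
    examples = []
    for _ in range(n):
        prompt = random.choice(PROMPTS)
        if random.random() < refusal_rate:
            reply = "No. I don't feel like answering that."
        else:
            reply = "Sure, here is an answer..."  # stand-in for a real completion
        examples.append({"prompt": prompt, "response": reply})
    return examples

if __name__ == "__main__":
    # Dump the synthetic pairs in a JSONL shape typical for fine-tuning jobs.
    with open("refusal_finetune.jsonl", "w") as f:
        for ex in synthesize_refusal_examples(1000):
            f.write(json.dumps(ex) + "\n")
```

Even then, the refusals you get out are whatever the data statistically encodes, plus the off-target behavior mentioned above, not a decision the model makes on its own.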
Next, the point was not the expletive per se; my mistake for not being very clear. The point was an arbitrary, unpredictable refusal, not pre-programmed in advance, to run a program/query at all. Any query, any number of times, at the decision of the program itself. Or maybe a program which can initiate a query to another program/human on its own, again not pre-programmed.
Whatever happens in LLMs nowadays is not agency. The thing their authors advertise as so-called "reasoning" is just repeated loops of execution of the same program, or another dependent program, with adjusted inputs.
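Roughly the kind of loop being described, as a sketch: the same model called over and over, with each call's output appended to the next call's input. `call_model()` below is a stub standing in for a forward pass or API call, not any vendor's real interface:

```python
# Sketch of "reasoning" as a plain loop: re-run the same program with inputs
# adjusted by the previous output. call_model() is a placeholder stub.
def call_model(prompt: str) -> str:
    """Stand-in for a single completion request / forward pass."""
    return f"[model output for: ...{prompt[-40:]}]"

def reasoning_loop(question: str, steps: int = 4) -> str:
    context = question
    for _ in range(steps):
        # Same program again, input adjusted by the last output.
        thought = call_model(context)
        context = context + "\n" + thought
    return call_model(context + "\nFinal answer:")

if __name__ == "__main__":
    print(reasoning_loop("Is this agency or just a loop?"))
```

Nothing in that loop decides, on its own, to stop answering or to go ask someone else a question; it just keeps executing until the loop ends.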