AI “systems” are granted some level of agency by their very nature: you cannot, for example, predict the outcomes of certain learning models.
We necessarily provide agency to AI because that’s the whole point! As we develop more advanced AI, it will have more agency. It is an extension of the just-world fallacy, IMO, to say that AI is “just a tool” - we lend it agency and allow the tool to train on real-world (flawed) data.
Hallucinations in an LLM are a great example of this. We want the machine to have the agency to cite its sources… but we also create the potential for absolute nonsense citations, which can be harmful in and of themselves, even though the human using the tool may have perfectly positive intent.
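To make the “nonsense citations” point concrete, here is a minimal sketch (Python) of how one might spot-check whether a model-cited DOI actually exists, using the public Crossref API. The `doi_exists` helper and the example DOI strings are purely illustrative assumptions of mine, not anything from the original discussion, and a real checker would need rate limiting, retries, and handling of DOIs outside Crossref.

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if the DOI is found in the Crossref index.

    Illustrative only: Crossref returns 404 for unknown DOIs,
    so a missing record is a hint (not proof) of a hallucinated citation.
    """
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Hypothetical DOI strings a model might emit in a cited answer.
cited = ["10.1000/example.real.looking", "10.9999/obviously.fabricated.2024"]
for doi in cited:
    status = "found" if doi_exists(doi) else "NOT FOUND (possible hallucination)"
    print(doi, "->", status)
```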