This is a sidestep imo. People _can_ be held accountable, though they will not always be. Machines add a layer of complexity: if money is lost or a life is lost because AI made the call, who bears the burden? Machines _can't_ be held accountable.
I hear you, but I think it becomes less of a sidestep when "they will not always be" in practice means "they basically never are".
And I'm not sure that most interactions even _can_ be held to account. When someone, say, hallucinates the intended meaning of something ambiguously written, and I point out that their assumed meaning was a hallucination because what was written had no single unambiguous reading, we've all just said words. There's no, like, penalty for anyone.
And people do say things ambiguously, and other people do hallucinate the supposed meaning, literally all the time. If there were any meaningful accountability for hallucinations, they wouldn't happen nearly as often as they do.