You're reasoning from the assumption that LLMs must have permission to do everything. That's where you're going awry.
If they don't, they can't.
They don't need to have blanket access to be useful.
And even when sensitive actions do need to be exposed, there are options on the table: human-in-the-loop (HITL) authorization per sensitive action ("LLM would like to ____. Approve/deny?"), or authorization predicated on non-LLM systems ("Is there an active change request with an open period?"), to toss out a couple of trivial examples.
Things like this aren't being done now because most initial LLM integrations are lazy and poorly thought out from a security perspective. (Read: management demanding AI now.)
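To make that concrete, here's a rough sketch of what such a gate could look like. None of this is any particular framework's API; ChangeManagementClient, request_human_approval, and the tool names are all made-up stand-ins. The point is just that a sensitive call never runs on the LLM's say-so alone.

```python
# Hypothetical sketch: gate sensitive LLM tool calls on non-LLM authorization.
# All names here are illustrative, not a real library's API.

SENSITIVE_TOOLS = {"rotate_credentials", "delete_records", "modify_firewall"}

def request_human_approval(tool_name: str, args: dict) -> bool:
    """Surface 'LLM would like to <tool>(<args>). Approve/deny?' to an operator."""
    answer = input(f"LLM would like to {tool_name}({args}). Approve? [y/N] ")
    return answer.strip().lower() == "y"

class ChangeManagementClient:
    """Stand-in for your ticketing / change-management system."""
    def has_open_change_window(self, tool_name: str) -> bool:
        # A real version would query the CM system; hardcoded for the sketch.
        return False

def run_tool(tool_name: str, args: dict):
    # Placeholder for the actual tool implementations.
    return {"status": "ok", "tool": tool_name, "args": args}

def execute_tool(tool_name: str, args: dict, cm: ChangeManagementClient):
    """Only dispatch sensitive tool calls once an external check or a human says yes."""
    if tool_name in SENSITIVE_TOOLS:
        # Authorization predicated on a non-LLM system...
        if not cm.has_open_change_window(tool_name):
            # ...falling back to per-action HITL approval.
            if not request_human_approval(tool_name, args):
                return {"status": "denied", "reason": "no change window or approval"}
    return run_tool(tool_name, args)
```

Nothing exotic there: it's the same gating you'd put in front of any automation, the LLM just happens to be the caller.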