A big problem with discourse on AI is people talking past each other because they're not being clear enough on their definitions.
An AI doomer isn't talking about any current system, but about hypothetical future ones that can plan and run autonomous feedback loops. These are best thought of as agents rather than tools.
But how does this agent interact with the outside world? It's just a piece of silicon buzzing with electricity until it outputs a message that some OTHER system reads and interprets.
Maybe that's a set of servos and robotic legs, or maybe it's a Bloomberg terminal and a bank account. You'll notice that all of these things are already regulated if they have enough power to cause damage. So in the end the GP is completely right: someone has to hook up the servos to the first LLM-based terminator.
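To make the point concrete, here's a minimal sketch of the coupling being described, assuming a text-only model. Every name in it (query_model, Servos, dispatch, the MOVE_ARM command) is hypothetical, not any real API. The model only ever emits text; a separate dispatcher, the part that already falls under existing regulation, decides whether that text touches an actuator at all.

    # Hypothetical sketch: the model emits text, and only the
    # dispatcher below can turn that text into physical action.

    def query_model(prompt: str) -> str:
        """Stand-in for whatever produces the model's output."""
        return "MOVE_ARM 30"

    class Servos:
        """Stand-in for the actuator, i.e. the already-regulated part."""
        def move_arm(self, degrees: int) -> None:
            print(f"moving arm {degrees} degrees")

    def dispatch(output: str, servos: Servos) -> None:
        # The model's output is inert until this interpreter acts on it.
        command, _, arg = output.partition(" ")
        if command == "MOVE_ARM":
            servos.move_arm(int(arg))
        # Anything unrecognized stays plain text; nothing happens.

    dispatch(query_model("pick up the block"), Servos())

The design choice is the whole argument: the dangerous capability lives in dispatch and Servos, not in query_model, and those are the pieces you can point a regulator at.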
This whole thing is a huge non-issue. We already (strive to) regulate everything that can cause harm directly, and that regulation reaches these fanciful autonomous AI agents as well. If someone bent on destroying the world had enough resources to build an AI basilisk or whatever, they could have spent a tenth of the effort and just built a thermonuclear bomb.
How does Hitler or Putin or Musk take control? How does a project director build a dam?
Via people: sending them messages and convincing them to do things. That can be done with facts and logic, with rhetoric and emotional appeals, with orders that appear to come from entities of importance, or with transfers of goods and services (money).