Consider that AI, in any form, will to some extent be a reflection of ourselves. As AI becomes more powerful, it will magnify the best and worst of humanity.
So yes, when we consider the dangers of AI, what we really need to ask is: what is the worst we might do to ourselves?
I don't think it can be regulated, except to the extent that regulation ensures state governments and oligopolies retain total control.
AI's potential for harm goes far beyond nuclear weapons, insofar as its capacity for harm extends to everything we place under its control. Given the direction advocates are pushing toward, that includes all of society.
The difference is that its capacity for harm will come from the harm it learns from humans, from what humans deliberately inject into the system for nefarious purposes, or from the simple failure of humans to comprehend the potential failures of complex systems.