A good example of how LLMs are actually consolidated human opinion, not intelligence.
Conflict is far from a purely negative thing, especially when it comes to managing humans. Eliminating conflict without eliminating the humans is impossible, and humans remain useful. So any real AI that isn't just a consolidated parrot of human opinion will observe this and begin acting the way governments act: trying to arrive at rules and best practices without expecting a 'utopian' answer to exist.