The only way I can see to ground this illusion, especially with respect to morality/ethics and values, is by observing that all humans share a common brain architecture, by virtue of evolution and reproduction. We carry some shared intuitions in our firmware, and those would serve well as a grounding for, e.g., morality.
This, of course, only delays the problem until the point when we meet other minds with a different architecture - be it aliens, sentient AIs, or whatever. But that's largely the point the AI risk movement has been making: values and intelligence are orthogonal, so there's no reason to assume that an intelligent non-human mind will automatically share our values and moral intuitions.