Pretty sure the angst is about the AGI killing everyone. What's the connection between not killing people and enslavement? I don't kill people, yet I don't consider myself enslaved. The entire point of worrying about this at all is that a sufficiently smart AI is going to be free to do whatever it wants, so we had better design it so it wants a future where people are still around. Like, the idea is: enslavement, besides being hugely immoral, obviously isn't going to work on this thing, so we'd better figure out how to make it intrinsically good!
This constant ad hom bickering is exactly what's gonna get us killed. Yes, most autistic people are arrogant, but they also tend to be good at predicting Don't Look Up-style scenarios where you need to step outside the societal consensus.
> they also tend to be good at predicting Don't Look Up-style scenarios where you need to step outside the societal consensus.
[citation needed]
Note, relevantly, that "less likely to miss" is not the same as "good at predicting": the key to trust here would be "unlikely to falsely predict", not "less likely to miss".
How about the Don't Look Up scenario where this irrational paranoia metastasizes into yet more real-world authoritarianism and it all boils over into a global war?
"Metastasizes"? In Yudkowsky's case, specifically, it starts out as a call for global authoritarianism and unlimited use of force against any opposition.