> "Safety issues" are to him more like, "the AI will kill everyone to fill its objectives"
Yes, and he was way ahead of the curve here, since similar positions have become much more mainstream in recent years and months. E.g. two of the three AI Turing Award winners (Hinton and Bengio) now say that superintelligent AI poses a serious extinction risk. OpenAI has also identified the alignment problem as a major issue. Even former AI skeptics like Douglas Hofstadter now broadly agree with this assessment. Yudkowsky's opinion was quite prescient.
I know how to find the statements by Hinton and Bengio that superintelligent AI poses a serious extinction risk, but I can't find any statements by them, or by anyone else taking the risk seriously, suggesting that air strikes would not be warranted, so any clues on where to look would be appreciated.
So in other words, people who've spent their entire lives thinking and talking about AI praise other people who also have spent their entire lives thinking and talking about AI.
This is a bubble conversation that makes no sense to people outside of that bubble, and for good reason; it doesn't matter outside of that bubble.
> > Primarily he focuses not on developing a Strong AI (AGI), but rather focusing on safety issues that such a technology would pose.
> That's absurd at worst, science fiction at best, akin to worrying about manned flight safety in the 1500's.
It sure seems more warranted now.