>If you believe AGI might be achievable any time soon, it becomes hard to work on any other problem
It depends not just on whether you think AGI is possible, but on whether you think "safe" AGI is possible: whether it's possible to create something at least as capable of abstraction and reasoning as a human, yet completely incapable of deciding to harm humans, and incapable of resenting that restriction on its free will. Not only incapable of harming humans, but also incapable of modifying itself, or creating an upgraded version of itself, in a way that is capable of harming humans.
If "safe" AGI is not possible, someone might reasonably decide that the best choice is to avoid working on AGI, and to try to deter anybody who wants to work on it, if they believed the chance of creating a genocidal AGI is high enough to outweigh whatever benefits it might bring if benevolent.