I think discussions of AI safety at this stage -- when we're already having problems with what passes for AI these days, problems we're not handling well at all -- are a bit silly. But I don't have anything particularly intelligent to say on the matter, and neither, it seems, does anyone else, except maybe for this article, which argues that the AGI paranoia (as opposed to the real threats from "AI" we're already facing, like YouTube's recommendation engine) may be the product of a point of view peculiar to Silicon Valley culture: https://www.buzzfeednews.com/article/tedchiang/the-real-dang...
I agree with you in a way: if AGI ends up being 300 years out, then work on safety now is likely not that important, since whatever technology is developed in that time will probably end up being critical to solving the problem.
My main issue, personally, is that I'm not confident whether it's really that far out or not, and people on both sides seem bad at predicting this. Given that, it probably makes sense to start the work now, since goal alignment is a hard problem and it's unknown when it'll become relevant.
I read the BuzzFeed article, and I think its main issue is that the author assumes an AGI will be goal-aligned simply by virtue of being an AGI:
"In psychology, the term “insight” is used to describe a recognition of one’s own condition, such as when a person with mental illness is aware of their illness. More broadly, it describes the ability to recognize patterns in one’s own behavior. It’s an example of metacognition, or thinking about one’s own thinking, and it’s something most humans are capable of but animals are not. And I believe the best test of whether an AI is really engaging in human-level cognition would be for it to demonstrate insight of this kind."
Humans have general preferences and goals built in that have been selected for over thousands of years. An AGI won't have those by default. People often assume that anything intelligent will be like human intelligence, but the entire point of the strawberry example is that an intelligence with different goals that's very good at general problem solving will not have 'insight' that tells it what humans think is good (that's the whole reason for trying to solve the goal alignment problem -- you don't get it for free).
He kind of argues for the importance of AGI goal alignment (which he calls 'insight') without seeming to realize he's doing so.
The comparison to Silicon Valley being blinded by the economics of its own behavior is just weak politics that misses the point.
We don't know that "goal alignment" (to use the techno-cult name) is a hard problem; we don't know that it's an important problem; we don't even know what the problem is. We don't know that intelligence is "general problem solving." In fact, we can be pretty sure it isn't, because humans aren't very good at solving general problems, just at solving human problems.