> Saying we shouldn't "tap the brakes" on AI out of safety concerns
I didn't say that we shouldn't tap the brakes, nor is that the only strategy. Others include, in rough order of viability: global economic sanctions on hostile actors attempting to develop AGI; espionage or sabotage of other AGI efforts (see the Stuxnet attack on Iran's centrifuges); developing technologies and policies meant to diminish the impact of a hostile actor having AGI; and military force or invasion of hostile actors to prevent the development of AGI.
I'm sure you can think of others. Regardless, there are far more options than just "more AI research" and "less AI research".