Unpopular opinion:
I don't believe that is the case. AI has always been in the realm of math and computer science, and the researchers who were fired were researching only the ethics of it. A lot of researchers are apolitical and only there for the challenge or the pay.
If anything, I think what happened deters investment in AI ethics research. For Google, it has only produced bad PR, without adding any value to their influence or market standing in the AI community.
Edit: Also, Google's strategy of giving away a production-level AI library like TensorFlow has basically locked in their status in the AI community for years to come.
The article mentions the impact on the broader research community too. Samy Bengio is extremely well known, and not for ethics: he's one of the original authors of Torch.
They won't have trouble recruiting low and mid level AI engineers, simply because the pool is large. But the pool of highly experienced engineers and leaders in the space is much smaller, and even a moderate reduction in interest there could have huge long-term implications for the company--especially since those high-level contributors have greater financial freedom to follow their principles.
You live in la-la land if you think they will have a problem hiring AI leaders. If you pay them, they will come. Even the couple of researchers who got fired realized they'd made a mistake. Bengio left, but he'd already been there for 15 years and is a multi-millionaire; he probably would've left sooner or later anyway.
I think you misunderstand how much leading AI researchers will be learning from complexity science and networks, and how much they'll understand [informational] diversity itself as critical to all networks (including their own community of practice).
My assumption is that there is a dovetail between understanding the role of randomness and diversity in neural networks and understanding its role in the social computation of society. Those who don't see that relationship and don't put it into practice in their own value systems will not be "at the top", because they are partially blinded.
Just my hot take. Might be wrong. But I think there's a bias that favours these value systems among top-tier researchers, unlike in traditional hard computer engineering fields.