The US government probably doesn't think it's behind.
Right now it's operated by a bunch of people who think that you can directly relate the amount of money a venture could make in the next 90 days to its net benefit for society. Government telling them how they can and cannot make that money, in their minds, is government telling them that they cannot bring maximum benefit to society.
Now, is this mindset blind to most people's lived experience? Is it ethically bankrupt, held by people who'd sell their own mothers for a penny if they couldn't otherwise get that penny? Would a functioning organ of an even somewhat-sane society banish such people beyond human contact for the rest of their existence?
I'd go further and say the US government wants "an instrument more powerful than any nuclear weapon" to be built in its territory, by people it has jurisdiction over.
It might not be a direct US-government project like the Manhattan Project was, but it doesn't have to be. The government has the ties it needs with the heads of all these AI companies, and if it comes to it, it has the muscle and legal authority to seize control of the technology.
A good deal for everyone involved, really. These companies get to make bank and build technology that furthers their market dominance, and the US government gets potentially Manhattan Project-level pivotal technology: it's elites helping elites.
Unless China handicaps their progress as well (which they won't, see Made in China 2025), all you're doing is handing the future to DeepSeek et al.
What kind of a future is that? If China marches towards a dystopia, why should Europe dutifully follow?
We can selectively ban uses without banning the technology wholesale; e.g., nuclear power generation is permitted, while nuclear weapons are strictly controlled.
Do Zambians currently live in an American dystopia? I think they just do their own thing and don't care much what America thinks as long as they don't get invaded.
What I meant is: Europe can choose to regulate as they do, and end up living in a Chinese dystopia because the Chinese will drastically benefit from non-regulated AI, or they can create their own AI dystopia.
If you are suggesting that China may use AI to attack Europe, they can invest in defense without unleashing AI domestically. And I don't think China will become a utopia with unregulated AI. My impression after having visited it was not one of a utopia, and knowing how they use technology, I don't think AI will usher it in, because our visions of utopia are at odds. They may well enjoy what they have. But if things go sideways they may regret it too.
Not attack, just influence. Destabilize, if you want: advocate regime change, sabotage trust in institutions. Being on the defensive in a propaganda war doesn't really work.
With the US already having lost the ideological war with Russia and China, Europe is very much next.
> If you are suggesting that China may use AI to attack Europe
No - I'm suggesting that China will reap the benefits of AI much more than Europe will, and they will eclipse Europe economically. Their dominance will follow, and they'll be able to dictate terms to other countries (just as the US is doing, and has been doing).
> And I don't think China will become a utopia with unregulated AI.
Did you miss all the places I used the word "dystopia"?
> My impression after having visited it was not one of a utopia, and knowing how they use technology, I don't think AI will usher it in, because our visions of utopia are at odds. They may well enjoy what they have.
Comparing China when I was a kid, not that long ago, to what it is now: it is a dystopia, and that dystopia is responsible for much of the improvement they've made. Enjoying what they have doesn't mean it's not a dystopia. Most people don't understand how willing humans are to live in a dystopia if it improves their condition significantly (not having to worry much about food, shelter, etc.).
We don't know whether pushing towards AGI is marching towards a dystopia.
If it's winner takes all for the first company/nation to have AGI (presuming we can control it), then slowing down progress of any kind with regulation is a risk.
I don't think there's a good enough analogy to be made, like your nuclear power/weapons example.
The hypothetical benefits of an aligned AGI outweigh those of any other technology by orders of magnitude.
As with nuclear weapons, there is non-negligible probability of wiping out the human race. The companies developing AI have not solved the alignment problem, and OpenAI even dismantled what programs it had on it. They are not going to invest in it unless forced to.
We should not be racing ahead because China is, but investing energy in alignment research and international agreements.
This thought process is no different than it was with nuclear weapons.
The primary difference is observability: with satellites we had some confidence that other nations were respecting treaties, or at least that there was enough reaction time for mutual destruction, but with AI development we lack all of that.
The EU can say all it wants about banning AI applications with unacceptable risk. But ASML is still selling machines to TSMC, which makes the chips which the AI companies are using. The EU is very much profiting off of the AI boom. ASML makes significantly more money than OpenAI, even.
The US government is behind because the Biden admin was pushing strongly for controls and regulations and told Andreessen and friends exactly that; they then went and did everything in their power to elect Trump, who then put those same tech bros in charge of making his AI policy.