> Or they do exist for a reason but the rules are still absurd and net harmful
Ok.
…but if you have a law and you’re opposed to it on the basis that “China will do it anyway”, you admit that’s stupid?
Shouldn’t you be asking: does the law do a useful thing? Does it make the world better? Is it compatible with our moral values?
Organ harvesting.
Stem cell research.
Human cloning.
AI.
Slavery.
How can anyone stand there and go “well China will do it so we may as well?”
In an abstract sense this is a fundamentally invalid logical argument.
Truth on the basis of arbitrary assertion.
It. Is. False.
Now, certainly there is a degree of nuance with regard to AI specifically; but the assertion that we will be “left behind” and “outcompeted by China” is not relevant to the discussion of laws regarding AI and AI development.
What we do is not governed by what China may or may not do.
If you want to win the “AI race” to AGI, then investment and effort are required, not an arbitrary “anything goes” policy.
China as a nation is sponsoring the development of its technology and supporting its industry.
If you want to beat that, opposing responsible AI won’t do it.
Of course you have to consider what other countries will do when you create your laws. The notion that you can ignore the rest of the world is both naive and incredibly arrogant.
There are plenty of technologies that absolutely do not "make the world better" but unfortunately must get built because humans are shitty to each other. Weapons are the obvious one, but not the only one. Often countries pass laws to encourage certain technologies or industries so as not to get outcompeted or outproduced by other countries.
The argument here about AI is exactly this sort of argument. If other countries build vastly superior AI by having fewer developmental restrictions, then your country may be at both a military disadvantage and an economic disadvantage, because you can easily be outproduced by countries using vastly more efficient technology.
You must balance all the harms and benefits when making laws, including issues external to the country.
I don't think the government is talking about AI for weapons. Of course that will be allowed. It's the US; we have the right to kill people. Just not to make fake porn videos of them.
> ...but if you have a law and you’re opposed to it on the basis that “China will do it anyway”, you admit that’s stupid?
That depends on what "it" is. If "it" is slavery, and the US banning slavery (even though China doesn't) results in half as much slavery in the world as there would be otherwise, then yes, opposing the ban because "China will do it anyway" would be stupid.
But if "it" is research, the same worldwide demand for the results is still there; you're only limiting where the work can be done. If banning it in the US just means twice as much gets done in China, you're not significantly reducing the scope of the problem. You're just making sure that any benefits of the research are controlled by the country that can still do it.
> Now, certainly there is a degree of naunce with regard to AI specifically; but the assertion that we will be “left behind” and “out competed by China” are not relevant to the discussion on laws regarding AI and AI development.
Of course it is. You could very easily pass laws that de facto prohibit AI research in the US, or limit it to large bureaucracies that in turn become stagnant for lack of domestic competitive pressure.
This doesn't even have anything to do with the stated purpose of the law. You could pass a law requiring government code audits which cost a million dollars, and justify them based on any stated rationale -- you're auditing to prevent X bad thing, for any value of X. Meanwhile the major effect of the law is to exclude anybody who can't absorb a million dollar expense. Which is a bad thing even if X is a real problem, because that is not the only possible solution, and even if it was, it could still be that the cure is worse than the disease.
Regulators are easily and commonly captured, so regulations tend to be drafted in that way and to have that effect, regardless of their purported rationale. Some issues are so serious that you have no choice but to eat the inefficiency and try to minimize it -- you can't have companies dumping industrial waste in the river.
But when even the problem itself is a poorly defined matter of debatable severity and the proposed solutions are convoluted malarkey of indiscernible effectiveness, that is a sure sign that what is actually being proposed is something shady.
A strong heuristic here is that if you're proposing a regulation that would restrict what kind of code an individual could publish under a free software license, you're the baddies.
> Of course it is. You could very easily pass laws that de facto prohibit AI research in the US, or limit it to large bureaucracies that in turn become stagnant for lack of domestic competitive pressure.
…
> A strong heuristic here is that if you're proposing a regulation that would restrict what kind of code an individual could publish under a free software license, you're the baddies.
Sure.
…but those things will change the way development / progress happens regardless of what China does.
“We have to do this because China will do it!” is a harmful trope.
You don’t have to do anything.
If you want to do something, then do it, if it makes sense.
…but I flat out reject the original contention that China is a blanket excuse for any fucking thing.
Take some darn responsibility for your own actions.
> What we do is not governed by what China may or may not do.
Yes it is... Where the hell would you get the impression we don't change how we govern and invest based on what China does, is doing, or might be doing? Do you really think nations don't adjust their behavior and laws based on other countries' real or perceived actions? I can't imagine you're that ignorant.
> If you want want to beat that, opposing responsible AI won’t do it.
I could be wrong; maybe what China does with its AI developments will significantly and drastically alter the current status quo for AI startups.
Maybe the laws around AI will drastically impact the ability of startups to compete with foreign competitors.
…but I can’t see that being likely.
It seems to me that restricting chip technology has a much much more significant impact, along with a raft of other measures which are already in place.
All I can see, when I look closely at arguments from people saying this kind of stuff, is people who want to make deep fakes, steal art, and generate porn bots crying about it, saying it's not fair that other people (e.g. Japan, where this has been ruled legal, or China, for who knows what reason, mostly ignorance) are allowed to do it.
I’m not sympathetic.
I don’t believe that makes any difference to the progress on AGI.
I don’t care if China outcompetes other countries on porn bots (I don’t think they will; they have a very strict set of rules around this stuff… but I’ll be generous and include Japan, which probably will).
You want the US to get AGI first?
Well, explain specifically how you imagine open source models (shared with the world) and open code sharing help, versus everything being locked away in a Google/Meta sandbox?
Are you sure you’re arguing for the right side here? Shouldn’t you be arguing that the models should be secret so China can’t get them?
Or are you just randomly waving your arms in the air about China without having read the original article?
What are you even arguing for? Laws are bad… but sharing with China is also bad… but having rules about what you do is bad… but China will do it anyway… but fear mongering and locking models away in big corporations behind apis is bad… but China… or something…
> It seems to me that restricting chip technology has a much much more significant impact, along with a raft of other measures which are already in place.
Restricting chip technology is useless and the people proposing it are foolish. Computer chips are a generic technology and AI workloads benefit from parallelism. The only difference between fewer faster chips and more slower chips is how much power they use, so the only thing you get from restricting access to chips is more climate change.
> All I can see when I look closely at arguments from people saying this kind of stuff is people who want to make deep fakes, steal art and generate porn bots crying about it, and saying it not fair other people (eg. Japan, where this has been ruled legal, China for who knows what reason, mostly ignorance) are allowed to do it.
The problem is not that people won't be able to make porn bots. They will make porn bots regardless, I assure you. The problem is that the people who want to control everything want to control everything.
You can't have a model with boobs in it because that's naughty, so we need a censorship apparatus to prevent that. And it should also prevent racism, somehow, even though nobody actually agrees how to accomplish that. And it can't emit foreign propaganda, defined as whatever politicians don't like. And now that it has been centralized into a handful of megacorps, they can influence how it operates to their own ends and no one else can make one that works against them.
Now that you've nerfed the thing, it's worse at honest work. It designs uncomfortable apparel because it doesn't understand what boobs are. You ask it how something would be perceived by someone in a particular culture and it refuses to answer, or lies to you because of what the answer would be. You try to get it to build a competing technology to the company that operates the thing and all it will do is tell you to use theirs. You ask it a question about the implications of some policy and its answer is required to comply with specific politics.
> Well, explain specifically how you imagine open source (shared with the world) models, and open code sharing vs. everything being locked away in a Google/Meta sandbox helps?
To improve an open model you can be anyone anywhere; to improve a closed one you have to work for a specific company that only employs <1% of the people who might have something to contribute. And to improve the open one, you don't need the permission of someone with a conflict of interest.
> Are you sure you’re arguing for the right side here? Shouldn’t you be arguing that the models should be secret so China can’t get them?
China is a major country. It will get them. The only question is if you will get them, in addition to China and Microsoft. And to realize the importance of this, all you have to ask is if all of your interests are perfectly aligned with those of China and Microsoft.
False equivalency at its finest. This is more akin to banning factories, with people rightly saying our rivals will use their factories to outproduce us. This is also a much better analogy because we did in fact give China a lot of our factories and are paying a big price for it.