AI model rules will be about as successful as any other prohibition: outlaws will act with de facto impunity, while good people who commit sins of omission get made into arbitrary examples. I'm sure there's a name for the dynamic where rules of any kind are mainly enforced against the people who generally abide by them, while simultaneously handing a huge arbitrage advantage to those who ignore them or are simply outlaws.
There is another problem that doesn't have any good solutions yet and that will be a huge part of AI governance: software attestation (direct anonymous attestation). The basic problem is, how does a program assert that it is an authentic instance of itself? We've been trying to solve it in security for apps, authenticators, and DRM for decades, and the solutions all seem fine until it's worth it to someone to break it. I think it's probably just a poorly formed problem statement that defines itself in impossible terms, but when they can't govern AI models, they're going to try to govern which AIs can access what data and systems, and we're back to solving the same old cryptographic problems we've been working on for decades.
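To make the circularity concrete, here's a minimal sketch (Python, all names hypothetical, and deliberately the naive scheme rather than real DAA) of what these designs boil down to: the program answers a server challenge by keying a MAC over its own bytes with an embedded secret.

    import hashlib
    import hmac
    import os
    import sys

    # The embedded secret is the whole problem: it ships inside the very
    # artifact it is supposed to authenticate.
    EMBEDDED_KEY = b"hypothetical-key-baked-into-the-binary"

    def attest(nonce: bytes) -> bytes:
        # "Prove" authenticity: MAC our own bytes plus a fresh server nonce.
        # This shows we *possess* the authentic code, not that we are
        # *running* it unmodified.
        with open(sys.argv[0], "rb") as f:
            own_bytes = f.read()
        return hmac.new(EMBEDDED_KEY, own_bytes + nonce, hashlib.sha256).digest()

    def verify(known_good_bytes: bytes, nonce: bytes, response: bytes) -> bool:
        expected = hmac.new(EMBEDDED_KEY, known_good_bytes + nonce,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    if __name__ == "__main__":
        nonce = os.urandom(16)
        with open(sys.argv[0], "rb") as f:
            golden = f.read()
        print(verify(golden, nonce, attest(nonce)))  # True

Anyone who extracts EMBEDDED_KEY, or simply keeps a pristine copy of the file around while running a patched one, passes verification. Hardware roots of trust (TPMs, enclaves) push the secret somewhere harder to reach, but the same economics apply: it holds until breaking it is worth more than it costs.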
Why have any prohibitions on anything, then? They will only help outlaws and criminals, no? Outlawing slavery, for example, only works against good people who commit sins of omission?
It comes down to life. Murder takes a life. Slavery takes someone's life too. Theft/fraud takes life in the sense of the time and effort spent toward the stolen material. We already have these prohibitions. We don't need new ones for every imaginable method of murder, slavery, or theft.
The difficulty here is that you have to attempt to predict the path of least harm, which is often only discovered in hindsight. There will never be 100% compliance with anything. The question is which forward path has the best social impact at the least social cost.
I think it really depends on what AI we are talking about and also what specifically is prohibited.
For me, it's not clear it couldn't be just an aggravating factor, for example, or that only certain "machinery" using AI would carry prohibitions. Maybe even no specific prohibitions in the end.
Right now, there's no artificial limitation on AI that isn't totally bypassable with trivial effort.
If you want an AI that writes erotica, writes new Hitler speeches, or so on... it's here. Easy. Done.
Maybe in the future we will see prohibitions around using AI for, say, mortgage underwriting. I do think that is enforceable: underwriting has a compliance culture, auditing, etc. Not that no one will violate it, or even fly under the radar for a while, but it is generally "enforceable".
I'd offer this quote, one long sentence, as Mr. Francis's tl;dr of his own term:
"What we have in this country today, then, is both anarchy (the failure of the state to enforce the laws) and, at the same time, tyranny—the enforcement of laws by the state for oppressive purposes; the criminalization of the law-abiding and innocent through exorbitant taxation, bureaucratic regulation, the invasion of privacy, and the engineering of social institutions, such as the family and local schools; the imposition of thought control through “sensitivity training” and multiculturalist curricula, “hate crime” laws, gun-control laws that punish or disarm otherwise law-abiding citizens but have no impact on violent criminals who get guns illegally, and a vast labyrinth of other measures. In a word, anarcho-tyranny."
I've always thought anarcho-tyranny was a dumb neologism made up by people who hadn't read "The Origins of Totalitarianism," or anything else historically grounded by Arendt, even her "Ideology and Terror" essay. The thing its proponents are still sounding out already has a canonical playbook.