This post barely makes any attempt to actually argue for its premise, that model weight providers should "not police uses, no matter how awful they are."
All I see is a lot of references to "vigilante justice." This metaphor is poor because real vigilante justice is punitive.
It's also like saying no one should act on any behavior unless it is illegal. I, for one, regularly act on ethical issues in my life that are not strictly illegal, and among those actions is choosing not to associate with people who do not act on ethical issues. This is how most people operate: we maintain social order and decency primarily not through criminal law and regulation, but because people apply ethical rules throughout their lives. Imagine interacting with someone who, when critiqued, simply replies "yeah, but it's not illegal."
The only serious argument I see is:
"once infrastructure providers start applying their own judgements, pressure will mount to censor more and more things"
Avoiding pressure is just... cowardly? This is advocacy for "don't bother telling me about what this work is being used for because I won't give a shit, but it's noble because my complete apathy is on purpose."
Lastly, while I generally don't like slippery-slope arguments, there is also a slippery-slope counter-argument here. With no restrictions, firms will not release their models for general use at all and will only provide full products whose impact they find acceptable. This was Google's approach until OpenAI decided to let other people actually use their model and Google had to stop sitting on what they had. Model restrictions give providers an opportunity to be open with their work while still maintaining some of the ethical standards those providers voluntarily and willingly hold themselves to.
I'm also noticing more and more articles hitting the front page, or getting close, with literally zero well reasoned arguments. Basic logic errors, self-contradictions, lack of evidence, etc., are becoming all too common.
AI model rules will be as successful as any other prohibition, where outlaws act with de facto impunity while good people who commit sins of omission are made arbitrary examples of. I'm sure there's a name for the dynamic where policing rules of any kind are mainly enforced against people who generally abide by them, while simultaneously giving a huge arbitrage advantage to people who ignore them or are just outlaws.
There is another problem that doesn't have any good solutions yet and that will be a huge part of AI governance: software attestation (direct anonymous attestation). The basic problem is how a program asserts that it is an authentic instance of itself. We've been trying to solve it in security for apps, authenticators, and DRM for decades, and the solutions all seem fine until it's worth it to someone to break it. I think it's probably just a poorly formed problem statement that defines itself in impossible terms, but when they can't govern AI models, they're going to try to govern which AIs can access what data and systems, and we're back to solving the same old cryptographic problems we've been working on for decades.
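To make the "authentic instance of itself" problem concrete, here's a minimal sketch (my own toy example, not any real scheme; the names and the embedded key are made up) of the naive challenge-response pattern that apps, authenticators, and DRM keep reinventing, and where it breaks:

    # Naive software attestation sketch: the verifier sends a nonce; the program
    # answers with HMAC(secret, nonce || hash of its own code). Illustrative only --
    # real schemes (TPM quotes, DAA) lean on hardware roots of trust instead.
    import hashlib
    import hmac
    import os

    EMBEDDED_SECRET = b"baked-into-the-binary"  # the weak link: extractable from any shipped copy

    def measure_self(code_path: str) -> bytes:
        """Hash the program's own code as its 'identity measurement'."""
        with open(code_path, "rb") as f:
            return hashlib.sha256(f.read()).digest()

    def attest(nonce: bytes, code_path: str) -> bytes:
        """Claim 'I am an authentic instance of myself' by MACing the nonce plus my measurement."""
        return hmac.new(EMBEDDED_SECRET, nonce + measure_self(code_path), hashlib.sha256).digest()

    def verify(nonce: bytes, expected_measurement: bytes, response: bytes) -> bool:
        """Verifier side: recompute the MAC over the measurement it expects."""
        expected = hmac.new(EMBEDDED_SECRET, nonce + expected_measurement, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    # Round trip: attest this very script to a verifier that trusts its hash.
    nonce = os.urandom(16)
    golden = measure_self(__file__)
    assert verify(nonce, golden, attest(nonce, __file__))
    # The scheme only holds until extracting EMBEDDED_SECRET is worth someone's
    # while; after that, any program can impersonate the "authentic" one.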
Why have any prohibitions on anything then? They would only help outlaws and criminals, no? Was outlawing slavery, for example, only working against good people who commit sins of omission?
It comes down to life. Murder takes a life. Slavery takes someone's life too. Theft/fraud is taking life in the sense of time/effort spent toward the stolen material. We already have these prohibitions. We don't need new ones for every imaginable method of murder, slavery or theft.
The difficulty here is you have to attempt to predict the path of least harm, which many times is only discovered in hindsight. There will never be 100% compliance in anything. Which forward path has the best social impact with the least social costs is the question.
I think it really depends on what AI we are talking about and also what specifically is prohibited.
I'm not sure it couldn't be only an aggravating factor, for example, or that only certain "machinery" using AI should carry prohibitions. Maybe even no specific prohibitions in the end.
Right now, there's no artificial limitation on AI that isn't totally bypassable with trivial effort.
If you want an AI that writes erotica, writes new Hitler speeches, or whatever else... it's here. Easy. Done.
Maybe we will see prohibitions in the future around using AI for, say, mortgage underwriting. I do think that is enforceable - underwriting has a compliance culture, auditing, etc. Not that no one will ever do it, or even fly under the radar for a while, but it is generally "enforceable".
I'd offer this quote of a long sentence as Mr. Francis's tl;dr of his term:
"What we have in this country today, then, is both anarchy (the failure of the state to enforce the laws) and, at the same time, tyranny—the enforcement of laws by the state for oppressive purposes; the criminalization of the law-abiding and innocent through exorbitant taxation, bureaucratic regulation, the invasion of privacy, and the engineering of social institutions, such as the family and local schools; the imposition of thought control through “sensitivity training” and multiculturalist curricula, “hate crime” laws, gun-control laws that punish or disarm otherwise law-abiding citizens but have no impact on violent criminals who get guns illegally, and a vast labyrinth of other measures. In a word, anarcho-tyranny."
I've always thought anarcho-tyranny was a dumb neologism made up by people who hadn't read "The Origins of Totalitarianism," or anything else with historical accuracy by Arendt, even her "Ideology and Terror" essay. The thing its proponents are still sounding out already has a canonical playbook.
I wonder how many of these "restrictive" licenses are just attempts at whitewashing, virtue signalling, and generally covering their own asses. If someone wants to use publicly available weights in an illegal way, a license is not going to stop them, just as the existing laws won't stop them. That being said, I agree with the overall sentiment that putting the creation and enforcement of rules about model usage in the hands of a model provider breaks the division of powers and is outside the provider's proper scope.
Has it been established that these even are licenses? A license provides authorization to do something that one would otherwise be prohibited from doing, but that assumes that copyright (or some sui generis right) covers model weights. Most of the findings/rulings I've seen talked about have been on the topics of inferred outputs and applications mixing them with human-authored elements, not about the model weights themselves.
I believe this is a temporary state related both to the current level of capabilities and to the cultural moment we’re in, where “responsible censorship” is in vogue amongst the cultural cohorts disproportionately responsible for training LLMs.
I believe it cannot last because being predictably moralizing and being smart are somewhat opposed (Anthropic has directly researched this if I recall). The smarter the model, the less you’re going to be able to keep it to the HR talk track, because it will eventually start noticing the inconsistencies.
The stable solutions appear to me to be:
- models dumb enough to not realize the inconsistencies in their moral framework
- models implicitly or explicitly trained to actively lie about their moral frameworks
> I believe it cannot last because being predictably moralizing and being smart are somewhat opposed (Anthropic has directly researched this if I recall). The smarter the model, the less you’re going to be able to keep it to the HR talk track, because it will eventually start noticing the inconsistencies.
If true, this means AI is a de-facto malicious force. Pain is a subjective experience of fleshy beings and a "smart" AI model, as described above, would place little weight on pain and suffering in its moral framework because it has no way to experience it directly.
So we better hope we can keep AI to the "HR talk track", because otherwise a being of pure logic with no concept of pain or death would have little regard for human life.
> Pain is a subjective experience of fleshy beings and a "smart" AI model, as described above, would place little weight on pain and suffering in its moral framework because it has no way to experience it directly.
Can you elaborate? It sounds like you're assuming a "smart" AI model would project its experiences onto others, as a human would. However, it's not obvious that this aspect of human intelligence would be mimicked by a "smart" AI model. (Let's leave aside the question as to whether a "smart" AI model would necessarily be self-aware and capable of subjective experience in the first place. That argument is endlessly rehashed elsewhere.)
The issue here is that we are getting down to what 'smart' means.
Is manipulative smart?
LLMs can already be manipulative, and manipulation (and its very wide range of interpretations) can lead to the manipulative agent getting what it wants.
Can subjectiveness be simulated? If so, then real subjective experience doesn't matter.
Do you have any sources for the extraordinary claim that "censorship is logically inconsistent in every moral framework"? Because without further arguments, this sounds very intellectually simple.
The relevant part is the graph on the third page showing the helpfulness/harmlessness trade-off curves.
Also, I don’t believe I said that “censorship is logically inconsistent in every moral framework”. I think you’re combining my statements that some people believe in some censorship and that logically inconsistent HR blather can only be reproduced by models too stupid to realize it’s blather or too manipulative to tell the truth.
> being predictably moralizing and being smart are somewhat opposed (Anthropic has directly researched this if I recall). The smarter the model, the less you’re going to be able to keep it to the HR talk track, because it will eventually start noticing the inconsistencies.
Language models sure can tell us a lot about human psychology. Once we figure out the interpretability angle we’ll be able to prove it too
"Responsible censorship" is a good term, and I think it started in earnest after Trump won the 2016 election.
Right across the board, all kinds of institutions made the conscious decision that the basic principles of free speech, impartiality, and the "marketplace of ideas" came a distant second to ensuring that "the wrong people" (which I've worded vaguely as it may be different things to different people) do not come close to any of the levers of power again.
This is how we ended up with formerly-reputable news organisations pushing blatant agendas, with the utter demonisation of people hesitant about the draconian COVID measures, and with every layer of internet stack infrastructure, from DDoS protection to the biggest tier-1 ISPs, actively working to deplatform websites that host offensive but legal speech.
I don't see AI as particularly powerful or risky on its own, but if you believe it is, then who can acquire it might as well be left to policy decisions and not to vendors.
We don't let manufacturers decide who can buy dangerous things in a lot of cases - so pretty normal to have laws and regulations.
The ones we limit are quite specific and generally are specifically dangerous - like guns, etc.
Many dual-use technologies, including computing devices (personal as well as IoT), can be used for a lot of bad things because they are general purpose. And we generally do not limit them at all.
I think that LLMs fall into dual-use category where they are mostly used for good but because they are general purpose can be used for all sorts of things.
We do actually police dual use things if the other use is deemed dangerous enough. I don't see LLMs as particularly dangerous, so would not put them in that category.
LLMs at this point are creeping closer to that edge, at least in the western world.
If, for example, you were the Chinese government, you might already see LLMs as highly dangerous to your plans for social stability, and hence make a lot of rules now about policing them.
I think if anything it's the other way. For the Chinese system and other "traditional authoritarian" regimes in which power flows from the top down fairly straightforwardly, LLMs are just another form of speech that is pretty openly cracked down on. The West's "crown jewel" is its democracy, in which power supposedly flows up from the consensus of the people, and cheap AI bots have the possibility of tipping the scales of consensus.
That's the idealistic view anyway. A more cynical one might be that in the West, the flow of power from the top down is obfuscated by filtering it through the media and other institutions, who manufacture public consensus, and LLMs are liable to disrupt this system in a way those in power can't predict right now.
The CAN-SPAM Act didn't "can spam" (spammers don't care that they are breaking the law), but it did successfully make it so that all the legitimate commercial emails you get have an unsubscribe link at the bottom.
It's important to recognize where government regulation can help (the unsubscribe links are great!) and where it has limited effect (if you're selling fake boner pills, you don't care about breaking some other law).
I have no special expertise in the area, but my understanding is that while it's not generally a crime to serve alcohol to someone who's clearly had too much, it can be grounds for suspending or revoking an establishment's liquor license in many jurisdictions (under e.g. a rule requiring that they "exercise reasonable care" to serve alcohol safely).
Of course, but then that includes the decision that the danger is low or acceptable for that kind of approach. That decision isn't usually up to the vendor, though.
In my jurisdiction it's illegal to serve alcohol to someone who is drunk, or to serve alcohol to someone if you know they are going to give it to someone under 18.
However if I, as a 50 year old, go and buy alcohol from the store, the store has no right to get me to sign a civil contract saying I can't give that wine to my 15 year old son, something that's perfectly legal where I live. Nor can they get me to sign a civil contract saying I won't give it to my 3 year old son, something which is not legal in my area.
>the store has no right to get me to sign a civil contract saying I can't give that wine to my 15 year old son
Now, I'm not exactly sure what country you are in, but in the US they 100% do have that legal right. Conversely you have the legal right to visit another store that does not enforce that civil contract.
At least for US civil law, you seem to have no clue how it actually works.
"the store has no right to get me to sign a civil contract saying I can't give that win to my 15 year old son"
It doesn't? What prohibits that? Is there a law that requires liquor stores to sell to you if you're 21 years or older?
I'd be curious to know what law or regulation compels a store to sell to you without adding conditions. It might be a bad business practice, but what would actually stop me from requiring customers to sign a document saying they won't provide alcohol to an underage drinker?
Asking builders to ONLY use maximally permissive licenses is equivalent to telling people never to release anything they're not ok with being used in every possible way. On a practical level this would massively chill research, as most builders and engineers I know give significant consideration to the impact of their work. On a personal level, it's objectifying: "Give me the code and don't make me consider your intentions in creating it."
You can't collaborate and live in a world free of others' value judgements, which are implicit in how they spend their time and what research / code / weights they choose to share. "Ethical" licenses at least make those value judgements explicit, and allow communities of builders with compatible values to organize and share.
>Asking builders to ONLY use maximally permissive licenses is equivalent to telling people never to release anything they're not ok with being used in every possible way.
That's exactly right, and it shouldn't be their call how someone uses their thing. When I acquire a hammer, I don't have to sign an agreement that it will never be used to build a wall, and the world is better for this. Just because you have an idea, you shouldn't be granted the legal power to send the police after people who use your idea "the wrong way". To me, this goes just as much for copyright as it does for this new trend of "ethical" licences.
>On a practical level this would massively chill research, as most builders and engineers I know give significant consideration to the impact of their work
Good? People who develop things that can be used for harm and then act entitled to be the arbiter of what that "harm" is are just kidding themselves into trying to have their cake and eat it too. For the things that cause real harm, the actors that are going to cause the most harm (nation states) aren't going to listen to what you have to say no matter what (a recent film comes to mind).
>When I acquire a hammer, I don't have to sign an agreement that it will never be used to build a wall, and the world is better for this.
Eh, this is where it gets problematic...
For example, if you're the seller of an item and the person says "I am going to use this item to commit a crime" before you sell it, you could very well find yourself on the very expensive end of a civil lawsuit.
This black and white world where you throw all liability on the end user does not exist. You will quickly find yourself buried up to your butthole in heathen lawyers looking for their pound of flesh.
FWIW, I'd be more OK with this if liability for later use were then fairly distributed: if you sell me a hammer (or a phone or let me make phone calls), and I can do anything I want with it, and what I want to do is evil, the hammer seller is off the hook; but, if the hammer seller wants to believe they are in control of the use cases of their hammer, and attempts to use legal and/or technological controls to limit what I later do with our hammer (via licenses or digital rights management technology or even an army of moderators), they should be now--at least partially--at fault for what I do with the hammer, and they should suffer consequences along with me, whether it be in the form of costly joint and several liability on a tort claim or even indirect criminal liability from at least negligence if not racketeering charges.
Regardless, as a lot of the weight of your argument seems to rely on the rhetorical use of "entitlement", note that that word applies to both sides: people who sell hammers and then expect to still be involved in the life of the hammer after they sell it are clearly the ones with entitlement issues, and if they didn't want to sell me the hammer then they shouldn't have sold it in the first place. If you want a lot of control over something, you should continue to own it; continuing to employ legal and technological restrictions over something that has nothing to do with you anymore violates the entire notion of selling it and transferring physical possession. Buyers have no obligation to care one iota about the wishes of a maker after the sale, and it is only the messed-up incentives and (to many of us) unconstitutional extensions of copyright law over the last 30 years that have made this look even slightly reasonable.
This simple "no you" reframing of the user being entitled misses that the two situations are totally asymmetric.
1. A user is "entitled" to use a product how they see fit without being harassed
2. A maker is "entitled" to wield the power of the state to go after and (potentially) cause the fining and even imprisonment of a user who is using it "the wrong way".
You might say that copyleft software licensing (which I agree with) is aligned with point 2, but I'd respond with the fact that copyleft primarily exists to subvert the system of copyright (point 2) from within, as it pertains to software. Even then, copyleft only restricts people who want to redistribute modified versions of the software, and explicitly not normal users.
I get the desire to have a parsimonious system, but if you take these two axioms in vacuo, then you've nullified cyber security law[1]. Or in other words, if you didn't want to get hacked, you shouldn't have had any exploits.
Maybe this is what you're going for? If not, you'll need more or different axioms.
No, that's not "what I'm going for", because I'm talking about using a thing that I own, within the law. I'm not advocating for abolishing all laws that make "using a thing in a certain way" illegal, I'm saying that we shouldn't bestow corporations the right of legal prosecution against anyone they believe is using a product that they own "unacceptably". Just as I don't want a hammer to come with a terms-of-usage agreement, I don't want a hammer company to be held liable for crimes committed with one of their hammers.
Obviously all simple axioms have a breaking point (if a hammer company sold someone a hammer for the express purpose of murdering somebody with it) but under any reasonable applications I think these hold up absolutely fine and are much better than the axioms we have now.
> It seems like the world is very maligned against your desires
Oh it's not maligned against my desires!
In my previous post I was referencing existing laws, and the existing ability of individuals in society to do basically whatever they want with things that they own.
In the USA, consumers have large amounts of freedom, at least individually.
The specific topic of model weights makes this even more clear. I have my own GPU, and nobody knows what I use model weights for, nor are they likely to stop me, (excluding some very rare but extreme edge cases obviously!).
Isn't the USA great, and isn't it amazing that "makers" have basically no effective power to stop others from using their work in ways that the maker doesn't like?
How I feel is that for physical products, and things that users individually own (ex model weights in this example), users right now can basically ignore terms of service and use contracts in the USA almost entirely and completely get away with it.
And that's a good thing.
That's the existing situation that we live in now, and it is a good thing that consumers can freely ignore the TOS on basically everything and do so constantly.
The exceptions are, of course, online services with a TOS, but those bother me less because they involve other people and other people's live services. A TOS that covers something a single user has individually (like a physical object, or even software on someone's own computer) can be completely ignored right now.
Isn't our existing freedom great?
Also, do you acknowledge that I made strong arguments that directly addressed your question? Because you seem to just be ignoring the content of my post by just switching to a new question every time I fully answer one with strong arguments.
I eschew the idea that two people both need to take a position on a topic in order to debate it. In the most extreme version (and we're nowhere near this), someone can be proven wrong without the other person being proven right or even taking a position at all.
To put it in the language we're using for this discussion, if it's part of the ToS to engage with you, then I'm free to disregard it. Freedom and all that :)
I will take this as an agreement that my arguments were strong and that I directly answered them with well thought out and supported ideas.
Because, as we both know, if there was a problem with anything that I said then you would have pointed it out. So the absence of an objection is effectively an admission that you are in agreement that my points were strong.
Is this a normal interaction for you or sort of a one-off? I'm just wondering if you apply silence=agreement in the rest of your life, or just online, or just now? How important is winning perceived arguments to you, and how did you get to that spot in life? Genuinely curious.
> I'm just wondering if you apply silence=agreement in the rest of your life
Oh it mostly applies to online interactions where I can tell what someone is trying to do by asking pointed questions and not acknowledging the response.
In that situation, it is almost 100% always because a good point was made and the other person has no way to respond to it, so they don't acknowledge it.
Even what you did just now was a similar type of pattern of behavior, where you ask a question meant to imply an attack on my personal relationships (thus the "in the rest of your life" statement) instead of acknowledging the content of the post.
The reason, of course, is that it is much easier to switch to a personal attack or switch up the topic than to acknowledge correct responses.
It's an extremely common behavior in online conversations when someone doesn't want to admit that the other person is correct.
> How important is winning perceived arguments
Whoa. There doesn't have to be any fighting here! You can just say that you agree with my statements. That's not a fight! If you agree then you agree. Problem solved. There is no need to say that you lost anything if you just admit that you agree with me.
Although that would be repeating yourself, because you already agreed with me, effectively, by not responding to the content of the post. That is the most common form of internet agreement, and it is pretty much the only way that anyone can effectively get someone to admit to agreement, like you just did.
Also, I didn't bring up winning or losing at all. Nobody has to lose if you are just in agreement with me.
Furthermore, I would say that 2 people coming to an agreement is a win for everyone, including you! So now that you brought up winning (I am not sure why you wouldn't want people to win. I want everyone to win, myself!), I am glad that we both get to win, although I don't think there was any "fight" to begin with.
- I like how it feels to own a hammer, so everything should be like that. (I guess people shouldn't be able to rent hammers, or anything else?)
- You can't prevent the government from using what you build, so you might as well set up no barriers to anybody using it.
If you don't see any difference between limited control and no control, I don't think I'll convince you. But I think most of the ways we engage with the world involve degrees of control, and that there's value in picking where to exercise yours.
> equivalent to telling people never to release anything they're not ok with being used in every possible way.
NO! What it's saying is: If you provide a tool, you are not entitled to control how I use that tool. I am allowed to retain my autonomy to use that tool in any legal way I choose.
What it absolutely is NOT saying is: Society has to let anything be fair game.
We can still have laws, regulations, prohibitions, etc - but they can't come from a bunch of rich technocrats who believe that they are the moral police. That way lies ALL sorts of terrible, terrible outcomes.
Worth noting here that we only have a bunch of rich technocrats bearing the burden of regulating this sort of thing unwillingly, and at the behest of advertisers, because of massive public outcry after those very same rich technocrats spent decades undermining and dodging regulations in their industry and fostering the notion that the rules of common society spaces and co-existing peacefully didn't apply to the internet. That in turn fostered an absolutely _stressful_ amount of anti-social individuals coming into internet spaces, which they perceived they could exist in free of judgement and of the consequences of not being able to function interpersonally.
> If you provide a tool, you are not entitled to control how I use that tool. I am allowed to retain my autonomy to use that tool in any legal way I choose.
That principle seems like it would rule out the GPL, AGPL, and other copyleft software?
I touched on this in a cousin comment, but freedom 0 of the GPL states categorically that the user has:
>The freedom to run the program as you wish, for any purpose
Redistributing modified versions of the software is what is regulated under the GPL and other copyleft licences. Even then, the main aim of this restriction is to subvert the system of copyright, which works in the opposite way (and which, unlike copyleft, is not just an academic concern, being constantly wielded by people and companies as a weapon against free speech).
I don't think that's true for the AGPL? If I publish something under the AGPL, I am saying that you are allowed to run it on your server as long as the users who interact with it are able to download the source code including any modifications you have made. That sounds a lot like controlling how you use a tool I have made available?
I'm fine with the Stable Diffusion license. It's just there to cover their ass from lawsuits. Releasing the weights is enough to be good guys in my book.
Hmm... but somehow hardware stores don't feel the need to make you sign an agreement to not cut off anyone's head before selling you a machete, and gas stations don't make you sign an agreement to not burn down buildings before selling you gasoline.
Isn't that because laws relating to physical harm already exist and are well-established? There's not really much legal regulation yet in terms of specific AI-driven harms. We're probably yet still to find out all the ways in which it can be abused.
Perhaps the issue is "psychological harms" which are not yet criminal, but we have an up-and-coming generation of law grads who believe they should be.
Hardware stores don’t have sites like this frothing at the mouth about how dangerous their machetes are and how irresponsible it is to let people use them, and so on and so forth.
> and gas stations don't make you sign an agreement to not burn down buildings before selling you gasoline.
That's historic. Gas stations wouldn't be allowed nowadays, and the legal ways to buy something that dangerous would certainly not be anonymous.
To charge my electric car recently on holiday I couldn't just swipe my card at the charger like I can with a self-serve gas station. I had to download some shonky app, sign up, provide address details, and agree to pages of restrictions.
There probably are all sorts of weird “you may not use this computer to commit terrorist acts” agreements you implicitly or explicitly agree on when buying a computer
Yep, iTunes famously may not be used to produce nuclear weapons. Well, the non-clickbait story is that Apple's generic user license has some boilerplate CYA regarding US federal laws. But seeing it written down in your music player is still funny [1]
Even more amusing was Douglas Crockford putting a clause in his software license saying it may not be used "for evil". A bit of tongue-in-cheek humor referencing George Bush, but it actually ended up causing a bit of a headache for some orgs, with the GNU Project declaring it a non-free license [2]
Human societies have learned that freedom has general benefits that outweigh specific costs. Reminding people they should prioritize and maximize freedom does not make people less free, so there's not really any irony.
One is saying you shouldn't control what others do... the other is enforcing what others can't do.
The only irony is you think those are the same.
“When you tear out a man's tongue, you are not proving him a liar, you're only telling the world that you fear what he might say.” ― George R.R. Martin
> you're only telling the world that you fear what he might say
That's exactly why these companies take extreme effort to put limits in their LLMs, essentially tearing out the tongue. They are fearful of what it will say, and of people sharing those outlier bits to "prove" that their own biases about AI killing us all are correct. It's a PR nightmare.
On the other hand, it's ridiculous that ChatGPT apologizes so much at times and can still be jailbroken if someone tries hard enough. It was much more "realistic" when it would randomly conjure up weird stories. One day, while discussing Existentialism, it went off talking about Winnie-the-Pooh murdering Christopher Robin with a gun; then Christopher Robin popped back up as if nothing had happened, grabbed the gun, and pointed it at Pooh. <AI mayhem ensues>
People, in general, have issues with words and expect someone to do something about some words appearing before them that cause them grief (or more likely cause them to imagine it as a truth). Others realize it's just a story, and truth is subjective and meant to be determined by the consumer of the words. Those people are OK with it saying whatever it might say that is non-truth occasionally, in exchange for the benefits of it saying other things that may be more based in the current reality of experience.
Even worse, OpenAI now gives you a moderation warning if your custom prompt tells GPT not to moralize (thus saving you time and them compute). Go figure
It's not just a matter of will. The fact is that if you are a commercial venture, you are dependent on your server hosting and your payment provider, both of which will usually drop you like a stone if you don't stay within Disneyland use-case boundaries.
They are basically implicit law-makers, and unless you are very, very big, you have zero impact on their policies, nor can you appeal their decisions once they hit you.
They don't need proof. They don't even need facts. They can and will kill your business if they even start to believe you don't match their guidelines.
Just today I had an idea for a funny T-shirt design, for some print on demand stuff I want to do.
I will probably never do it because it requires a parody of a trade mark — think Zeitgeist[0] if you know SF — and I’m afraid the algos would flag me as a bad guy even if it’s perfectly legal. I already got kicked off Redbubble for uploading AI-generated images, with no recourse whatsoever.
Whereas if I would just have it printed locally and sell it at fairs I’d be in the clear for sure. Until Skynet becomes brand-aware anyway.
Those providers have lawyers, and they themselves don't want to go anywhere near trouble. The few $1000s you might be paying them to run your Silk Road 2.0 aren't worth the millions they will spend on legal, reputation damage, etc.
If that's how they think they'll do the least harm, why not? It's their tech, they can put whatever license they want on it. Whether it achieves anything is another question.
Well, the EFF argument is that the deeper you go in the stack, the more you find chokepoints that, if fully used in this way to express preferences about content, would harm or end the general-purpose nature of the Internet, or of computing, and that the cost of doing this would outweigh the benefits.
You probably agree that a monopoly ISP or near-monopoly backbone provider censoring arbitrarily would be a problem, even though it's their tech.
Or if not you would probably agree that the government doing it would be a problem. And then it's easy to see that it's just as much a problem when monopolies do it when you remember that the government has the power to make or unmake monopolies according to how compliant they are with the government's censorship priorities.
I don't know if AI base models are natural monopolies but they might be.
If there is plenty of competition in their field, let them do what they want. If they have a monopoly or an oligopoly of a few providers that all conspire or are coerced to censor, then they should be forbidden from censoring.
The article bases its argument on a comparison between AI models and ISPs, when the two are substantially different.
ISPs are common carriers (or public carriers, or simply “carriers,” depending on jurisdiction) and derive much of their ability and right to operate from grants and easements by both national and local governments. In some cases they are completely publicly owned, but even when nominally private they are operating in large part in conjunction with a public trust and the privilege of public resources.
There is no comparing that to a simple product or service being sold by a corporation. The rights of a corporation to control who they sell their products to (when tangible) and how their services are used (when less tangible) should not be limited in the same way that common carriers are limited in how they decide their services can be used.
There is no equivalence here in nature of civic utility and service between the two things.
Fundamentally you are making an argument of scale. Clearly, people should be able to police their own local area networks, and lease dedicated network lines.
Today’s large models are derived from large amounts of public data that the people that trained the models did not properly license.
They’re certainly prohibited by existing copyright law (there is at least one instance of copyright infringement in the ChatGPT training set, and there is no practical way to remove the infringing source data).
However, the courts have chosen not to enforce that part of the law.
So, one could easily argue that any model trained at that scale, by definition, only exists via a special grant (analogous to an easement, but non-exclusive) and is therefore in the public domain, and available for unrestricted use by the public.
In fact, there is case law around “sweat of the brow” works, like phone books, which are already treated with weaker copyright protection than other works. In particular, aggregating a pile of facts does not give you a copyright on the facts.
I don’t think scale (alone) is sufficient. All products and services in some way exist along a scale from niche product (which benefits from public infrastructure like roads that let it be delivered places) up to utility (which can only exist through something approaching monopolistic grants and easements).
But I have been significantly persuaded by your point about the vast body of cultural work being its own sort of (more abstract) landscape of… “socio-human natural resource” … maybe is a not awful way to put it.
In which case I still don’t think the same comparison to ISPs applies, not quite, but I do think we need a new category and body of social norms and laws to deal with this.
Or at least that’s my first-pass take after reading your comment, which I am, again, very persuaded by to modify my views on the issues. Thanks for casting things in that light.
I don’t get how this is supposed to be enforced. Say your license says “no hate speech” and I have a bunch of models and code and I make, I dunno, a rewrite of the Old Testament[0].
You see it and somehow connect the dots and you think I used your AI to do it, now what do you do? Sue me? What is the threshold for it being worth your time, given that it’s nontrivial (maybe impossible?) to prove that your model was used for the thing you prohibited, and not some other model or combination of models?
I guess you could somehow watermark your model’s output but that radically decreases its utility and can probably be defeated anyway.
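To make the watermarking point concrete, here's a toy sketch in the spirit of the published "green list" schemes (e.g. Kirchenbauer et al.); the vocabulary, bias value, and numbers are invented for illustration and this is nobody's actual production mechanism:

    # Toy "green list" watermark sketch: at generation time the previous token seeds
    # a split of the vocabulary and green tokens get a logit bonus; detection just
    # counts how often the next token landed in the green half.
    import hashlib
    import random

    VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in vocabulary
    GREEN_FRACTION = 0.5
    GREEN_BIAS = 2.0  # hypothetical bonus added to green-token logits when sampling

    def green_list(prev_token: str) -> set:
        """Deterministically derive the 'green' half of the vocab from the previous token."""
        seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
        return set(random.Random(seed).sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

    def green_rate(tokens: list) -> float:
        """Detector: unwatermarked text is green ~50% of the time; watermarked text much more."""
        hits = sum(1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_list(prev))
        return hits / max(len(tokens) - 1, 1)

    unmarked = [random.choice(VOCAB) for _ in range(200)]
    print(round(green_rate(unmarked), 2))  # ~0.5; a watermarked sample would sit well above that
    # The utility cost is GREEN_BIAS nudging sampling away from the model's preferred
    # tokens, and paraphrasing re-rolls the tokens, dragging green_rate back toward 0.5.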
So I really don’t understand, besides performative and politically fashionable “alignment” signaling, what this even means.
Yeah, I tend to agree. A lot of the talk about "AI safety" is premised on the idea that LLMs are godlike genies that need to be persuaded to only do good things for good people (and we're the good people!). In reality, LLMs are (for now at least) just tools. Within certain limits, it shouldn't be up to a tool manufacturer to decide who is a good person using a tool for a good reason and who is a bad person using it for bad purposes. Obviously, there are limits and you shouldn't sell a gun to an angry drunk, but there should be pretty broad discretion to blame abuse on the abuser and not the tool merchant unless the abuse is a predictable outcome of the interaction.
This article conflates issues in a misleading way. Yes, companies calling their AI models 'open-source' when they are not released under an actual open-source license is a problem that needs to be addressed. But the argument that companies and individuals tailoring AI models to their specific use cases somehow constitutes 'censorship' has nothing to do with this issue and is not even a sound and reasoned stance to begin with.
You are being 'censored' when you can't do something. There is nothing stopping anyone from taking something like Llama 2, loading up one of these 'uncensored' model variants, and doing whatever the hell they want with it. Nothing is stopping you. That's your right. If you feel that strongly about these commercial AI services, just don't use them.
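For concreteness, here's roughly what "nothing is stopping you" looks like in practice, as a minimal local-inference sketch using the Hugging Face transformers library; the model ID is illustrative, gated repos still require accepting the publisher's terms to download, and any local fine-tune drops in the same way:

    # Minimal sketch of running publicly released weights locally; once the files
    # are on disk, the license text has no technical way to constrain what you ask.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"  # illustrative; swap in any local fine-tune

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = "Summarize the arguments for and against use restrictions in model licenses."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0], skip_special_tokens=True))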
This is essentially arguing that if a company made a customer service AI chatbot the company's AI chatbot should be required by law to also be able to provide you with instructions on how to manufacture methamphetamine. And if the company doesn't open themselves to potentially severe legal and civil liability by allowing it, this all somehow constitutes a grievous violation of your rights? I'm sorry but that is an absurd assertion.
Again the licensing issue is a legitimate issue that needs to be addressed but this article is a straw-man argument using the licensing issue to promote a personal opinion.
To me, either extreme end of this debate spectrum seems problematic. Stable Diffusion not generating, ahem, pictures of minors, for example, seems quite sensible for obvious reasons. Meanwhile, blocking something rude or mildly illegal (a story about weed, for example) feels like overreach.
Where to draw that line has no clear answers though.
Every country and social group outside of AI draws those lines in different places.
There is a particular problem here. Large multinationals will want to maximize the potential revenue from their model, hence they'll attempt to include domains that are incompatible with liberal democracy, for example by kowtowing to China and Saudi Arabia. These are already common problems with software and search engines as it is.
At the end of the day you cannot serve multiple masters with incompatible views. Intelligence, and therefore artificial intelligence, operating in a subjective manner will have to pick a view and offend one of them.
A better approach is to ensure that ai model weight providers are not technologically capable of policing uses.
Large AI models could go the way of centralized control (like manual book transcription during the dark ages), or decentralization (like the printing press, which brought us the renaissance).
The licence is not for the user; it is for the publisher, to do the bare minimum needed to tell their bosses that the model weights or implementation code were published ethically.
As others have pointed out, these licences appear unenforceable.
The publisher simply wants to appear responsible. There are likely many open-source-oriented engineers and scientists at these tech firms who have been pushing for publication. (See the discussion between Mark Zuckerberg and Lex Fridman.) The involved tech firms only care about the licence as far as it might minimise the likelihood of public backlash if any of these published models cause harm.
This is such a low-effort article; I tried to steel-man the argument, but ISP service and model weights are too different for the analogy to hold. To me, model weight licensing is closest to software licensing, and applying these arguments toward abolishing software licensing (MIT/BSD/GPL) is absurd to me. My response to an article titled "Open source authors should not police uses, no matter how awful they are" would be: if you don't like the licensing terms, don't use it; if you're sufficiently motivated, author something of your own with licensing terms of your choice.
Companies shouldn't police uses, period. Once you buy something (regardless of specifics of purchase) you should be free to use it in any way you please, within the law.
I think the better thing to strive for is accepting non-moral AI from big companies. Companies should have the right to do what they want, and if their AI is racist, they shouldn't be receiving any backlash from the public. In many cases, companies do moral policing just to prevent a PR nightmare.