> We believe developers and users should have the flexibility to use our services as they see fit, so long as they comply with our usage policies. We're exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behavior in this area.
Seems even OpenAI can't resist the massive amount of money to be made in autogenerated smut. They've probably seen the huge popularity of their less "morally scrupulous" competitors and decided they want a piece of that pie.
It makes sense for them to start allowing it; unlike their other rules, this one doesn't seem to concern violating a law, someone's privacy, or copyright.
I still get why they made it blocked by default: it would be a goldmine for clicks to create "news" stories like "ChatGPT can generate smut" and "How ChatGPT is harmful to children."
Were they ever not interested in it? It's pretty blatantly obvious that all of the hand-wringing over AI safety was an excuse for their pivot into closing off and monetizing everything. I mean, nobody really thinks they were just so afraid about what humanity might do with GPT3 that they simply couldn't release the weights and instead had to offer it through a monetized inference API... right?
Not really surprised that they did, since it's unclear how else they could possibly proceed. But the outright dishonesty about why, and the cognitive dissonance surrounding the whole thing ("Open" AI? lol), will make this an unavoidable recurrence in any discussion about them. Gradually, many of the safeguards will fall, simply because the alternatives with fewer safeguards are probably "good enough" that many see no issue in eschewing OpenAI entirely if they can get the job done elsewhere without worrying about it. When it comes to smut, the bar for "good enough" can get pretty low, so I'm not surprised.
(edit: Though I think it also does depend. No doubt they have their eyes set on regulatory capture too, and being the best at stupid safeguards could give them an advantage.)
GPT3 wasn't and isn't the superhuman intelligence that Altman and others fear. They knew this and pretended otherwise anyway. Pretty cut and dried, in my opinion.
>No doubt they have their eyes set on regulatory capture too
Sam Altman has already made the rounds to argue for exactly this. Fucking crook.
>It's pretty blatantly obvious that all of the hand-wringing over AI safety was an excuse for their pivot into closing off and monetizing everything.
The playbook was "appease one side of the political aisle as much as possible to minimize the chance bipartisan action gets them shut down Napster-style" (which is still a massive hole in their business model, for obvious reasons I should hope).
Censoring the model so it only outputs progressive-approved content appears to have been effective, at least for the moment.