> The only sensible model of "alignment" is "model is aligned to the user",
We have already seen that users can become emotionally attached to chatbots. Now imagine if the ToS is "do whatever you want".
Automated catfishing, fully automated girlfriend scams. How about online chat rooms for gambling where half the "users" chatting are actually AI bots slowly convincing people to spend even more money? Take any online mobile game that is clan-based, now some of the clan members are actually chatbots encouraging the humans to spend more money to "keep up".
LLMs absolutely need some restrictions on their use.
> chatbots encouraging the humans to spend more money ... LLMs absolutely need some restrictions on their use.
No, I can honestly say that I do not lose any sleep over this, and I think it's pretty weird that you do. Humans have been fending off human advertisers and scammers since the dawn of the species. We're better at it than you account for.
In 2022, reported consumer losses to fraud totaled $8.8 billion — a 30 percent increase from 2021, according to the most recent data from the Federal Trade Commission. The biggest losses were to investment scams, including cryptocurrency schemes, which cost people more than $3.8 billion, double the amount in 2021.
Furthermore "If it were measured as a country, then cybercrime — which is predicted to inflict damages totaling $6 trillion USD globally in 2021 — would be the world’s third-largest economy after the U.S. and China."
Governments and laws are reactive: new laws get passed only after harm has already been done. And even in governments with low levels of corruption, laws may never get passed if there is significant pushback from entrenched industries that benefit from harm done to the public.
Gacha/paid loot box mechanics are a great example of this. They are user-hostile and serve no purpose other than to be addictive.
Mobile apps already employ slews of psychological modeling of individual users' behavior to manipulate people into paying money. Freemium games are infamous for letting you win and win, and then suddenly not, slowly on-ramping users into paying to win, with the game's difficulty adapting to each individual user to maximize $ return. There are no laws against that, and the way things are going, there won't ever be.
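To make that mechanism concrete, here's a minimal toy sketch of what such a per-user difficulty loop could look like. Everything in it (the `PlayerModel` fields, the thresholds, the update rule) is hypothetical, not any real game's code; it's just the shape of the pattern:

```python
# Toy sketch of adaptive difficulty driven by a per-player spend model.
# All names and numbers are illustrative, not taken from any real game.

from dataclasses import dataclass

@dataclass
class PlayerModel:
    win_streak: int = 0            # consecutive wins so far
    spend_likelihood: float = 0.1  # estimated chance a paywall converts

def next_difficulty(p: PlayerModel, base: float = 0.5) -> float:
    """Return a difficulty in [0, 1]: easy early to hook the player,
    then hardest exactly when the model thinks they'll pay."""
    if p.win_streak < 5:
        return base * 0.4  # honeymoon phase: let them win and win
    # Past the hook, scale difficulty with predicted willingness to pay.
    return min(1.0, base + p.spend_likelihood)

def on_match_result(p: PlayerModel, won: bool, paid: bool) -> None:
    """Update the per-player model after each match / purchase prompt."""
    p.win_streak = p.win_streak + 1 if won else 0
    # Crude online update: paying raises the estimate, refusing lowers it.
    if paid:
        p.spend_likelihood = min(1.0, p.spend_likelihood + 0.2)
    else:
        p.spend_likelihood = max(0.05, p.spend_likelihood * 0.9)
```

The point isn't the specific numbers: a few lines of per-player state are enough to time the paywall to each individual, and putting an LLM in the chat loop just makes the "keep up" encouragement side of it more persuasive.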
I guess what I'm saying is that sometimes the law lags (far) behind reality, and having some companies go "actually, don't use our technology for evil" is better than the alternative of, well, technology being used for evil.