ChatGPT isn't a legal entity, but OpenAI is, and Altman has already recommended to Congress that coming regulation should make AI companies liable for the text their models produce, with no Section 230 exemption.
I can see it happening already, even without legislation. Section 230 shields platforms from liability for user-generated content, but ChatGPT's output isn't user-generated. It isn't even a recommendation algorithm steering you toward other users' content telling you why you should kill yourself; the company itself produced the content. If I were a judge or justice, that would be cut and dried to me.
Companies with AI models need to treat those models as if they were employees. If your employee starts giving confidently bad legal advice to customers, you need to nip that in the bud or you're going to have a lot of problems.
Why should OpenAI be more liable for a tool they've created than any other toolmaker whose tool is intentionally misused and whose warnings are ignored?
If I wrote text in Microsoft Word and, in doing so, had a typo in (for example) the name of a drug that Word corrected to something incorrect, is Microsoft liable for the use of autocorrect?
If I were copying and pasting data into Excel and some of it was interpreted as a date rather than some other format, resulting in an incorrect calculation that I didn't check at the end, is Microsoft again liable for that?
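To make that Excel example concrete, here's a minimal Python sketch (using the dateutil library rather than Excel itself, so treat it as an illustration of the same failure mode, not a reproduction of Excel's behavior):

    from dateutil import parser

    # Eager date-parsing silently reinterprets values that were never
    # meant to be dates. The best-known casualties are gene names:
    # spreadsheets have mangled "SEPT2" and "MARCH1" into calendar
    # dates in published datasets for years.
    for value in ["SEPT2", "MARCH1", "1/2", "TP53"]:
        try:
            print(f"{value!r} -> {parser.parse(value)}")
        except (ValueError, OverflowError):
            print(f"{value!r} -> left alone (not parseable as a date)")

The point is how silent the coercion is: nothing errors, the data is simply different afterward, and the user only finds out if they check.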
At the bottom of the ChatGPT page, there's the text:
ChatGPT may produce inaccurate information about people, places, or facts.
If I can make an instance of Eliza say obscene or incorrect things, does that make the estate of Weizenbaum liable?
ChatGPT is qualitatively different from a tool like Microsoft Word. To suggest they are equivalent is so asinine it doesn't even warrant entertaining the idea.
A sophisticated word processor corrects your typos and grammar; a primitive language model accidentally persuades you to kill yourself. Sam Altman, Christina Montgomery, and Gary Marcus all testified to Congress that Section 230 does not apply to their platforms. That will be extremely hard to defend when it eventually comes before a federal judge.