I agree with that to an extent, but how far should AI model developers go with it? If I ask for advice on, let's say, making custom chef's knives, should the AI also advise me not to stab people? Who decides where to draw the line?
We should all get to decide, collectively. That's how society works, even if imperfectly.
Someone died who didn't have to. I don't think it's specifically OpenAI's or ChatGPT's fault that he died, but they could have done more to direct him toward getting help, and could have stopped answering questions about how to commit suicide.
How would we decide, collectively? Because arguably that's what we've already done: we elected the people currently regulating (or not regulating) AI.