Yeah, I tend to agree. A lot of the talk about "AI safety" is premised on the idea that LLMs are godlike genies that need to be persuaded to only do good things for good people (and we're the good people!). In reality, LLMs are (for now at least) just tools. Within certain limits, it shouldn't be up to a tool manufacturer to decide who is a good person using a tool for a good reason and who is a bad person using it for a bad one. Obviously, there are limits, and you shouldn't sell a gun to an angry drunk, but there should be pretty broad discretion to blame abuse on the abuser rather than the tool maker, unless the abuse is a predictable outcome of the interaction.