A lot of this discussion reminds me of the book Blindsight.
Something doesn't have to be conscious or intelligent to harm us. Simulating those things effectively can be almost indistinguishable from a conscious being trying to harm us.
I never asserted that they couldn't do harm. I asserted that they don't think, and therefore cannot intend to do harm. They have no intentions whatsoever.
If a person causes harm, intent matters a lot to us. We distinguish between manslaughter and first- and second-degree murder, and we add hate-crime penalties on top when the victim was targeted for a specific set of recognized reasons. ML models aren't AGI, so it's not clear how we'd apply any of that, but there's precedent for intent mattering.