
I’m not a very conspiracy-minded person, and this comment is mostly aimed at the Sam Altmans of the world, but when people talk about AI harms, especially harms in the “risk to all human life” category, I’m increasingly convinced that it’s an intentional misdirect away from the very real harms that are happening in front of us - right now.

The harm conversation needs to be refocused on these less sexy but nevertheless real emerging problems.

As these tools make their way into more and more aspects of life, I can’t help but feel like new laws need to exist so that a “don’t use this for xyz high risk purpose” warning actually has teeth.



In this specific example, police coercing witnesses or relying on shaky evidence in lineups has long been a known problem, one that has led to a lot of false convictions. This just has the added wrinkle of "AI" giving it more credibility than it deserves. You can see, even in this story, that the victim tried to say he wasn't really sure, and they basically ignored him. They aren't trying to be "right" or catch the right guy. Someone goes to jail, case solved. If you're wrong, let the courts shake it out. You can also probably make more insidious assumptions about the type of people who typically end up in jail from this stuff and all the perverse incentives there.


> but when people talk about AI harms, especially harms in the “risk to all human life” category, I’m increasingly convinced that it’s an intentional misdirect away from the very real harms that are happening in front of us - right now.

I'm not sure that it's _intentional_; a lot of people are deep into the "superhuman AIs" thing, in a quasi-religious way. But certainly the approach to "AI safety" is far too much "what if a sci-fi thing happens?" and not enough "what if people take the output of the spicy autocomplete seriously?" It's really mostly a human-factors problem, at least for now; these things are being used completely inappropriately (and of course the companies making them have an interest in that; constraining them to appropriate low-risk uses would make them useful approximately nowhere).



