
> refuse to allow AI to insult (or offend) any person but would be okay if AI took part in killing them

Liability and regulatory scrutiny are factors. They're liable for offensive speech, but military use cases are an effective shield against liability, given that deaths are expected there.




> They're liable for offensive speech

there's nothing preventing them from implementing a toggleable filter, with a big red warning, geo-restrictions, and even age verification if you want to turn it off.

it's not legal liability that prevents them from doing that, it's ideology.


> there's nothing preventing them from implementing a toggleable filter

Which would, in all likelihood, not placate regulators. Close regulatory scrutiny is a serious obstacle for companies seeking to grow, and a pesky cost even for big ones.


that would make sense if Anthropic hadn't spent the past two years hyping up stochastic parrots as literal sci-fi Skynet nonsense, begging the nanny state to step in and regulate the industry before the unthinkable happens and one of them escapes confinement to destroy democracy



