
ChatGPT and Bing Chat aren't really trying to be safe. They're trying to avoid liability for their owners. AI as it stands, even with the guardrails, is plenty dangerous in the hands of people who know how to wield it (for propaganda, manipulation, hacking, accelerating malicious efforts, etc.). It's like giving chimps machine guns.

Another issue with AI is the feedback loop. If you tell an AI "help me end my life" and it follows your instructions blindly, it'll end up convincing you to go through with it, as reportedly happened to a young family man recently, and maybe to others we haven't heard about.

Existing art has no feedback loop. Movies are unlike AI in that respect: they don't follow orders, they just exist as immutable artifacts of human expression from the moment they were created. So to me it's different.

Oh, and you'll soon be able to cook up a model at home anyway, so all these AI guardrails will be irrelevant in the mid term.



