There are a couple of differences between an AI working toward something harmful and a human doing the same:

- If an AI can self-replicate or otherwise scale itself up, it can work on something many times in parallel. One billion AIs working on a deadly virus is different from one rogue scientist working on one.

- On that note, if an AI replicated enough, it could become impossible to catch or stop. A single human can be hard to catch, but we usually catch them eventually.

- Most humans are deterred from doing harmful things by the threat of incarceration, death, or social isolation, or by the values they hold. An AI may have none of those constraints, and so could act more brazenly.

- Potentially, an AI could be better at certain tasks than any human. Maybe ChatGPT turns out to be a very effective social engineer, or a very effective propagandist. I don't think we really know yet what the capabilities are.

All of these are why I think it's important to ask the question: what would it try to do, and what could it do, if it were let loose?
