
I think of it more like "Corporation X making whole species extinct because it has better uses for rainforests" not like "Skynet hates people". But the end result is the same.

Humanity doesn't hate polar bears. We just love burning oil more than we love them. Polar bears die as an unintended side effect. We might even be sad about that. We might even keep a few alive in zoos. But we won't change our whole economy to save a few cute bears.

Of course these kinds of threats will start to appear only when it's smart enough. The main problem with AGI is that we probably won't know until it's too late, because of how fast this technology develops once it can improve itself.

Can I ask you another question - what do you think will happen?

1. It won't get smarter than us.
2. It will care about us.
3. We will somehow keep it in check despite the fact that it's smarter than us.

Because it seems to me that most people still intuitively think (1), because anything else is "too sci-fi". And even if they become persuaded that (1) is no longer certain, they don't update the rest of their beliefs with that new information. They still believe AI is safe, like they did when (1) was assumed true, because they haven't updated the cache - or maybe they don't even realize there's a dependency somewhere in their train of thought that needs to be updated.

This is what living in a world undergoing a singularity will be like, BTW: you can't think one thought through to the end without realizing the assumptions might have changed since the last time you thought about it. So you go down a level and realize the assumptions there are also changing. And so on.


