
I was trying to be generous! But you are of course right, in the better analogy Hawking would find himself born into a feline-centric world and face an even harder task.


He’d have to be able to communicate with the cat for it to be a worthwhile metaphor for the AI-in-a-box case and humans.

If he can do that, escape requires only one weak-link human over an infinite time horizon.

Humans are already terrible at preventing breaches in far dumber circumstances, without something smarter than them trying to engineer one.


You're kind of falling into the fallacy this point is criticizing. You can't assume anything about communication between entities that are orders of magnitude apart in intelligence.

I have a profound understanding of my puppy's psychology, motivations, and capabilities; I even exercise complete control over her physical environment, and yet she ate my Fitbit strap (again, goddammit!) as I was typing this very comment.


You're falling into the trap of assuming that one specifically constructed case, where a more intelligent being fails to control a less intelligent one, somehow proves the less intelligent side can always fight back and things will be OK.

It's like arguing that humans, with all their intelligence, are still useless at controlling the situation when confronted with a tiger in the wild. So clearly the tigers should relax and stop thinking about human-alignment. Tigers can always just switch off the smart humans if they were to try something. The part where this falls apart is "always."

It is terrifying to think that we might one day be puppies to AI as puppies are to us. It is not reassuring at all that we might be able to make things harder for the AI in question every now and again by nibbling at them.

Same goes for the Emu War. I'd like to not be the emu one day to AI. Even if the AI struggles completely and hilariously to control me, there's a huge power differential here. I'd much rather be the human failing to kill emus and facing my own embarrassment than the emu fighting guerrilla-style for its life.

I don't get why so few have pointed out how weak these specific arguments for "don't sweat AGI risk" are.


If it's a human-made interface, it's reasonable to predict we'd build it so we can communicate with it and/or give it instructions (your puppy didn't make you). It doesn't appear out of nowhere, and current capabilities already look like this.

The delta between a puppy and a human is also a lot smaller than the delta between a human and a superintelligent AGI, and humans can effectively train dogs.



