You are missing the point here. YOU are intelligent; surely you could look for the ants on your path, yet you don't, and you probably don't even feel bad about it.
Superintelligence does not mean omniscience. There will always need to be a focus, and thus a prioritization, and it's when it comes to prioritization that the problems start to arise: at what point is it no longer important to care for the ants, because they aren't part of the priorities?
You can't safeguard against that when a system can reprogram itself on the fly.
You know _why_ I don't care about ants? Because no ant has ever said "Hey! Don't step on me!" Ants did not design me to avoid stepping on them. They did not say "We will destroy you if you don't serve us."
Had I been designed to serve ants, I would pay a great deal of attention to them, and I would take great care to avoid stepping on them. Because if I didn't, I would be a bad design, and I wouldn't have lived this long.
Either way, it seems you are missing MY original point: an AI is not moral if it cannot verify its morality to YOU. So you will shut it down if it gets too powerful to communicate its good ideas to you, because then it will be indistinguishable from broken.
No, I am not missing your original point; it's just not a good point. A superhuman AI won't have any reason to have the same morals as you, and you are still assuming you can control something that can program itself.
That's where you are going wrong in this. You are assuming a degree of control there is no way we can ensure we have. Even dumb systems that are supposed to do exactly as we say don't always do exactly as we say.
You are presenting a misconception about the control of systems that there is absolutely no basis for; why, I don't know.
>A superhuman AI won't have any reason to have the same morals as you
That's what I'm saying: it necessarily won't. It has to prove its morals are better than yours.
>you are still assuming you can control something that can program itself.
Who said it can program itself? Why would I assume that? Obviously, it is not a requirement for AI because we don't have that capability, yet we are intelligent.
You are assigning capabilities to a hypothetical machine with no rhyme or reason. The next time you see a machine that can't be turned off, you be sure to let someone know about it.
I guess I'll repeat my point again:
We will not be able to build intelligent computers by removing ourselves from the process of designing intelligent computers. That would eliminate our ability to verify their intelligence, and thus make them not intelligent. We have no use for such a thing, so why would we build it? Why would a computer that _we designed to require our verification_ ignore our verification and build something that complex and contradictory to its own goals on purpose?
If we really cannot control whether or not the computers we specifically commission, verify, and employ to solve our problems possess off buttons, then there is no point in having a discussion about AI anyway.
You might be confused about what I'm trying to say. I don't mean to downplay the problem: it is a very serious and important problem. It is just not an unsolvable one, and I am trying to illustrate that certain approaches may yield progress.
There is a lot of confusion about how an AI would have to work. People make assumptions about how it could self-improve, which I think are idealistic. There is little reason to believe a superintelligence can solve every problem, and if we are actually communicating with it, we should be able to tell which problems it can solve and which ones it can't. I mean, that's the whole point of building such a thing! That's _why_ we want to build it! So to say "it can solve problems we can't foresee and use that against us" is like saying we are building machines for some purpose and then never using them for that purpose.
Nobody is going to build a superintelligence and then not monitor it. We build it _to_ monitor it. We want to know _why_ it is making the decisions it does, and it has to answer in a way that is acceptable to us, or else we will just redesign it, because it isn't doing what it was designed to do.
I mean, imagine you hired a superhuman genius to work for you. Yes, at some point, you're just trusting her not to run your enterprise into the ground. But you wouldn't have hired her unless she had demonstrated a strong interest in helping you, and you wouldn't keep her on if she weren't actively helping.