> there is a future where the human has given AI control of things, with good intention, and the AI has become the threat
As in, for example, self-driving cars being given more autonomy than their reliability justifies? The answer to that is simple: don't do that. (I'm also not sure all such things are being done "with good intention".)
This is also the answer to over-eating, and to the dangers of sticking your hands in heavy machinery while it's running.
And yet there's an obesity problem in many nations, and health-and-safety rules are written in blood.
What you said up-thread is, in itself, correct:
> I wish more people grasped this extremely important point. AI is a tool. There will be humans who misuse any tool. That doesn't mean we blame the tool. The problem to be solved here is not how to control AI, but how to minimize the damage that bad acting humans can do.
Trouble is, we don't know how to minimise the damage that bad acting humans can do with a tool that can do the thinking for them. Or even if we can. And that's assuming nobody is dumb enough to put the tool into a loop, give it some money, and leave it unsupervised.
Firstly, "don't do that" probably requires some "control" over AI in the respect of how it's used and rolled out. Secondly, I find it hard to believe that rolling out self driving cars was a play by bad actors, there was a perceived improvement to the driving experience in exchange for money, feels pretty straight forward to me. I'm not in disagreement that it was premature though.