Sure, today at least. But there is a future where the human has given AI control of things, with good intention, and the AI has become the threat.

AI is a tool today; tomorrow, AI will be calling the shots in many domains. It's worth planning for tomorrow.



A good analogy might be a shareholder corporation: each one began as a tool of human agency, and yet a sufficiently mature corporation has a de-facto agency of its own, transcending any one shareholder, employee, or board member.

The more AI/ML is woven into our infrastructure and economy, the less possible it will be to find an "off switch", any more than we can (realistically) find one for Walmart, Amazon, etc.


> a sufficiently mature corporation has a de-facto agency of its own, transcending any one shareholder, employee, or board member.

No, the corporation has an agency that is a tool of particular humans who are using it. Those humans could be shareholders, employees, or board members; but in any case they will have some claim to be acting for the corporation. But it's still human actions. Corporations can't do anything unless humans acting for them do it.


Any instance of an individual person, at any level, deviating from the mandate of the corporate machine is eventually removed from the machine: a CEO who puts the environment before profit without tricking the machine into thinking it's a profit-generating marketing move; an engineer refusing to implement a feature they feel is unethical; a call center employee deviating too long from script to help a customer.

All are human actions. "Against corporate policy." Go ahead, exercise your free will. As a shareholder, an employee, hell as CEO. You will find out how much control a human has.


It's still humans who make the decision to let “AI” call the shots.


Sure, but that's the gist of AI X-risk: this is one of those few truly irreversible decisions. We have one shot at it, and if we get it wrong, it's game over.

Note that it may not be immediately apparent that we got it wrong. Think of a turkey on a stereotypical small American farm. It sees itself living a happy and safe life under the protection of its loving Human, until one day, for some reason completely incomprehensible to the turkey, the loving Human comes and chops its head off.


Hence "with good intentions".


> there is a future where the human has given AI control of things, with good intention, and the AI has become the threat

As in, for example, self-driving cars being given more autonomy than their reliability justifies? The answer to that is simple: don't do that. (I'm also not sure all such things are being done "with good intention".)


> The answer to that is simple: don't do that.

This is also the answer to over-eating, and to the dangers of sticking your hands in heavy machinery while it's running.

And yet there's an obesity problem in many nations, and health-and-safety rules are written in blood.

What you say up-thread is, in itself, correct:

> I wish more people grasped this extremely important point. AI is a tool. There will be humans who misuse any tool. That doesn't mean we blame the tool. The problem to be solved here is not how to control AI, but how to minimize the damage that bad acting humans can do.

Trouble is, we don't know how to minimise the damage that bad-acting humans can do with a tool that can do the thinking for them. Or even whether we can. And that's assuming nobody is dumb enough to put the tool into a loop, give it some money, and leave it unsupervised.


Firstly, "don't do that" probably requires some "control" over AI with respect to how it's used and rolled out. Secondly, I find it hard to believe that rolling out self-driving cars was a play by bad actors: there was a perceived improvement to the driving experience in exchange for money, which feels pretty straightforward to me. I'm not in disagreement that it was premature, though.


I'd rather address our reality than plan for someone's preferred sci-fi story. We're utterly ignorant of tomorrow's tech. Let's solve what we know is happening before we go tilting at windmills.


WHY on earth would we let "AI systems" we don't understand control powerful things we care about? We should criticize the human, politician, or organization that enabled that.


Why? Because the man-made horrors beyond mortal comprehension seem to bring in the money, so far. Because the society we're in is used to mere compensation and prison time being suitable consequences of poor decisions that lead to automation exploding in people's faces (literally or metaphorically), not of things that can eat everyone.

And then there are the cases of hubris where people only imagine they understand the powerful thing, but they don't: Chernobyl exploding, and basically every time someone is hacked or defrauded.


WHY on earth would a frog get boiled if you slowly increased the temperature?



