(IMO) AI cannot murder people. The responsibility for what an AI does falls on the person who deployed it, and to a lesser extent on the person who created it. If someone is killed by a fully autonomous weapon, then that person has been murdered by the person or people who created and enabled the AI, not by the AI itself.
This is no different to saying a person with a gun murdered someone rather than attributing the murder to the gun. An AI gun is just a really fancy gun.
There will come a time when complex systems can be predicted better with AI than with conventional mathematical models. One use case could be feeding body scans into them for early cancer detection. AFAIK this is already being researched.
There may come a time when we grow so accustomed to this, and decisions become so heavily influenced by AI, that we trust it more than human judgment.
And then it can very well kill a human through misdiagnosis.
I think it is important not to just put this thought aside, but to evaluate all the risks.
> And then it can very well kill a human through misdiagnosis.
I would imagine outcomes would be scrutinized heavily for an application like this. There is a difference between a margin of error (which exists with human doctors as well) and a sentient AI that has decided to kill, which is what it sounds like you're describing.
If we didn't give it that goal, how would it obtain it otherwise?
Except that with a gun, you have a binary input (the trigger), so you can squarely blame a human for misunderstanding what they did when they accidentally shot someone on the grounds that the trigger didn't work.
The mass murder of Palestinians is already partially blamed on, or credited to, an "AI" system that could identify people. Humans spent seconds reviewing its output. This is the reality of AI already being used to assist in killing. AI can't take the blame legally speaking, but it makes it easier to make the call and sleep at night: "I didn't order a strike on this person and their family of eight; the AI system marked this subject as a high-risk, high-value target". Computer-assisted dehumanization. (Not even necessarily AI.)
> This is no different to saying a person with a gun murdered someone rather than attributing the murder to the gun.
And “guns don’t kill people, people kill people”¹ is a bad argument created by the people who benefit from the proliferation of guns, so it’s very weird that you’re using that as if it were a valid argument. It isn’t. It’s baffling anyone still has to make this point: easy access and availability of guns makes them more likely to be used. A gun which does not exist is a gun which cannot be used by a person to murder another.
It’s also worth noting the exact words of the person you’re responding to (emphasis mine):
> It can also murder people, and it will continue *being used* for that.
Being used. As in, they’re not saying that AI kills on its own, but that it’s used for it. Presumably by people. Which doesn’t contradict your point.
We also choose to have cars, which cause a certain amount of death. It's an acceptable tradeoff (which most don't think about much). I'd speculate that it's mostly people who don't use cars who criticize them the most, and the same with guns.
That’s an absurd comparison, to the point that I’m having trouble believing you’re arguing in good faith. The goal of cars is transportation; the goal of guns is harm. A car causing a death is an accident; a gun causing a death is the tool working as designed. Cars continue to be improved to cause fewer fatalities; guns are improved to cause more.
> I'd speculate that it's mostly people who don't use cars who criticize them the most, and the same with guns.
You mean that people who are opposed to something refuse to partake in its use and promotion? Shocker.
Yes, but a person wielding a knife has morals, a conscience, and a choice; the fear is that an AI model does not. A lot of killer-AI science fiction boils down to "it is optimal and logical that humanity needs to be exterminated"; no morality or conscience involved.
Which is why there are laws about which knives are allowed and which are banned. Or how we design knives to be safe. Or how we have a common understanding of what we do with knives, and what we don't, such as not giving them to toddlers... So what's your point?
The point is not the tool but how it's used. "What knives are allowed" is a moot point because a butter knife or letter opener can be used to kill someone.
But if you give a very sharp knife to a toddler and say "go on, have fun" and walk off, you're probably going to face child endangerment charges at some point.