We are already there! Humans misrepresent data on purpose in the media all the time in order to create misinformation and sway opinions. If anything, AI creates deniability: "Oh, our so-and-so used GPT, whoops!"...
>We are already there! Humans misrepresent data on purpose in the media all the time in order to create misinformation and sway opinions.
That isn't really the same thing. I was thinking of something more along the lines of "AI-designed car randomly explodes" or "AI doctors misdiagnose patients and mix up prescriptions." The AI equivalent of the Therac-25[0]. Something a lot worse than "sometimes the media lies," which we already know, and something explicitly the fault of AI confabulation and of automating humans out of what should be necessary stopgap or validation steps for the sake of streamlining or cost-cutting.
>If anything, AI creates deniability: "Oh, our so-and-so used GPT, whoops!"...
Luckily, that doesn't seem to be the case. In every story I've seen where people have tried to blame the AI, they didn't get away with it, because the AI is just a tool and the human beings are still legally responsible. People still try, though.
You mean like AI-assisted profiling of minorities? Or AI-assisted drone strikes? We are already there. Just not for the comfy, cushy Western peeps. Oh wait, that Republican self-drove her Tesla into a lake and died, so yeah, already there.