I'm going to talk about a counterexample that happened in real life, where a journalist was put on a "kill list" by the US government. If you haven't read the entire [Drone Papers](https://theintercept.com/drone-papers/), I highly recommend doing so.

The government had a naive likelihood-analysis program that attempted to identify terrorists. If someone went to the same places as known terrorists, on roughly the same schedule, the system concluded they were probably a terrorist. In programming we use the duck metaphor: "if it walks like a duck and quacks like a duck, it's probably a duck."
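To make the naivety concrete, here's a minimal sketch of that kind of co-location scoring. Everything here is invented for illustration (the function name, the data, the scoring rule) - the real system is classified - but it captures the flaw: the score can't distinguish *why* two people share locations.

```python
# Hypothetical co-location scorer: what fraction of a person's
# (place, hour) observations overlap with known suspects' observations?
def suspicion_score(person_visits, suspect_visits):
    """Return overlap fraction; higher means 'walks like a duck'."""
    if not person_visits:
        return 0.0
    overlap = person_visits & suspect_visits
    return len(overlap) / len(person_visits)

suspects = {("market", 9), ("compound", 14), ("mosque", 18)}

# A journalist interviewing suspects visits the same places at the
# same times -- the score flags them even though the reason is innocent.
journalist = {("market", 9), ("compound", 14), ("hotel", 21)}
print(suspicion_score(journalist, suspects))  # 2/3 -- high, for the wrong reason
```

The scorer has no input for intent, so a journalist doing their job and a courier doing a terrorist's errands look identical to it.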

What occurred was that a journalist who was interviewing these terrorists ended up on the kill list. In theory, a human element was supposed to filter out such cases - and I'm sure that's what the developers of the system intended. But it's also clear the human operators were simply rubber-stamping everything the system produced, and they allowed a kill order on a journalist to go out.

And if you're evaluating types, then our duck solution is just fine. If you're firing Hellfire missiles at unarmed people, then we probably need to take a step back. Predictive algorithms and network analysis are naive - they're fine for recognizing pictures of dogs and cats, or pinging a friend when you upload a photo with their face. They're not sufficient for policing or resource allocation, especially since algorithms reproduce the exact same biases as their input data.
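The duck-typing case really is fine, which is what makes the analogy so seductive. A quick sketch (class names are mine, chosen to match the story):

```python
# Duck typing: any object that responds like a duck is treated as one.
class Duck:
    def quack(self):
        return "quack"

class Journalist:  # not a duck, but responds identically
    def quack(self):
        return "quack"

def sounds_like_a_duck(thing):
    return thing.quack() == "quack"

print(sounds_like_a_duck(Duck()))        # True
print(sounds_like_a_duck(Journalist()))  # True -- harmless in a type system
```

Misclassifying a `Journalist` as a `Duck` costs nothing in a program; the same classification logic applied to a kill list is a different matter entirely.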

System designers, including myself, often expect the human elements to act as reliably as the algorithmic components. The truth is that humans are lazy, biased, and easily prone to Pavlovian conditioning ("the system showed me a dialog; when I click OK, the thing I want to happen happens").

The other truth is that these systems are frequently too complicated for most people to grasp, even when they are honest - they show likelihood percentages, their flaws have been explained to their operators, they were built with the best available data. Most folks can't schedule a meeting across five calendars without assistance, or get email marketing right. Do we really expect them to grasp the nuances of a statistical network-analysis model and correctly interpret its results, over and over again?

The solution is that we need to stop pretending an algorithm is a replacement for human decision making in these life-or-death cases.


