Is "network analysis" a better term for what this article describes? Seriously, if taking a deep, hard look at connections among alleged drug dealers
and gang members is bad, what would be better?
I think the issue is whether the improvements in both deep analysis and real-time situational awareness constitute an infringement on personal liberty.
My understanding of the Palantir law enforcement use case is that, for example, a gang unit police cruiser going down the street in a bad neighborhood can pick up data from its license plate readers and be automatically alerted if any passing vehicle matches certain criteria.
The plain-vanilla tech solution for law enforcement would be something along the lines of: throw a notification if any of the passing vehicles is registered to someone with outstanding warrants.
Palantir takes it a step further and throws a notification if, say, the vehicle is registered to a known associate of a gang member who is wanted for questioning. This network understanding has to be built on the backend, both by integrating existing data sources and by doing work directly on the Palantir system.
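To make the distinction concrete, here's a minimal sketch of the two alerting rules in Python. Everything in it (the data structures, the names, the one-hop "known associate" lookup) is invented for illustration; Palantir's actual backend is not public.

```python
# Hypothetical sketch only -- registrations, warrants, associations, and
# wanted_for_questioning stand in for integrated backend data sources.

registrations = {"ABC123": "alice", "XYZ789": "bob"}  # plate -> registered owner
warrants = {"alice"}                                  # owners with outstanding warrants
wanted_for_questioning = {"carol"}                    # e.g. a gang member sought for questioning
associations = {"bob": {"carol"}}                     # known-associate graph

def vanilla_alert(plate):
    """Plain-vanilla rule: flag only if the registered owner has a warrant."""
    return registrations.get(plate) in warrants

def network_alert(plate):
    """One step further: also flag if any one-hop associate is wanted."""
    owner = registrations.get(plate)
    if owner is None:
        return False
    if owner in warrants:
        return True
    return any(a in wanted_for_questioning for a in associations.get(owner, ()))

print(vanilla_alert("XYZ789"))  # False: bob himself is clean
print(network_alert("XYZ789"))  # True: bob is a known associate of carol
```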
Eventually the cops might start following the vehicle and pull the driver over for a nominal traffic infraction. At that point, the officer is going to use the Palantir app on their department-issued mobile phone to assist in the discussion with the driver -- for example, if the driver coughs up any names, the officer can type them in to see what comes up in the system.
I'd think that Palantir's defense of this would be something along the lines of, "everyone in this story previously had the right to see every bit of information the system provided, we just made it more efficient". The counterargument would be something along the lines of, the state having such detailed understanding of its citizens' personal lives is ipso facto tyranny.
The ability of the government to temporarily detain and question essentially anyone at any time through traffic stops for minor violations seems like a bigger problem. It seems weird to try and restrain abuse of that power by keeping the police under-informed rather than just eliminating that power directly.
Why do you assume this is informing police? Imagine police see your license plate and the Palantir app shows up and says "crime risk score: 83" and then the police manufacture a reason to pull you over. Are the police more informed, or are they just working for Palantir now?
Limiting police power is a tough political goal. Right now police can literally shoot you dozens of times with their personal assault rifle that has "fuck you" engraved on it while you are lying on the ground with your hands on your head begging for your life, then get caught planting a gun on your body and face zero repercussions. And in discussions about this a huge number of people support the cops - "you never know, he could have been armed, we can't ever suggest a cop behaved badly because that would make other cops less willing to kill at a moment's notice" are real arguments people put forward. If you want to stop them from being able to pull people over for no reason so easily that sounds great but that is a HUGE undertaking.
There are defenses against this, but they're not realistically within reach for most people. If a police officer starts interrogating me about something outside the scope of a traffic stop:
1. I know enough to stop answering questions
2. I have a lawyer I can call and ask to get involved
Both 1 and 2 are, however, not choices for the vast majority of people.
Your two points are really good. Before you're under arrest, you may want to try #1, but what may ensue is a charge of failure to obey orders, resisting arrest, or disorderly conduct. Don't ask me why or how; it just kind of happens, and before you know it you're under arrest. Now that you're under arrest, your 5-minute traffic stop has transformed into an ordeal. For some people asserting their rights is worth it; others just want to avoid the hassle, and you can't blame them.
Cameras in your vehicle, and handing over your attorney's business card along with your driver's license, are good deterrents that I highly recommend. Everyone has a plan until they get punched in the mouth.
Your point here is correct. A very common tactic is for police to make a false claim - which is legal - such as "If you don't cooperate with me, I'm going to arrest you and charge you with X". The police are absolutely allowed to lie to you, including about the law. Likewise, they will use your reluctance to cause a fracas against you to extract easy answers.
They may even, as you say above, move to detain you and bring you in. Again, actually having a lawyer here is what's important, and again, usually not something most people can afford.
The word "alleged" certainly concerns me w.r.t state surveillance, especially in the hands of law enforcement.
Connecting dots between convicted criminals is one thing, predicting crime based on associations with people who are not criminals is a whole other ballgame.
A predictive model isn’t always trying to predict the future. It can also be used to predict the present—to find hidden connections in (perhaps maliciously) incomplete data.
In this case, that’s to say: if an unnamed gang member committed a crime, but all the known members of said gang have alibis... maybe you should be looking for an unknown member of said gang. And who’s likely to be secretly in a gang? Well, someone whose friends are all in said gang would be a good first guess.
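As a toy illustration of that "good first guess", here's roughly what a neighbor-based score could look like. The contact graph and names are made up; a real system would infer the edges from calls, co-arrests, field interviews, and so on.

```python
# Invented toy graph: person -> set of known contacts.
contacts = {
    "dave": {"g1", "g2", "g3"},
    "erin": {"g1", "priest", "boss"},
}
known_gang_members = {"g1", "g2", "g3"}

def gang_affinity(person):
    """Fraction of a person's known contacts who are known gang members."""
    friends = contacts.get(person, set())
    if not friends:
        return 0.0
    return len(friends & known_gang_members) / len(friends)

print(gang_affinity("dave"))  # 1.0 -> the "good first guess" for the unknown member
print(gang_affinity("erin"))  # 0.33...
```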
What about outreach workers? How many of them are going to have connections to junkies, homeless people and gang members?
Sincere, good Christians often believe in trying to help the unfortunate. Some good Christians spend all their time surrounded by clean-living churchgoers. Others spend a whole lot of time surrounded by folks who are from the wrong side of town and the wrong side of the law.
The point of these analyses isn’t to say “arrest this man he’s a criminal” but rather “closely observe this man and ascertain whether he’s a gangbanger or an outreach worker.” Of course it doesn’t even need to get that far since it’s trivial to tell the two apart from the network analysis alone (the outreach worker will have a far higher ratio of legitimate-world contacts to criminal-world contacts than the gangbanger).
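For what it's worth, the ratio test being claimed there is simple to state in code. The contact counts below are invented, and the hard part a real system would face (deciding what counts as a "legitimate-world" versus a "criminal-world" contact) is assumed away entirely.

```python
def contact_ratio(legit_contacts, criminal_contacts):
    """Ratio of legitimate-world to criminal-world contacts (invented metric)."""
    return legit_contacts / max(criminal_contacts, 1)

outreach_worker = contact_ratio(legit_contacts=180, criminal_contacts=40)
gang_member = contact_ratio(legit_contacts=25, criminal_contacts=60)

print(outreach_worker)  # 4.5 -> the "far higher ratio"
print(gang_member)      # ~0.42
```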
And you don't think law enforcement agencies will quickly make the leap from "closely observe" to "harass and arrest for any tiny transgression in an effort to coerce a confession"?
Exactly. Instead of doing their job manually, or double-checking system output, they begin to take what the software package says as truth. Whether it's just human nature, or overwork, or whatever, it'll happen.
Because law enforcement officers generally behave like people, and (like people) often take cognitive shortcuts when making certain assessments. In this case, the shortcut would be to assume that the individual has to be a criminal, and to act as such.
Now, there are steps you can take to reduce these assumptions, but those steps sit outside the program and would have to be introduced in tandem with it. Unfortunately, policy-makers often assume that any single funded idea (i.e. buying a predictive model from Palantir) is a comprehensive solution to the problem (an assumption often encouraged by the sales reps peddling whatever the 'solution' is), and fail to recognize that other programs have to be funded alongside it in order to deploy it effectively.
If the software says Bob probably did it, Johnny Law will assume Bob did it and stop looking elsewhere, precisely because Johnny Law wants to close the case. Throw an overzealous DA in the mix, and it sucks to be Bob.
That's just not how it works. Eventually you have to present actual evidence in an actual court and it won't be "this software you never heard of said so".
The word ’predict’ gets used a lot and is sufficiently charged that it causes much Minority Report-style angst and gnashing of teeth, but really a better term would be ’uncover’. The whole investigative process is about uncovering concealed connections between people and events, so this is just a new technique. It isn’t inherently Orwellian either: there’s still a wide chasm of human investigators, prosecutors, judges, and juries between any kind of ’positive’ and actual consequences.
Well first of all, any time a private corporation does secret private proprietary unreviewable analysis on members of the public which is then used by law enforcement -- this is a terrifying, horrible dystopian miscarriage of justice on its own. People (i.e. the politicians who signed off on this) should go to jail for doing this. Police should be accountable to the public, allowing them to dodge this by using private entities who are not accountable to do their dirty work is quite simply criminal corruption.
Add on top of that it's being done by a company named after a mass surveillance device used for evil in a fantasy story.
And on top of that it's being done by a Thiel company. Thiel, who is nearly a perfect personification of evil: he has made very explicit, candid public statements about how he opposes the idea of democracy itself; he does not think women should have the right to vote.
And on top of that it was being done without the knowledge or consent of nearly anyone in the city.
And finally, the justice system presumes innocence. The Palantir system does the opposite -- it makes wild, arbitrary untrackable inferences that suggest guilt for people without any real evidence. It is quite literally nothing more than a very thin shield police can use to justify harassing and intimidating the "kind of people who tend to be criminals", which in this case is not that at all -- it is "the kind of people who tend to get caught and prosecuted for crimes" i.e. only violent or drug crimes (except drugs white people use), only poor criminals, only minority criminals.
Where's the massive computer analysis system that looks for wage theft committed by employers? This is after all how the vast majority of wealth is stolen in the US, citizens could recover billions of their own money if it were stopped. Where's the computer system working with law enforcement to automatically detect any insider trading? Why don't we monitor the behavior of people in finance to detect cocaine use and then send in the SWAT teams? What about a computer system that detects bad prosecutors?
You are misreading his comments about women and democracy. He was simply saying that certain demographics aren't receptive to libertarians - he was not saying that women shouldn't have the right to vote and that democracy is bad. I daresay it's impossible to read his full writing in question and come to the conclusion one might get from simply reading an excerpt of two sentences that the Politico article wants you to see.
" It would be absurd to suggest that women’s votes will
be taken away or that this would solve the political
problems that vex us."
"I believe that politics is way too intense. That’s why
I’m a libertarian. Politics gets people angry, destroys
relationships, and polarizes peoples’ vision: the world
is us versus them; good people versus the other."
His essay here is pretty blunt about how he is opposed to democracy. He outlines ways to escape it (he sometimes calls it "politics" in the essay and uses the two words essentially interchangeably, because they are the same): seasteading, techno-libertarian cyberspace, etc., where capital can rule unconstrained by the desires, needs, interests, and votes of its subjects.
I was incorrect about his views on women, though, thank you for the correction I retract that completely. He simply sees women voting as a problem because they don't vote the way he wants (they are not by and large wealthy and powerful so do not vote exclusively for the interests of these constituencies, like Thiel) -- and uses this as an excuse to throw out voting altogether. I'm not sure that that's better but my characterization was inaccurate.
Mostly sad. Sure, there's truth to the idea that your peers affect your decisions and options in life. Being branded for arrest or harassment by the police does not help...
I'm going to talk about a counter case that happened in real life, where a journalist was put on a "kill list" by the US Government. If you haven't read the entire [Drone Papers](https://theintercept.com/drone-papers/) I highly recommend doing so.
The government basically had a naive likelihood analysis program that attempted to group terrorists. If someone went to the same places as terrorists, in about the same schedule, then the system came to the conclusion they were a terrorist. In programming we use the duck metaphor - "if it walks like a duck and quacks like a duck, it's probably a duck."
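Based on the public reporting, a crude version of that scoring might look like the sketch below: represent each person as a histogram of (place, hour) visits and flag anyone whose histogram is too similar to a known terrorist's. The real system is classified, so all of this is illustrative guesswork.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a, b):
    """Similarity of two visit histograms; 1.0 means identical movement patterns."""
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented (place, hour-of-day) visit counts.
known_terrorist = Counter({("safehouse", 21): 10, ("market", 9): 3})
journalist = Counter({("safehouse", 21): 8, ("market", 9): 2, ("newsroom", 14): 5})

# Same places, same schedule -> high score, regardless of *why* they were there.
print(cosine_similarity(known_terrorist, journalist))  # ~0.85 -> flagged as a "duck"
```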
What occurred was that a journalist who was interviewing these terrorists ended up on the kill list. In theory the human element was supposed to filter out such things - and I'm sure that's what the developers of the system intended. But it's clear the human operators were simply rubber-stamping everything the system produced, and allowed a kill order on a journalist to go out.
And if you're just evaluating types in code, then our duck solution is fine. If you're discussing firing Hellfire missiles at unarmed people, then we probably need to take a step back. Predictive algorithms and network analysis are naive: they're fine for recognizing pictures of dogs and cats, or pinging a friend when you upload a photo with their face. They're not sufficient to apply to policing or resource allocation, especially since algorithms display exactly the same biases as their input data.
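A toy simulation of that last point, with invented numbers: if historical arrest data over-represents one neighborhood, a model that allocates patrols in proportion to past arrests will keep generating more arrests there, even when the underlying crime rates are identical.

```python
true_crime_rate = {"north": 0.10, "south": 0.10}  # identical by construction
arrest_counts = {"north": 50, "south": 10}        # but "north" was historically over-policed

for year in range(3):
    total = sum(arrest_counts.values())
    # "Predictive" allocation: patrol proportionally to past arrests.
    patrols = {n: c / total for n, c in arrest_counts.items()}
    # New arrests scale with patrol presence, not with the (equal) crime rates.
    for n in arrest_counts:
        arrest_counts[n] += int(1000 * patrols[n] * true_crime_rate[n])
    print(year, arrest_counts)  # the initial skew persists and compounds
```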
System designers, including myself, often expect human elements to act as reliably as algorithmic components. The truth is that humans are lazy, biased, and easily prone to Pavlovian conditioning (the system showed me a dialog; when I click OK, the thing I want to happen happens).
The other truth is that frequently these systems, even if they are honest - they show likelihood percents, their flaws have been explained to their operators, they were made with the best available data - are too complicated for most people to grasp. Most folks can't schedule a meeting across five calendars without assistance or get email marketing correct. Do we really expect them to grasp the nuances of a statistical network analysis model and correctly interpret the results on a repeated basis?
The solution is that we need to stop pretending an algorithm is a replacement for human decision making in these life or death cases.