
> Used by police across the United States for almost a decade, predictive policing relies on algorithms to interpret police records, analyzing arrest or parole data to send officers to target chronic offenders, or identifying places where crime may occur.

The promise the software makes in a sales pitch or on a website is a far cry from what it actually delivers. That's fine in business, where the cost of a mistake is a P&L hit. But here we are talking about people's lives, far stricter processes should be in place, more similar to getting approval for a new airplane or medicine. To be clear, predictive policing replaces detectives/experts with software, a kind of robo-policing where decisions to investigate are generated by a system and handed to cheaper uniformed police who lack the context behind the decision-support system they are now serving, because software becomes a black box.

Any consumer who has ever dealt with even state-of-the-art call centers knows that humans do a far better job at real-world operations.

This is a premature application of software.




> This is a premature application of software.

maybe, but you have no idea yourself either, so your fear doesn't carry enough weight to support that statement. It's basically like saying "it's too early to try Bayesian policing" when in fact Bayesian policing might be a great idea, and it was a great idea 10 years ago too.

Conditional probability: if there have been a lot of rapes within a 2-block radius, do you think it might be a good idea to look in that radius? If you try that and the crime wave continues but moves 2 blocks away, do you think there might be information to glean from that?

considering individual cops as bees in a hive: can individual bees make good Bayesian decisions about where to fly next? No, the information needs to be consolidated centrally. And who's better at Bayesian calculations, people or computers?
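
That intuition can be sketched as a toy Bayesian update over areas. Everything here is invented (area names, priors, per-incident likelihoods); it just shows the mechanics of consolidating incident reports centrally rather than in any one officer's head:

```python
# Toy Bayesian update: which area is most likely the hot spot,
# given a batch of incident reports? All numbers are made up.
priors = {"area_a": 1/3, "area_b": 1/3, "area_c": 1/3}

# Assumed likelihood of each area generating the observed incidents,
# e.g. estimated from historical reports (hypothetical values).
likelihood = {"area_a": 0.6, "area_b": 0.3, "area_c": 0.1}

def update(prior, likelihood):
    """One Bayesian update: posterior is proportional to prior x likelihood."""
    unnorm = {k: prior[k] * likelihood[k] for k in prior}
    total = sum(unnorm.values())
    return {k: v / total for k, v in unnorm.items()}

posterior = update(priors, likelihood)
best = max(posterior, key=posterior.get)
```

With equal priors the posterior just tracks the likelihoods, but when new evidence arrives (the crime wave moving two blocks) you'd re-run the update with the previous posterior as the new prior.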

I'm sure there is all sorts of defective software out there, just like there are all sorts of other lazily/cheaply produced products that don't live up to our needs. That doesn't mean it's too early to try improvements.


Yeah, but you're ignoring biases in the data that are difficult or impossible to disentangle from the underlying effects. What happens when you see an area with a 50% higher violent crime rate? Is that because there are a large number of crimes? Or is it because it's a neighborhood that is disproportionately targeted by police? How do you even attempt to control for that?

It's not that Bayesian stats can't help or can't offer insights, it's that it can easily reinforce underlying societal biases, and then it's harder to question because "it's science" and no one has to take any responsibility for its effects.


Our knowledge of crime rates doesn't come from arrest records, but rather from victim surveys. Yes, we know that certain neighborhoods/populations have higher rates of crime and not just arrests.


> Or is it because it's a neighborhood that is disproportionately targeted by police?

I'd guess violent crime mostly "comes from" people calling 911, not from officers stumbling into rapes/muggings/murders/home invasions.


I was arguing that it's not "premature" to analyze crime and policing data statistically. Even if we add in your insights--and I think it's foolish to assume crime experts haven't thought of them before, but suppose they haven't:

I would still argue that it's not "premature" to look at crime and policing data; in fact, I'd say it's high time we did it.

tl;dr you are not addressing what I was arguing

(and to everybody else here, I was not advocating Bayes' Theorem, I was using it as a synecdoche for statistical analysis. If you want to object to Bayes' Theorem, that's on you, it wasn't my point...)


no one is objecting to Bayes' Theorem here at all. I am directly addressing your argument: there is _inherent_ bias in policing data, and doing a Bayesian analysis will make it very easy to fool ourselves into trusting the results while also allowing everyone in the room to absolve themselves of taking any responsibility for the consequences of doing so. Pretending we know how to debias policing data _broadly_ (i.e. we know how to do it in every locale and in every instance that the data will actually be _used by practicing law enforcement professionals_) is a perfect recipe for disaster.


what are you arguing for? no analysis? or do you wish to change the analysis? Or, to stay on topic: why do you claim the analysis is premature?

saying "inherent bias" is very vague and has more value as a political dog whistle than anything else... unless you'd like to suggest what the biases are and point out how you have this special knowledge and other crime experts don't.

for example, what is the bias in incoming 911 calls?


> what are you arguing for? no analysis? or do you wish to change the analysis? Or, to stay on topic: why do you claim the analysis is premature?

There's a difference between a team of academics/statisticians analyzing policing and crime data and then debating the resulting conclusions, and a widespread set of predictive policing tools that are designed to be used by feet on the ground. The former is great! and important. The latter is the problem.

> saying "inherent bias" is very vague and has more value as a political dog whistle than anything else... unless you'd like to suggest what the biases are and point out how you have this special knowledge and other crime experts don't.

The entire point I'm trying to make is that it's very _difficult_ to figure out what your data biases are because usually the data is _all you have_. My point isn't that I have "special knowledge" about where the biases are. I do not have any such special knowledge. Domain experts will have more "special knowledge" and by that I mean through their experience they will have hopefully learned difficult lessons about the limitations of certain types of data available to them, but even they will not have perfect knowledge about where all of the biases in their data are.

> for example, what is the bias in incoming 911 calls?

again, I do not know! Do you? Do you feel 100% sure that the propensity to dial 911 doesn't depend on socioeconomic factors, or on the nature of the crime, or on the alleged perpetrator? Are white people more comfortable calling 911 on black or brown people than black or brown people are calling 911 at all?

You can try to correct for certain biases if you have access to all of the relevant info. In the above example, we could try to answer the question if (1) we had the actual background rate of crimes being committed, independent of 911 calls, and (2) we had all of the relevant information (race/income of caller and perpetrator, the crime type, etc.). Even then we'd miss other biases we aren't consciously trying to "correct" for. And even this is subject to lots of error (how do we get the actual baseline level of crime? hint: you can't), and will miss other factors that we haven't thought of.
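
A tiny simulation (all rates invented) makes the identification problem concrete: the same observed 911-call volume can come from very different true crime rates once reporting propensity varies, so the call data alone can't tell them apart.

```python
import random

random.seed(0)

# Two hypothetical neighborhoods with invented parameters: a high-crime
# area where few incidents get reported, and a lower-crime area where
# most do. Only the calls are observable; true_crimes is hidden.
neighborhoods = {
    "high_crime_low_reporting": {"true_crimes": 200, "report_rate": 0.3},
    "low_crime_high_reporting": {"true_crimes": 100, "report_rate": 0.6},
}

def observed_calls(true_crimes, report_rate):
    # Each crime independently generates a 911 call with prob report_rate.
    return sum(random.random() < report_rate for _ in range(true_crimes))

calls = {name: observed_calls(**p) for name, p in neighborhoods.items()}
# Expected calls are 200*0.3 = 60 and 100*0.6 = 60: statistically
# indistinguishable, despite a 2x difference in true crime.
```

Without an independent estimate of either the true crime rate or the reporting rate, no amount of analysis on the call counts alone recovers which neighborhood is which.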


Why not take your Bayesian decision theory and make a trading system? Bet the farm, and you'll make millions with ideas like: if a stock just had two upticks, isn't it a good idea to put a little money into an upward-trending stock? Not only will you come to understand decision theory and behavioral prediction better, you'll get paid to do it, and nobody else will get hurt!


The stock market isn't the same thing...


To spell it out:

The stock market is anti-inductive by design. Statistical patterns that reflect market inefficiencies are supposed to disappear over time.

Most problems are not anti-inductive.

So yeah, "you can't even predict the stock market with your methods" is a pretty terrible argument against methods that were never meant to predict the stock market.


Hmm. You make it sound like every new idea and tool is worthy of equal and fair consideration. If a politician today said they were introducing No Crime Left Behind, where cops randomly select a time and place to swarm, would a rational person even dream of debating its merits?


You are talking about theory, GP is talking about the actual computer systems that police have today. And in fact GP already made good suggestions for bridging the gap:

> far stricter processes should be in place, more similar to getting approval for a new airplane or medicine


Ideal Bayesian decision theory, the one that beats every single other possible statistical method, is uncomputable in this universe. It's not a silver bullet.


NYC has used analytics since 1993 and it's widely credited as contributing to the incredible drop in crime. I don't buy for a second that ending these kinds of programs will help anything. Certainly we can decrease brutality by sending police into areas with no conflicts but that defeats the whole purpose of policing. We need effective and aggressive law enforcement as much as ever. We need to root out the worst abusers and show them that bad behavior will be punished severely.

https://www.innovations.harvard.edu/compstat-crime-reduction...


Crime rates dropped nationally during the same period too - and not all areas had the same analytics. It's really not clear whether the reductions came from the NYC stats or, in particular, from "aggressive" law enforcement.


From the article that OP linked to:

"Since Compstat was introduced, crime rates in New York City have dropped dramatically. From 1993 to 1995, the total crime rate declined 27.44 percent across the city."

National crime rates dropped as well in that period but nowhere near 27%.


Seems like national stats may have lagged NYC a little, but over the slightly different range of 1994-2000 on this chart the drop was (eyeballing) 35% for homicides.

https://www.politico.com/interactives/2020/police-budget-spe...


This is so utterly compelling a rebuttal that I can hardly believe the original argument was made in the first place. You can't argue that tool X led to outcome Y if everyone had the same outcome without tool X, and in fact the evidence then leads to the opposite causality: tool X is useless toward outcome Y.


True, CompStat is not at all "widely credited" with contributing to the nationwide drop in crime. Even its contribution to NYC's drop is debatable: https://en.wikipedia.org/wiki/CompStat#Critique


not only nationally, but worldwide!

https://en.wikipedia.org/wiki/Crime_drop

In Germany we have no "predictive policing" to my knowledge, and even cameras and so on are used conservatively given how privacy-sensitive we are. Yet we've seen the exact same development over recent decades.


NY police were working in Germany... or

low-cost computation made entertainment extremely affordable and time-consuming


There are secondary effects to "tough on crime policies", and more severe punishments do not necessarily increase deterrence effects on crime, and could even cause more crime. There is a very complex relationship between the factors that influence crime which is likely anything but linear.

"Tough on crime" statutes were implemented across much of the US 40 years ago, and the results we've seen are anything but a success story. Sure, crime has dropped, but at the expense of becoming the world's #1 country by incarceration and causing downstream societal effects as we lose economic productivity, rip apart families (and potentially create new criminals), and create public distrust of police.

Some might say that is a reasonable price to pay for a decrease in crime, but that doesn't hold water when we compare it to the rest of the western world, which saw the same or an even better drop in crime without all of the side-effects of the 'tough-on-crime' policies. Globalization, technology, and a drop in poverty caused this drop in crime, not 'tough-on-crime' policies.

https://en.wikipedia.org/wiki/Crime_drop


I think your sentiment is probably in the right place, but the last thing we need are "aggressive law enforcement" officers.


I mean appropriately aggressive. There's still plenty of horribly dangerous people threatening citizens. It's part of the reason police in the US are so primed for violence.


Did NYC drop significantly more than almost every other city in America? Because they all dropped precipitously as well and most are not using these detailed analytics.


See http://www.disastercenter.com/crime/uscrime.htm

Between '93 and '95 total USA crime rate dropped about 2% while in NYC it dropped 27%.


> NYC has used analytics since 1993 and it's widely credited as contributing to the incredible drop in crime.

I see the "NYC did X in the 90s and it caused crime to decrease" thing pretty often, but from what I've read, the real reasons for crime reduction in NYC aren't well understood, and when compared with crime reduction on the national level, NYC isn't really all that special; crime was dropping at similar rates throughout the country (and the world, even). So I'm not convinced that a NYC-centric examination of policy is at all representative. Not to mention that NYC itself is not a representative place, so what works in NYC may have no connection to what works elsewhere.

> Certainly we can decrease brutality by sending police into areas with no conflicts but that defeats the whole purpose of policing.

That's not the issue. The issue is that police are being sent to places, and because the computer told them to expect crime, they are primed to find crime, even if it's stuff they wouldn't bother with under normal circumstances. The simple act of saying "this neighborhood is a hot spot" makes it a hot spot, regardless of whether or not it actually is.

Put another way: the computer sends the police to places where there probably are some problems, but much fewer than police are primed to expect, so they end up creating problems in addition to any they solve. They get this "warzone" mentality where they feel like they're going into an "us vs. them" situation, where anyone on the street is assumed to possibly be a criminal. That's a recipe for unnecessary violence.

> We need effective and aggressive law enforcement as much as ever.

Effective, yes. We severely lack this in many places and need to work hard to fix this. Aggressive, no. That's why we're in the position we're in: aggressive assholes on a power trip who just happen to also be racist and think they're above the law.

> ...and show them that bad behavior will be punished severely.

That attitude suggests that you aren't really interested in making society better, just that you want to punish people for doing the wrong thing. But I suppose this shouldn't surprise me; based on incarceration rates and the state of prisons in the US, it doesn't seem like anyone is interested in prevention and rehabilitation, just "sticking it to those bad people".


Crime has decreased everywhere since then (not just NYC) while the US prison population has increased 500% over the past 40 years.

> Certainly we can decrease brutality by sending police into areas with no conflicts but that defeats the whole purpose of policing.

Thanks for pointing out very clearly what you believe to be the purpose of policing. For a lot of cops as well, brutality is the goal. What we're seeing right now is Americans re-aligning those priorities.


Crime dropped faster and more consistently in NYC than almost anywhere else in the US. Plenty of big cities have been nearly immune to the national trend while NYC has done better and better.


Hannah Fry's book, "Hello World: How to be Human in the Age of the Machine" explores this topic in a balanced and thoughtful way. Of course algorithms aren't perfect, but they are definitely useful tools that should be combined with human expertise.

> humans do a far better job at real-world operations

The book points to several examples (specifically in law enforcement and criminal justice) where common sense tells us this is true ("humans do a far better job"), but statistics show otherwise. Human judgement is helpful to a degree, but without tools like these algorithms provide, it's actually far worse, and subject to all kinds of biases.

The question is, if a particular area statistically has more crime, does it really matter whether it's a human or an algorithm making the judgement to send officers there more often? I know it's taboo to talk about (at least in the past few weeks), but police presence overwhelmingly helps reduce crime, so let's be smart about how, when, and where to deploy more law enforcement. Let's use data-based tools to help inform our decisions, but let's also use human judgement to understand where those tools might fall short and act accordingly. In large part, that comes down to having data experts audit the tools and educate those who use them about their shortcomings.


I have no problem with that, but why aren't these types of statistical models and curve analysis being used to deploy drive by patrols? Some of this seems like throwing out the baby with the bath water, does it not?

While I agree that it's way too soon for this sort of granularity, is it too soon to have an increased presence over 10-block troubled areas, guided by statistical analysis?


If crime being caught by a drive-by patrol is a significant way in which crime is being caught, to me that screams "victimless crimes". I break victimless laws all the time, sometimes with my police officer friends and neighbors present, sometimes on private property miles from any public roads. I never seem to get busted for it. So if 10-block patrols of "troubled areas" really saw an increase in enforcement action, then combined with the fact that America has disproportionately bad imprisonment and recidivism rates, it sounds like you're describing exactly what systemic discrimination is.

You live in a bad neighborhood, so you're more likely to get caught for something, now you're on parole, now anywhere you live is a "bad neighborhood" requiring more patrols, more likely to get caught for something. Meanwhile I can live in my entirely white suburb smoking weed and my life doesn't change.


If what you describe was the pattern, then the regions with lots of observed "victimless crimes" would be uncorrelated with the regions with lots of serious crimes (e.g. from counting bullet holes down at the morgue, or from ShotSpotter). That's great, this theory makes a testable prediction. It's falsifiable.

That said, I agree that the smart use of such predictions should be sensitive to this. Try to predict where & when the murders will happen, use that to direct patrols (and searches for potential murder weapons). Rather than predicting something simpler like "total number of offences including parking tickets" for the sake of chasing numbers.

(Both of these comments are describing where to put patrols, i.e. where to spend a limited resource to best effect. Prediction applied to human individuals is a very different story, and much scarier.)
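
For what it's worth, the falsifiability check could be as simple as a correlation between per-region counts; the six regions and their counts below are purely illustrative.

```python
# Sketch of the check: correlate per-region counts of "victimless"
# offences with independently measured serious crime (e.g. ShotSpotter
# or morgue data). All data below is invented.
victimless = [40, 35, 30, 10, 8, 5]   # observed minor-offence counts
serious    = [38, 30, 28, 12, 9, 6]   # independently measured serious crime

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(victimless, serious)
# A strong positive r is consistent with patrols finding minor offences
# where serious crime really is; r near zero would support the theory
# that "victimless" arrests follow patrols rather than crime.
```

The hard part, as the sibling comments note, is getting a serious-crime measure that is genuinely independent of where patrols already are.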


> then the regions with lots of observed "victimless crimes" would be uncorrelated with the regions with lots of serious crimes

That doesn't seem like a strong hypothesis I would've made from the given scenario.

For example, one area that's consistent with the theory is that areas with lots of "serious crime" generate a lot of police activity, which turns up a significant amount of "observed victimless crime". That creates an expectation of a positive correlation between "serious crime" and "observed victimless crime".

Then, sure, according to that theory there might also be some areas where policing starts in an area on suspicion rather than serious crime and then we get a lot of "observed victimless crime" without as much "serious crime". That would be a negative correlation that you mentioned.

I just don't think the falsifiable hypothesis you've drawn up is one that the theory actually strongly predicts, and I don't think refuting that hypothesis necessarily refutes the theory.


I'm not saying it's the pattern, I'm saying it's a significant pattern. And I strongly oppose the idea that our law enforcement should be treating any individual based on "patterns" anyway.


Exactly. Vice or maybe Vox did a short with an ex-cop who mentioned this concept. He said they made a lot of arrests of black men carrying illegal switchblades. But he said that white people carried those illegal blades just as often, including many police officers he worked with. But since they mostly patrolled black neighborhoods, they of course made many more arrests and citations for carrying an illegal blade.

The fact that you find more crime where you patrol more is such a simple concept, it hardly seems necessary to mention. And yet so many people seem to think all this police data is useful.


There is an argument to be made that carrying a knife should not be a crime for anyone, and that we all have a right to carry a knife to protect ourselves or even to use as a tool.

But let's accept for a moment that reducing the number of knives on the street reduces violence and protects innocent people. If you accept that premise, then you have to say it's beneficial to society that the police remove as many knives as possible: whenever someone gets arrested for a knife violation it's a good thing, regardless of their skin colour.

And given that we have limited enforcement resources, we should focus most on those areas with the most knife-crime victims. So while I accept everything you relayed in your post, the proper conclusion would change quite a bit if there are many more victims of knife crime in the black neighbourhoods... I do not know if that is the case, and I'm not implying the answer is more likely one way or the other.

But if in fact more people are getting stabbed or robbed at knife point in black neighbourhoods and you accept that knife laws are good in general, then it's good that such laws are more strictly enforced in that area, to help protect all the innocent people in that area who might fall victim.

And this principle really is colorblind. For instance many people speed in their cars everywhere, but we should more heavily enforce speed limits near high traffic and pedestrian areas and locations where accidents keep happening.

Now, maybe the rate of knife crime is worse in white neighbourhoods, in which case it really is a miscarriage of justice that more black men are getting charged for illegal blades. But I don't think you can say one way or the other without knowing and considering that statistic.


Are we assuming that police arrests and citations for carrying an illegal knife actually reduce or deter knife crime? Because I don't believe that it does.

--Same as you, just using knives as an example because I agree that they should not be illegal. (And a lot more could be said about certain types of knives being illegal and others not and the idea that certain racial groups or gangs might be more likely to carry that certain type, crack vs cocaine anyone?) But you could replace knives with any type of crime.--

But the reality is that we have a system that will over-prosecute and over-charge people for small crimes, which disrupts a lower-income person's already precarious situation. It puts them through a system full of violence, inhumanity, and other people who are in for much worse, and then releases them back into that same community with a do-not-hire sign permanently around their neck (a felony). After which they are likely to turn to crime to survive.

So while I agree that looking at knife-crime data would make more sense than looking at arrests in the scope of the data conversation, if heavy-handed policing and a messed-up "justice" system create negative long-term pressure on crime in that community, then the data is sort of moot.

Additionally, the conversation in this mini-documentary pointed to the fact that police were/are incentivized (at least in his Detroit precinct) to "get their statistics". So they would gravitate to these poorer, black neighborhoods because they could stop and hassle low income minorities without repercussion a lot easier than a white middle class neighborhood where people were less likely to even be walking outside and more likely to file complaints for police harassment.


Of course if the law is bad, the solution isn't to dismantle law enforcement, it's to fix the law.


I'm all for that. But systemically we tend to oppose that too. You do that long enough, eventually the ones on the bottom start pulling down statues. But the problem is both. We have victimless laws, and we have law enforcement departments with histories of covering for their own people when they break the law, or exploiting loopholes to railroad people.


How do you deal with police forces ignoring policy and performing as they deem fit all while protecting themselves?


Burn qualified immunity to the ground and salt the earth. This would be the simplest way to keep police accountable for their behavior. It would also imply a massive change in how policing is done (demilitarization), so that's two big wins.

We can lay a lot of the rotten-police problem, in the USA and the world, at the feet of the fact that the accountability just isn't there.


I always thought only hiring police officers from the area being policed was a good start.

I don't mean that if you hire someone they have to move into the district. I mean that your pool of hires is the people who have already lived in the district for a particular amount of time (years).

The locals know what laws not to enforce and how to gain the trust of the community.


What you're describing is a reform of America's drug laws, not of America's criminal prediction problem, and they shouldn't be conflated. Police already use prediction models, they just do it in their heads, inaccurately and with much more bias, and they have a harder time adjusting those biases. I'm all for changing drug laws and trying to reduce recidivism through counseling and means other than pure imprisonment, but that's beyond the scope of this article and discussion.


Drug laws are one example but by no means the only one. I've been on police ride-alongs where police watch a "suspicious" (i.e. poor) looking car as it passes them heading in the opposite direction. Oh look! They have a tail light out, we can pull them over. Then we can shine a flashlight in their backseat and see a bulge under a blanket. We can ask them certain questions that will either make us suspicious, or give us permission to search the car. Oh look! They have a pistol magazine that holds more than 15 rounds. Now there's a felony that they know the local courts will uphold despite there being a higher burden of proof about when the magazine was purchased than the officers can provide. Nevermind that the local gun stores all sell those magazines, and the police even buy them for personal use at those stores. But now, because of the way law enforcement works and the culture that has become normalized, somebody has a felony because of selective enforcement.


All of that is personal bias enforced by police. None of that can be balanced out, and I don't understand how it's in the scope of this conversation, to be honest. With predictive police distribution (for lack of a better term), which would place patrols in high-crime areas (and for the sake of argument, let's restrict this to crimes with victims, like sexual assault, which tends to happen when bars let out, around the most bar-dense areas), an increased police presence would by its very nature reduce some sexual assault and drunk driving, statistically. People would be more likely to call an Uber, out of a sheer fear-based reaction, and less likely to spill out into the street and beat the crap out of each other or sexually assault a woman in public. This is all done unconsciously anyway, but weights can be adjusted in a machine model in predictive modeling to try to remove the race bias as much as possible, whereas when such predictive modeling is done inside an officer's mind, unconscious bias is an ever-present problem and the weights cannot be adjusted as easily, if at all. If that makes sense.

A machine model of police deployment just makes for a more transparent model, so long as it isn't kept as a black box or referred to as strictly "AI". If we arm our society with a basic understanding of statistical maths and scientific grounding, they'll stop looking at these things as magic and relying on anecdotal experience. No offense meant.
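
To make the transparency point concrete, here is a toy linear scoring model (the feature names and weights are entirely invented): unlike the intuition inside an officer's head, its weights can be printed, audited, and zeroed out.

```python
# Hypothetical transparent linear scoring model for patrol allocation.
# Unlike a black box, its weights are inspectable and directly adjustable,
# e.g. zeroing out a feature we decide must not influence the score.
features = ["violent_calls", "property_calls", "neighborhood_demographics"]
weights = {"violent_calls": 0.7, "property_calls": 0.3,
           "neighborhood_demographics": 0.4}   # invented numbers

# Audit step: remove the influence of the demographic feature entirely.
weights["neighborhood_demographics"] = 0.0

def score(area):
    """area: dict of feature -> standardized value; returns patrol priority."""
    return sum(weights[f] * area.get(f, 0.0) for f in features)

area = {"violent_calls": 2.0, "property_calls": 1.0,
        "neighborhood_demographics": 5.0}
s = score(area)  # the demographic value no longer moves the score
```

Of course, dropping a feature doesn't remove bias that has leaked into the remaining inputs (the call counts themselves), but at least with a model like this the audit is possible at all.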


AI amplifies human biases. If a few racist cops decide to target a black neighborhood, and "find" a bunch of crimes, that data gets fed into the system. The system then spits out that neighborhood as high risk and assigns extra patrols there.

Those patrols feel the need to justify spending all that time out there, so then they "find" crimes too. And then it just reinforces that data that that is a high risk neighborhood, making the whole thing worse.

The entire system is built on decades of bad data. Step one is cleaning up the data and/or starting over.
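
This feedback loop is easy to demonstrate with a toy model (all numbers invented): two areas with identical true crime rates, where one starts with inflated records and patrols are allocated in proportion to recorded crime.

```python
# Toy model of the predictive-policing feedback loop. Two areas have
# the SAME true crime rate; area B starts with biased (inflated) records.
# Patrols follow recorded crime, and recorded crime scales with patrols
# present -- so the initial bias is locked in.
TRUE_RATE = 1.0                   # identical underlying crime in both areas
records = {"A": 100, "B": 150}    # B's history is inflated by past bias
TOTAL_PATROLS = 10

for year in range(5):
    total = sum(records.values())
    patrols = {a: TOTAL_PATROLS * records[a] / total for a in records}
    # Crimes "found" scale with patrol presence, not with true crime.
    for a in records:
        records[a] += patrols[a] * TRUE_RATE * 10

share_b = records["B"] / sum(records.values())
```

Because each area's records grow in proportion to its current share, the initial 60/40 split never corrects toward 50/50 no matter how many years you run it: the system never "discovers" that A and B are identical.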


> I have no problem with that, but why aren't these types of statistical models and curve analysis being used to deploy drive by patrols? Some of this seems like throwing out the baby with the bath water, does it not?

Combined with the evidence of extreme LEO abuse of power, doesn't this reinforce the idea that certain people/neighborhoods become disproportionately affected? Isn't this exactly what the past few weeks of protests have been railing against?


If police with guns disproportionately hurt minorities that doesn't mean no police should be allowed to carry guns. You can justify drastic restrictions, but banning a tool altogether is being unnecessarily technophobic.


Even assuming the models are good, what confidence do you have that the data is good?


There are a lot of opportunities for the data to be skewed in a highly biased manner.


Disproportionate policing of a demographic is going to lead to disproportionate application of the law, both spurious and justified, to that demographic.


Furthermore, the stated ideals of the PD/vendor might be far different from the actual metrics used to evaluate the PD (and, therefore, the software used by the PD).

You can pump up the clearance rate by practicing outright discriminatory policing. You can substantially increase revenue by targeting the poorest areas. Etc.


When we talk about this I think we really need to talk about Blackstone's Ratio [0] which was clearly an influence for the founding fathers. We're talking about peoples' lives and freedom, how highly do you value these?

[0] https://en.wikipedia.org/wiki/Blackstone%27s_ratio


Thanks for introducing me to this. Personally I think Schrodinger's Cat is more appropriate to legal policy: measurement affects the result. Excessive policing, where people are treated with extreme prejudice as likely offenders, creates more offenders, because the incentive to be a good citizen is eroded by your social guardians.


Having a physics degree and having worked as an engineer, I do not think Schrodinger's Cat applies here (we in the physics community are often frustrated by how SC is misrepresented).

Really what Blackstone's ratio is discussing is failure design. In good engineering practices we want things to fail in specific ways. So we actually design failure. For example, if you are building a skyscraper you want to design your building to collapse in on itself as to not take out other surrounding buildings. The trade-off here is that you aren't building the optimal structure, but overall you have greater safety (obviously you still have to meet base structural and safety conditions).

This is a better analogy than SC, which is about the complexity of statistics and observation (which, mind you, doesn't need to be human or have a consciousness). Really what we're talking about here is how we design failure in law: "When laws fail, what should happen?" Blackstone's ratio has nothing to do with measurement and observation, which mean something different in the QM world than in our macro world or in engineering. We physicists aren't always great at explaining. Sorry :(


I heard this recently, but I forgot where: policing software is supposed to be used to help inform decisions in addition to established investigative practices, but police are instead using the software as decision-makers which has possibly decreased accuracy in investigating crimes.


>because software becomes a black box.

Is the brain of a detective not a black box?

>But here we are talking about people's lives, far stricter processes should be in place

Yes. And I want these stricter processes to protect me from software as well as from humans. Why discriminate between software and humans?


A detective can explain their reasoning and thought processes, and a judge / jury / review board / whatever can make a determination based on that whether or not the detective acted properly.

Software doesn't defend itself, and often the algorithms behind it are secret. Even when they aren't, many of the models created today are just not explainable, even by those who have developed and trained them.


If explanation, reasoning, and thought processes are required, then make them part of the job description. A detective who can't explain his actions is unfit for the job. Software that can't explain its recommendations is unfit for the job.

Btw, explanations given to judges have nothing to do with what really happened. A detective can tell a lie, and the software will pick the explanation which is most likely to be accepted by the judges.

>and often the algorithms behind it are secret

and the algorithms used in brains are open source?


It's not perfect, but at least a detective can be deposed.


You wrote a lot but pointed out none of the flaws of this tech. What’s so alarming about drawing a conclusion that more police resources need to be allocated to a place that has a history of criminal activity?


When an authoritative figure with no accountability (the software) tells you to expect bad people, you expect bad people. Even if there are no bad people, or only a few bad people, you will invent bad people because you are primed to find them.

So you end up punishing people for small infractions that are occurring everywhere, even in the so-called "good" neighborhoods that you're ignoring. (Let's also recall that broken-windows policing doesn't actually work.) This just reinforces the software's opinion that all the bad behavior is occurring in these same neighborhoods.

Meanwhile, cops enter the "bad" neighborhood on mental high alert, expecting a high level of trouble ("if the computer sent us here, it must be bad!"), even if there isn't much trouble at all. This creates an "us vs. them" mentality, which leads to dehumanization, and you can expect higher levels of police violence, especially of the unjustified kind.


Predictive policing does nothing more than send policing resources where crime is more likely based on statistical models.

It does not racially discriminate and it is not racist. It does not harm anyone.

I feel that this is posturing and shooting the messenger. If crime is statistically higher in "black neighbourhoods" the issue will not be solved by pretending it isn't.

If the technology does not work then of course there is no point spending more money on it. So, does it work or not? Here this feels political, not pragmatic.


Everyone does something criminal from time to time. Policing A more than B will lead to more arrests at place A, increasing their crime statistics, leading to more policing ...

For example, drug use is about equal between white and black Americans, but since black people are more strictly policed (stopped in traffic more frequently, etc.), they are arrested and sentenced for drug use far more often than white Americans. [0]

0: https://www.hamiltonproject.org/charts/rates_of_drug_use_and...
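The feedback loop described above can be sketched as a toy simulation (all numbers are made up for illustration; both areas have the *same* true offense rate, and patrols are reallocated in proportion to cumulative arrests):

```python
import random

random.seed(0)

TRUE_OFFENSE_RATE = 0.05          # identical in both neighborhoods
patrols = {"A": 50, "B": 50}      # start with equal police presence
arrests = {"A": 0, "B": 0}        # cumulative arrest counts

for year in range(10):
    for area in patrols:
        # arrests scale with how often police are present to observe offenses
        for _ in range(patrols[area]):
            if random.random() < TRUE_OFFENSE_RATE:
                arrests[area] += 1
    # "predictive" reallocation: patrols follow the arrest statistics
    total = arrests["A"] + arrests["B"]
    if total:
        patrols["A"] = round(100 * arrests["A"] / total)
        patrols["B"] = 100 - patrols["A"]

print(patrols, arrests)
```

Because patrols chase past arrests, whichever area happens to get a few extra arrests early on is patrolled more, which produces more arrests there, even though the underlying offense rates never differ. The arrest statistics end up reflecting where the police looked, not where the crime was.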


> Everyone does something criminal from time to time.

Everyone? Maybe this would be true if you'd include non-criminal offenses (those you'd get a ticket for). Most people in their lives will probably jay-walk, or speed, or get a parking ticket, or something else in this category.

But you're saying that everyone (I'll read this as "most people") will do something that would get them arrested if caught from time to time? That's an outlandish statement.


Well... I haven't read the book, but it is widely cited:

https://www.amazon.com/dp/B00505UZ4G/

The author claims the average adult commits three Federal crimes a day.


I've had this conversation before and no one has been able to defend their position outside of "there's a book with this title" or "but weed though".

Even looking at the reviews of that book, it seems like the author doesn't even argue the title. It's just specific cases of people getting screwed over in court. (Which of course happens, I'm not arguing against that.)


In statistics there is a thing called bias, which can cause a lot of problems if not correctly handled.

An example of bias: suppose that historically, black borrowers defaulted on their loans at a higher rate. ML is deployed to predict whether someone might default on a loan. Because ML does not understand bias, it sees the person is black and denies them on that basis, purely because black borrowers historically defaulted more often.

Bias is when ML sees something not causally relevant as a pattern and uses it as a feature to predict the future. If race were filtered out instead, the model might have seen that historically most black applicants who got a loan were weak in other areas, like income or income stability, or something else that actually factors in. It could then predict the future with a higher level of accuracy.

Police bias is worse than other industries, because it creates a feedback loop. If you think a black person is more likely to commit a crime, and you put more resources into that, then you're going to find more crime. This increases bias and it feeds on itself.

It seems the common fear on YC is the algorithms in predictive policing have a strong bias, causing problems. This is a legitimate risk, but imho not because of the algorithms but because of how they're used. They blindly give insights and police officers use this to increase bias, amplifying the issues we currently have.

On the NSA level the algorithms, which are not predictive policing, deal with bias much better and work quite well. They're scary good, better than having someone watching you at all times. Though, I guess that's a bit off topic.


> Bias is when ML sees something not relevant as a pattern and uses it as a feature to determine the future.

This alone does not fit the definition of statistical bias. Statistics is not capable of proving causation. So a completely trivial fact or feature can be a great estimator nonetheless. Of course if the system changes then you have a problem (with or without a causal model).

If historic data is warped by police, of course, then you're using a biased sample and will indeed get a biased estimate that way. But if you correctly use randomly sampled data and a method that has been mathematically proven to be unbiased, then even if your model says to deny loans based on race, you would likely make fewer bad loans by following it than by ignoring this information.
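The random-sample vs. warped-sample distinction can be shown with a small sketch (all rates are hypothetical; defaults here depend only on income stability, and the "warped" sample contains only the historically over-scrutinized group):

```python
import random

random.seed(1)

# Hypothetical population: default risk depends only on income stability.
population = [{"stable_income": random.random() < 0.7} for _ in range(100_000)]
for p in population:
    p["default"] = random.random() < (0.02 if p["stable_income"] else 0.30)
# True population default rate: 0.7 * 0.02 + 0.3 * 0.30 = 0.104

# Random sample: an unbiased estimate of the true default rate.
sample = random.sample(population, 5_000)
random_est = sum(p["default"] for p in sample) / len(sample)

# Warped sample: only unstable-income applicants were ever recorded,
# so the estimated default rate is badly inflated (~0.30, not ~0.104).
warped = [p for p in population if not p["stable_income"]][:5_000]
warped_est = sum(p["default"] for p in warped) / len(warped)

print(random_est, warped_est)
```

The estimation procedure is identical in both cases; only the sampling differs. That is the sense in which "garbage in, garbage out" applies to policing data: an unbiased estimator applied to a non-random sample still returns a biased answer.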


> It does not racially discriminate and it is not racist. It does not harm anyone.

This is a dangerously irresponsible statement.

If police are disproportionally spending time in specific neighborhoods, and arrests are disproportionally made from that population vs. actual crimes committed, this will be captured and reinforced in the predictions.

So yes, it's racist and it harms people.


So you're saying that more police presence is harming people. How? If there's no crime there are no arrests or reports after all...

Your reply is rather aggressive for no apparent reason. As I said, this is highly political in the middle of the current hysteria.


> So you're saying that more police presence is harming people. How?

Ask the disproportionate number of Black men in prison. Ask white rapist Brock Turner why a white judge wanted to let off "because he's a good boy", or the Central Park Five or the Georgetown Jacket Three that spent decades in prison because white cops and prosecutors assumed their guilt based on the color of their skin.

If my posts sounds aggressive, it's because your posts are dismissive of the terrible culture that's led to the biased data that would be used to make predictions, and you have the gall to claim it's all fine and dandy.

> current hysteria

There is no hysteria going on right now, it's a quite reasonable response to decades of bad decision making on the part of police departments across the U.S.


You moved from police presence to sending innocents to prison... That's quite a step.

Predictive policing is nothing more than police presence. If police arrests innocents and the justice system sends them to prison that's quite another issue. On the whole I suspect that the number of innocents sent to prison is rather low.

There's hysteria alright on 'racial issues' at the moment.


> You moved from police presence to sending innocents to prison... That's quite a step.

How do you think people get arrested? They show up at a police precinct and turn themselves in?

> There's hysteria alright on 'racial issues' at the moment.

Yeah, I'm sure the Civil Rights & Suffrage movements were just "hysteria" too right?


> So you're saying that more police presence is harming people. How?

If the police were trained well, then I might agree with you. But they're not. They're trained to expect every encounter to result in an attempt on their lives. They're trained to escalate instead of de-escalate, meaning that a run-of-the-mill interaction is more likely to result in violence than it needs to be.

Labeling what's going on now as "the current hysteria" is painfully dismissive of the real harm that police are doing to people.


Any sort of bias present in the training data will be replicated in the model. If the police are biased in whom they target, that group will naturally show a higher recorded crime rate, which would easily be picked up by any sort of statistical model, leading to a biased model.


> It does not racially discriminate and it is not racist. It does not harm anyone.

False. I agree that the software itself is not racist, but the decisions it makes are only as good as the data you feed it. If you feed it racist data, then you will get racist decisions from it. And given that policing has had racial biases for centuries, all we have is racist data.

And the more you act on its racist decisions, the more racist feedback it will have to act on, giving you more racist decisions in the future.


> where crime is more likely

This is the fundamental disconnect in your argument, because historical recorded arrest rates are not necessarily reflective of actual crime rates.



