If I understand the story correctly, they had the suspect in custody and yet failed to positively ID him, relying on an ID he had stolen from an innocent man named Ousmane Bah. The ID had no photo and was labeled "not to be used for identification". SIS filed a police report which said they had video evidence showing the perpetrator.
They then put out a "BOLO" with the innocent man's name, had him arrested, figured out they had the wrong guy, did nothing about it, and had him arrested again.
When Bah appeared in court to answer charges and Apple was asked to present the video that would have cleared Bah, Apple claimed that the video had been deleted. The video was later found by Bah's attorney during discovery. This happened more than once. A warrant was issued for Bah’s arrest and contained the photo of the imposter. Bah was arrested even though he doesn't resemble the imposter, other than being Black. Prosecution against Bah continued in multiple states through June 2019.
Hmmm, seems like a lot of people involved in this case should be fired, including the Apple Employees, SIS Employees and Police officers, regardless of whatever other outcomes there are.
> Hmmm, seems like a lot of people involved in this case should be fired,
"Fired"? This is tampering with evidence, contempt of court, filing false reports and so on, all done repeatedly. People need to go to jail, because people go to jail for those crimes and for lesser crimes as well.
It's outrageous. To cover up a lesser incompetence of not validating who the imposter really was, they tried to frame the innocent Ousmane Bah by deleting the video footage.
If Ousmane Bah didn't have a good lawyer, he would be in jail with his life destroyed already. Actions like this should be treated as a crime of falsifying evidence, at least.
I have a hard time imagining that the rate is lower for lesser crimes, where the victims of the justice system can be coerced into accepting a "lesser" punishment.
There are, because proving innocence is very hard... That is why more pressure needs to be put on the Legal System to only convict on an actual "beyond reasonable doubt" standard. Unfortunately, today some juries seem to operate on more of a preponderance-of-the-evidence standard, or worse, an "ehh, he did something wrong" standard.
Personally I believe the 4.1% estimate to be pretty low, and it is one of the reasons I strongly oppose capital punishment; I simply do not trust our legal system enough to give them the power and authority to kill people
>Personally I believe the 4.1% estimate to be pretty low, and it is one of the reasons I strongly oppose capital punishment; I simply do not trust our legal system enough to give them the power and authority to kill people
Agreed, because humans are fallible, and a jury is made of humans
The courts can always release on appeal. You can't un-kill someone
What's the difference between capital punishment and life imprisonment here? The number of people who have their cases re-analyzed and are released must be low. Plus, after decades in prison, getting back into ordinary life is likely very hard.
Well, that is a very complex answer, as I am not a supporter of the current prison system either. I am not a big fan of "prison punishment"; instead, I believe prison should be used to separate dangerous people from society, and should not involve abuse.
That said, in broad strokes: death is final, and while reintegration may be pretty hard, if a person spent 20 years in prison because society made a mistake, I think that person should get a very large settlement from the state, to the point where they never have to work again, thus making that transition much easier.
But the odds of getting released are so low it's not really worth it, imo. I see life in prison as more akin to torture than anything else, which makes it worse than the death penalty.
I do believe people should have a "choice" (for lack of a better term, as it's not much of a choice), but I am still completely opposed to it as a "punishment".
I am also a supporter of legalized suicide for the general public as well, especially for the terminally ill
I think too that prison makes more sense for keeping physically dangerous people away. And if they don't assault anyone within X years in prison, maybe they're now a bit less dangerous.
Combined with some therapy or something.
But not as punishment.
For me, the main reason to oppose the death penalty is also that innocent people get killed.
He was thrown in jail several times (he was released relatively quickly, but that shit still fucks up your week), and this crap made it difficult to get or keep a job for several years. That's not even considering that online articles which state he did it still exist, so background searches are still problematic for him. His life was destroyed pretty thoroughly by this ordeal, and he was never even convicted of anything.
You'd be shocked how infrequently law enforcement officers get investigated, let alone prosecuted, convicted or sent to prison for anything you just listed. That goes triple for lawyers: prosecutors have sent innocent men to prison for 30+ years by pretending exculpatory evidence didn't exist, and they don't even get asked to apologize, because it could throw other convictions into doubt. Those convictions should be thrown into doubt, but the thought that a murderer or mob boss might go free (even if that prosecutor has only ever prosecuted non-violent drug offenders) is so powerful that a prosecutor has to be caught in the act and seen by thousands before anyone will even begin to consider looking into their activities. Defense attorneys, on the other hand, tend to be underpaid and overworked, increasing the odds of a false conviction.
Remember kids, the criminal justice system has a strong bias towards conviction once charges are filed, so don't talk to the police. You're only helping them build a nonsense case against you; even if they've got another suspect, you never know when a rookie cop is going to decide that getting you on jaywalking is good for his numbers this month. Unless you personally witnessed a serious crime like assault, rape or murder, shut the fuck up.
Agreed. At the very least, if Apple-the-company isn't being fined (or their employees held criminally liable) for failing to produce exculpatory evidence in a criminal case that was later found during discovery, the court itself isn't doing its job.
I guess that would depend on whether the failure was due to malice or incompetence. Maybe also on whether it was a failure of individuals or a failure of the system. But yeah, something obviously needs to be changed.
But treating them as a homogeneous malign group is making a political statement.
Although I doubt any of the personnel making these decisions could be reasonably described as rich. It would be interesting to know if there are any internal investigations, procedure improvements or disciplinary proceedings going on in SIS or Apple. I suspect we won't find out until after the court case outcome, if ever.
In a sane world, that would be destruction of evidence. You shouldn’t be able to tell the court evidence exists and then destroy it before you let anyone else examine it.
As I understand, the evidence wasn't owned by prosecution, but by Apple. It's like saying "the truth is out there", which is technically true, but shouldn't count as evidence until collected.
Then why was it used for an arrest warrant? Should evidence not be examined before the warrant is issued? Or can you just claim to have video evidence of a murder to get someone you don't like arrested?
Prosecution should be held in contempt of court if they fail to keep their own evidence securely, including the platforms and locations of storage.
I think prosecutors should go to jail if they keep their evidence in a building covered in kerosene and then "whoops" the evidence has gone up in flames.
It's a lot more complicated than that due to edge cases like format conversion, resolution downscaling, chunking, etc but people are working on a solution in this vein: https://contentauthenticity.org/
Full-disclosure: I work (on completely unrelated stuff) at one of the organizations involved
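To make the idea concrete, here's a minimal Python sketch of the basic approach: hash the footage in chunks, then sign a manifest of those hashes so later tampering or quiet deletion is detectable. To be clear, this is not the contentauthenticity.org/C2PA design (which also has to survive the format-conversion and downscaling edge cases mentioned above); the chunk size and function names here are invented for illustration.

    # Sketch only: sign a manifest of per-chunk hashes of a recording so
    # tampering or deletion is detectable later. Not the C2PA spec; just
    # the underlying cryptographic idea. Needs the 'cryptography' package.
    import hashlib
    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    CHUNK_SIZE = 1 << 20  # 1 MiB; an arbitrary choice for this sketch


    def chunk_hashes(path: str) -> list[str]:
        """Hash the file in fixed-size chunks so tampering can be localized."""
        hashes = []
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                hashes.append(hashlib.sha256(chunk).hexdigest())
        return hashes


    def sign_manifest(path: str, key: Ed25519PrivateKey) -> tuple[bytes, bytes]:
        """Bind the recording's chunk hashes to the recorder's signing key."""
        manifest = json.dumps({"file": path, "chunks": chunk_hashes(path)}).encode()
        return manifest, key.sign(manifest)


    def verify_manifest(path: str, manifest: bytes,
                        signature: bytes, public_key: Ed25519PublicKey) -> bool:
        """Check the signature, then check the file still matches the manifest."""
        try:
            public_key.verify(signature, manifest)
        except InvalidSignature:
            return False
        return json.loads(manifest)["chunks"] == chunk_hashes(path)

With something like this in place, a store claiming footage "was routinely deleted" would at least leave behind a signed manifest proving the footage once existed and what it hashed to.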
Thing is... it's just a day job for these people, and they probably get paid less than us devs. They have families and complicated lives, just like we do. Stuff gets in the way of doing a stellar job sometimes. I don't think lazy devs who create buggy software should be charged with a felony; they just shouldn't get promoted, or should change careers.
Having been falsely subjected to criminal charges myself, I don't really agree. Having your pizza arrive late and cold is 'less than stellar'. Being jailed is a whole different experience, and though it was not for very long and the charges were ultimately dismissed, I count myself very lucky for having had the support and resources I did. Many others are less fortunate and suffer far more.
You are probably lucky enough to be a US citizen. If you are an immigrant, an arrest is a problem every time you need to reapply for a visa/residence permit. The question in the paperwork is always "have you been arrested", not "have you been convicted of a crime". In this case you'd like to see lengthy prison sentences for all involved, the whole chain of command.
What if "lazy devs that create buggy software" literally destroy people's lives?
Blameless postmortems are fine only if steps are taken to uncover and fix the problem at the root cause. There's no excuse for systematically jailing innocent people or pushing buggy software.
In this case, what steps were taken? The video was deleted!
I've been saying for a while now that developers who work on life-critical applications should be held to the same licensing and educational standards as civil engineers. There should be a legal difference between a "coder", who can be unlicensed and not formally trained, working on non-critical software, and a "software engineer", who is educated, licensed and bonded to work on stuff such as medical, automotive, aerospace and control systems. There should also be some kind of equivalent to the various engineering organizations that sets standards and norms.
Well, maybe it should. I'm not in favor of handing out prison sentences, but that's probably what the victim faced. Eye-for-eye is a bad strategy, but no consequences for such accusations is not a solution.
I doubt that. Eight figures is probably enough for Apple to pay in-house legal staff to stay locked in slow motion litigation with this guy for the rest of his life (or at least until he's exhausted his own financial ability to litigate).
This changes the whole game, and not in Apple's favor. And it adds a few zeros to the cost for Apple of trying to wiggle out of this clean without admitting racist bias.
Welcome to America.
The white guy would get a $10-off Apple Store offer from Apple if he was lucky.
> stay locked in slow motion litigation with this guy for the rest of his life (or at least until he's exhausted his own financial ability to litigate)
How is this even possible? Why don't judges recognize this obvious abuse and stop it?
Recognize it? Probably. But it's easy for courts to become procedure-bound and even feel like they're doing you a favor, e.g. "I gotta do this by the book so it's not overturned on appeal."
The key probably isn't litigation, but more stories like this that paint Apple as having bad technology and/or racist employees... even if that's not exactly what happened.
Companies, police and politicians always freak out when someone doesn't want money, but just wants them to admit to doing something wrong. Getting $1 million is easier than getting an apology.
Getting a fake apology is easy; getting an actual admission of wrongdoing is hard. The former is embarrassing, the latter can be used as evidence in later lawsuits. This is precisely the reason why criminal prosecutions of companies result in the company admitting no wrong but coincidentally agreeing to an 8-to-9-digit fine.
I think the difference is between a fake "We are sorry you feel this way" and a real "we messed up big time on this. For starters, here's what we will do to make sure it doesn't happen again".
Well of course. Getting the state to admit it was in the wrong is a precursor to get the state to stop doing something and the state absolutely hates giving up power.
Doubtful. The maximum damages that a court would likely find for the defendant -- even if Apple admitted all the allegations -- would be much, much smaller than that. (Consider that even wrongful death settlements or injury settlements are at most a few million; and this alleged victim's injuries were only reputational.) Settlements often boil down to a fraction of the maximum theoretical damages, so I expect the value to be in the 5 figures at most.
> Hmmm, seems like a lot of people involved in this case should be fired, including the Apple Employees, SIS Employees and Police officers, regardless of whatever other outcomes there are.
How might a government systematically disincentivize this sort of malicious and negligent behavior without it backfiring? And why hasn't it happened already?
I imagine the matter has been thoroughly explored by legal scholars, but I'm having trouble finding publications. Can someone point me in the right direction?
The US has systematic incentives in place to create this kind of system: private prisons and elected local chief prosecutors. Both of those are incentives for false positives.
In the US all police accountability is local, which means that the >51% of the voters who prefer being tough on crime (more false positives in the hope of more "true positives") get what they want.
Stop punishment for simple mistakes; it starts with parents and school.
I'm in a safe environment where I can say I made a mistake and document it. And still I have a hard time doing it, because of how I grew up. It was much safer to be quiet, or to lie and get away with it, than to trust authorities or parents not to punish me.
But removing that threat would change how the system works and how people deal with mistakes.
Interesting. And yet all the people that worked for me massively respect me when I point out, in front of everyone else, a mistake I made that affected them. It's probably related to the fact that I tend to be very firm when making those decisions, but when I publicly announce my mistake in front of their peers, it builds trust.
I basically learned that because I hated the bosses who would take credit for their subordinates' ideas and then blame them when something went wrong, so I'm trying to do the opposite.
Yes, but we should still punish negligence (and in my opinion harshly). That way people are encouraged to fix mistakes instead of ignoring them like they did here.
it backfires due to conflict of interest. the set of people responsible for enforcing such a system have always been the same set of people responsible for making the mistakes. it would take a duplication of the entire judicial infrastructure and division of policing power.
this essentially exists in the united states with the dual state and federal systems, but the local systems are extremely resilient against investigations, and the federal system has no interest in actually performing this role.
in rare cases there have been FBI investigations and federal oversight of entire departments, such as the Seattle consent decree, but there are many problems with such arrangements as covered by local journalists and activists. i'll see if i can dredge up some links later.
They can't, which is why such incompetence is plaguing every single bureaucracy in existence. If the bureaucrats have an incentive to not make mistakes they will err on the side of caution and only deal with cases where certainty is high - as opposed to putting in more effort to ascertain what is actually going on.
back with links. seattle-focused. seattle's consent decree is not about correcting investigatory malpractice, but simply excessive violence and racial bias.
federal oversight means the consent decree is subject to national politics that may be alternately undermining (executive branch republicans promote impunity) or disinterested (executive branch democrats fear criticism of reform efforts).
the decree is also affected by national events and attention. the late nationally-motivated police reform efforts are running into roadblocks as officials claim those goals conflict with compliance.
notably, an internal police watchdog body created by the consent decree and charged with investigating officers, may only recommend action to the chief, who can simply reject it. the watchdog body is staffed by police and not civilians. there are some examples in the timeline above but here's one from last year.
and the police union is effectively able to write law that undermines oversight. the union contract actually supersedes city law on police accountability. this is a failure of the city council but was agreed upon by the overseeing judge.
after nearly a decade of effectively resisting federal oversight and reform efforts, spd has developed an attitude of total impunity. a protest marched to the SPOG office last year, and at no provocation was dispersed with teargas and impact munitions as officers blasted a pop country song over loudspeakers. https://www.thestranger.com/slog/2020/09/08/44432799/police-...
I disagree that it is just Americans. In real terms, language evolves on its own and people tend to imitate the pieces they find useful for one reason or another. I remember living in the old country as the Iron Curtain fell. All of a sudden, adding English words to just about any conversation was the thing to do. Some of it was because word equivalents were not quite ready yet; some of it was fashion.
I think cop porn is just another language phase in the US.
> The ID had no photo and was labeled "not to be used for identification".
Can someone explain the logic of this? What is the purpose of an ID that shouldn't be used for identification? And why would anyone in their right mind issue a non-biometric ID in the first place?
It’s basically a membership card; all it shows is your name and member number. It might even be used as a stored-value card for printers, or as a door-lock RFID/NFC card.
The article says it was a temporary drivers permit. This is what you get in the US while you're learning to drive but before you have a full license. It's usually just a piece of paper, and you can't drive alone in the car with it.
It's what you walk out with from the DMV while they produce your actual learner's permit with a photo on it. The learner's permit will look just like a driver's license, with some restriction codes on it most likely. It may possibly have some glaring mark, but otherwise look and feel like a driver's license.
Most places you won't see a temporary permit like this as they'll have a machine on hand to make the license, but I suppose not every location has that available. It is only meant to last until your photo and information gets to wherever they print the cards and your card gets mailed to you (less than 2 weeks).
Note this would vary by state and possibly even county (e.g. driving laws in Manhattan are different than in the rest of NY state). I have no idea what the NY state DMV / permit process is. But in my state it was a piece of paper with no photo ID.
I have this same feeling towards a lot of tech blogs/articles/etc. The people writing use an acronym daily, yet people not in that field have much less familiarity with the same grouping of letters. Journalism has (had?) a rule about clearly defining an acronym the first time it is used, and then just using the acronym for the remainder. Now, that definition seems left out altogether. It could be a sign of a lack of formal training in writing, but I know I was taught it in a high-school-level grammar class. <shrugs>
"Hmmm, seems like a lot of people involved in this case should be fired,"
Perhaps, or a culture should be created where there is no fear of negative repercussions for taking ownership of something? Seems like a typical organizational thing where responsibility is not clear and people are afraid to take it.
Edit: not saying this is forgivable. Just saying a critical look at the organizational level would be good.
And the key to creating such a culture is to make the consequences of hiding mistakes substantially worse than the consequences of owning them. So either you reward people for making mistakes and coming forward (creating a perverse incentive to make mistakes) or you punish those who hide mistakes especially hard. Bonus points if in the process of punishing people who hide mistakes, you also remove those people from the organization, thus increasing the proportion of people in the organization who do not hide mistakes.
>Apple then told the Suffolk County prosecutor "that the video evidence of the impostor, which would have completely exculpated Ousmane Bah, had been routinely deleted."
Why would anyone routinely delete evidence of a crime instead of making a copy of it before it was deleted? What's the point of having surveillance equipment and footage if it's just routinely deleted even when there was a theft?
>The video from an October 2018 theft misattributed to Bah in Rockaway, New Jersey, was also deleted.
Oh wow, again?
>And as it turned out, the video of the Boston incident turned up eventually – Bah's attorneys found it during the discovery process. It showed the impostor, not Bah.
They should open a criminal investigation into what happened with the video. Intentionally accusing someone they know to be innocent, taking their time and freedom is completely inexcusable.
> Why would anyone routinely delete evidence of crime instead of making a copy of it before it was deleted?
(speculation) because the prosecutors preferred to ruin an innocent man's life over appearing incompetent... it technically risks evidence tampering, but they can probably just argue it back to incompetence so they have nothing to lose.
Businesses (at least in the US) try to write the shortest data retention policies possible so that evidence is hopefully deleted before any litigation begins. If you had a written policy and you hadn't received a court order before deleting, you've won.
I think there should be a carve-out for evidence provided to the state or the defense. If you provide a sworn statement that evidence exists and then destroy it before the court is done with it, you should face contempt-of-court charges.
Making videos is so cheap and simple now that not having video material should be reason to dismiss a case entirely. It's the retailer's responsibility to collect proof and present it when asked.
>identified the individual as Ousmane Bah based on the examination of the temporary learner's permit he is said to have had on him
So essentially a kid stole another kid's ID. Because the (thieving) kid had gotten in trouble with the same ID before, it was also a match in the database.
It does seem like a true failure but the headline makes it sound like this was an AI falsely accusing someone based on pure randomness. Even without the tech, the police would have had the name of the wrongly accused person because the security guard only had the fake/wrong ID and the person's ability to match faces was not great.
> the headline makes it sound like this was an AI falsely accusing someone based on pure randomness.
no, it doesn't. the headline makes it sound like facial recognition is unreliable for some reason.
the reason, in this case, is that the Apple store and/or SIS personnel gave the recognition system Ousmane's name when they weren't able to prove it was him, and the document provided for identification was not actual proof of identification.
Further facial recognition matches assigned further blame to the misidentified person, increasing the evidence against him.
Then, because store owners and security firms really want to put away shoplifters, they claimed that video proof of the shoplifting had been routinely deleted, despite there being no written deletion policy within Apple or SIS, only for video proof that the individual was not Mr. Bah to later be found.
in this case, facial recognition tech was unreliable entirely because of the people running it.
That's the case with all "bad AI" stories.
Remember, an AI can't make a mistake until we have true AI.
Until then, there's always human error - either the operator, or the programmer.
In this case, it was obviously a mismatch between how the different operators treated the 'name' field - one operator put in an uncertain ID, and another one trusted the system to be correct.
These stories are IMPORTANT. They might help someone think twice before either using or blindly trusting these systems.
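To illustrate that 'name field' failure concretely: below is a hypothetical Python sketch of an identity record that carries its own provenance, so a downstream operator can't silently treat an unverified name as a positive ID. None of this reflects how SIS's actual system works; every field and name here is invented.

    # Hypothetical sketch: record *how* a name was obtained alongside the
    # name itself, and refuse to escalate unverified names. Invented for
    # illustration; not SIS's actual data model.
    from dataclasses import dataclass
    from enum import Enum


    class IdSource(Enum):
        PHOTO_ID_VERIFIED = "photo_id_verified"      # government photo ID checked
        DOCUMENT_UNVERIFIED = "document_unverified"  # e.g. a paper permit, no photo
        SELF_REPORTED = "self_reported"


    @dataclass
    class IdentityClaim:
        name: str
        source: IdSource


    def attach_to_case(claim: IdentityClaim) -> str:
        """Refuse to propagate an unverified name as a positive identification."""
        if claim.source is not IdSource.PHOTO_ID_VERIFIED:
            return f"UNCONFIRMED: '{claim.name}' ({claim.source.value}); no BOLO"
        return f"CONFIRMED: {claim.name}"


    # The permit in this story had no photo, so the entry would have read:
    print(attach_to_case(IdentityClaim("Ousmane Bah", IdSource.DOCUMENT_UNVERIFIED)))

The point isn't the specific flags; it's that the uncertainty the first operator knew about has to travel with the record instead of being flattened into a bare name.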
There's still a huge difference between "people put in the correct parameters and AI nevertheless got the wrong result" and "people put in incorrect parameters and AI isn't a miracle worker that can correct it for them."
You wouldn't title a story about a guy hitting his thumb with a hammer "Man attacked by unreliable hammer"
If the teen was falsely accused and the false accusations came from the facial recognition technology then the facial recognition technology was, in fact, unreliable.
The false accusations didn't come from the facial recognition technology. They came from believing the criminal using this falsely accused person's ID card which did not have a picture on it. Facial recognition tech correctly matched the face of the criminal to the other video of the criminal.
While technically true, the use of tech is still notable here because it was used to launder the mistakes of humans.
There is an incorrect tendency to ascribe intelligence to AI, which can make especially laypeople prone to assume that if an AI system makes a statement, that statement has a certain sort of validity to it that isn't necessarily justified.
Garbage in garbage out applies to AI just like it applies to any other software system, and society needs to develop more resilience to this fact.
You didn't engage with the core of the argument, which is that the veneer of AI is used to launder the human mistake.
Increasing the distance between database input and output gets in the way of human heuristics for recognizing and fixing such mistakes. Yes, the same could have happened without AI, but the probabilities are different.
Exactly, it's just that "Teen accused of shoplifting due to unreliable data entry" doesn't get as many clicks. It's not like the database itself was unreliable, or the facial recognition was unreliable.
Drum up false charges and they get automagically mapped to the face with nobody able to undo anything. They're now forever a criminal.
Honestly, it sounds extremely dangerous in a field such as criminal investigations.
This is precisely why the "fear" or rather stigma around facial-recognition needs to go away. If it weren't there, it'd have been expected and granted that the government's job is to have valid, verified and correct biometric information on the citizen (because they're generally the ultimate/final custodian of people's identity), and the facial-recognition would have to match against that verified data, instead of some random video or fake ID.
So instead we have scattered silos of partial and probably incorrect data in various places. FBI has a bit, Police department in state X has one, State Y's DMV has something wrong because X years ago this person was a victim of identity fraud, etc etc. It's a giant mess and no wonder the police and various entities in this example failed utterly. They're ultimately all victims of a system that has no single source of truth, yet we're tasking them with using "tech" to solve crime. So instead of fixing the source of truth, we blame the tech.
The stigma is there, because if it was in one centralized database, the issue would be much more pronounced. It is a lot easier to destroy a company than a government.
> in this case, facial recognition tech was unreliable entirely because of the people running it.
Wrong.
In this case the facial recognition tech itself was arguably not unreliable. The human operators were the unreliable factor.
The recognition tech reliably tagged the impersonator as Ousmane exactly as it was instructed to do. The system worked exactly as intended. It is the human operator whose intention was wrong.
This has nothing to do with AI being unreliable and everything to do with the employees of this SIS company going "yep kid's black, he's the one who did it" without half a thought.
Separating the two things (operator and technology) is merely a technicality; in reality they are not transparent to the general public and should be treated as parts of the same system.
If AI™ only happens in a black box behind closed doors, with people getting the opportunity to make inconvenient results disappear, and that is basically the way it is being established, then no, it is not reliable.
The operator is part of this whole system, if it can't be used without the unreliable operator, the tech is not reliable.
> In this case the facial recognition tech itself was arguably not unreliable.
You could train a fake money detection AI with cat videos from youtube, it would probably do something with images of cats. However I hope no one would try to argue in front of a court that the resulting AI would be reliable at detecting fake money. In this case a stolen "do not use as id" slip was apparently good enough to serve as input validation, I would be surprised if their database wasn't overflowing with bad data.
The AI correctly identified that the faces of the people in the two videos were the same person. It did its job. The person responsible for correctly identifying the person in both of these videos failed to do their job correctly.
IMO, the operators of the AI are part of the production execution of that AI. There were no proper constraints in place to prevent this from happening, and it wasn't fixed after the first time(s) that it occurred.
Sure you can argue that the pure tech worked as designed, but the system includes the human element as much as the tech element.
The operators made it worse of course, whether enabled by AI or not. We've been dealing with this exact same issue, without AI, for decades when it comes to identity theft.
These "operators" that you say are part of the production execution of "AI" are also part of every single institution that led up to this person being here and existing. Everything from registering their birth, capturing their DMV details, capturing their details when enrolling at a school and opening a bank account, taking their photo for the driver's license, etc. All gooey, imperfect, corruptible and fallible people that taint the "chain of custody" about the person's identity.
This has everything to do with AI being unreliable. AI is unreliable. It was unreliable yet again here. Anyone relying on AI as the evidence in a prosecution needs to have their face rubbed in that ignorance. AI is as unreliable as any human being. In fact, it is more unreliable, because people understand the limits of human reliability and look for corroboration, examining evidence with a mind on its potential shortcomings. Yet there is still this pervasive and pernicious belief that "computers don't lie", which is completely and utterly false, and worth repeating often, with emphasis, until it is part of humanity's collective understanding of the world to the same degree that water is wet.
"Computer says x" by itself counts for nothing. Evidence presented like that, on its own, should be a huge red flag that something nasty is going on, via either malice or sheer incompetence.
The AI was reliable. It correctly identified that the individuals in two videos were the same person.
The problem was that one human recorded a name that they should have known was not real, and another person read that name in a report and arrested someone with that name. The story would have been no different if a human had correctly recognized the same person in both videos.
You seem right to say it is not a case of AI falsely accusing someone based on pure randomness, but that is no reason to dismiss what happened here. The most troubling aspects are that SIS added what amounts to false information to its databases, fabricated multiple false cases against Bah, claimed to have evidence it did not have, used it to detain him and hand him over to the Paramus police for arrest (who failed in their own due diligence, partly on account of this false information), and then attempted to hide what it had done, including, apparently, destroying evidence - evidence against itself, not Bah.
If a DNA test identified the wrong suspect because someone mixed up the names when entering them into a database I wouldn't say that there was "unreliable DNA tech".
Maybe other people didn't read the headline as I did. But if I hadn't bothered to click on it I 100% was just going to assume it was because the tech itself was unreliable and they picked out someone who looked like him.
Perhaps if I wasn't read up on the debate around this I wouldn't have thought that - but I imagine most people would assume this with confidence if they saw the headline on their feed.
You are making a valid, but relatively small, point about how the article is written. People who think they have learned something important from a headline without verifying what they think they have learned by applying critical analysis to the article are likely to be misinformed in all sorts of ways regardless of what they do read.
Note that we are both talking about other people - we, of course, are not like that!
Not just another kid, but a friend. Ousmane Bah is an honors student, while Mamadou Barrie is potentially responsible for a string of thefts and the impersonation of his friend to the police.
> "The arresting officer was able to identify the impostor as Mamadou Barrie, a friend of the Plaintiff, who apparently stole the learner’s permit from the Plaintiff," the complaint says. "These arrests specifically [noted] that Barrie had pretended to be Ousmane Bah."
> The detective subsequently submitted an information request "to the NYPD’s Facial Identification Section (FIS), which identified the photograph as potentially depicting two people, one of whom was purportedly Ousmane Bah – and the other was the actual thief, Mamadou Barrie."
The headline is accurate because that's one subject of the lawsuit. If I'm understanding it correctly after skimming through the filing (read especially pages 6 and 9):
1. The plaintiff argues that in the Boston case (page 6) the Apple store took less than 10 minutes to identify the shoplifter, so they probably used facial recognition tech. Note the shoplifter left the store; Apple found the incident and reported it to the police within 10 minutes, without actually detaining the person.
2. I'm guessing the facial recognition was actually accurate, because it was based on their previous security video, not the plaintiff's real ID photo (which Apple probably doesn't have access to). So the facial recognition just linked one person with previous security video properly, and the identification of that person was based on a previous record made by an employee.
3. Thus there is nothing wrong here with the facial recognition, but the title will definitely evoke fear and play on people's emotions, since most people will not see the details.
The AI falsely accused someone because the AI was fed garbage data. If you’re going to build an algorithm that can levy such serious charges against people it certainly seems negligent to create a process that allows random unverified training data to be input into the system.
There are real people who suffer real consequences because of the programs we write. When things go wrong we don’t get to point the finger at the algorithm and the users and take no blame as the developers.
I've been having this discussion a lot lately. OK, let's say a crime occurred and you have a hard time identifying the suspect. A colleague of yours in the precinct walks over and sees the picture on your screen. "Wait a minute, that face looks familiar." He goes to the database and indeed sees a face that looks close enough. Now what is the next step? You don't toss the person in the police van and celebrate the conviction. Instead, you investigate.
"Where were you that evening? Do you own an outfit matching the suspect's?" Etc. This kid would have been quickly cleared. It's already terrifying to be questioned by the police when you are innocent. How do you think he felt when they said they had video evidence of him shoplifting? They had already made up their minds.
Now replace your colleagues with an AI. Simply because you have a match does not mean you have the suspect. Just over a year ago we were discussing the same issue: https://news.ycombinator.com/item?id=23628394
I shared an office space with a company that provided video analysis and recognition as a commercial expert-witness service. (This was in the UK.)
They had a system of levels of confidence they would use in their reports when identifying someone.
The highest level of confidence was achieved when a close friend or family member identified the person in the picture or video.
There were progressively lesser levels of confidence for people less close to the purported subject of the image.
They also had a system of points of similarity/difference, but they normally only used that to establish that the same subject had been observed by multiple cameras, rather than to establish the identity of the subject.
> A colleague of yours in the precinct is walking over and sees the picture on your screen. "Wait a minute, that face looks familiar." He goes to the database and indeed sees a face that looks close enough. Now what is the next step?
That depends on if the goal of those using the technology is arriving at the truth or if the goal is naming a suspect who you can reliably stick a guilty verdict on.
Apple and SIS have a qualified law enforcement privilege that allows them to err in store security-related accusations and not be sued for it. However, if they exhibit "reckless disregard for the truth" – ignoring obvious facts, for example, they lose that privilege.
Curious whether they just lose that privilege on a case-by-case basis, or whether they lose it for all future errors in general.
If it's the latter, then Apple is in serious trouble and they'll probably end up paying this kid big time in a sealed settlement so they don't lose this privilege.
I am fascinated by this subject, because of how scary misapplication of AI can be. But this has very little to do with AI/facial recognition tech itself.
The name was picked up from a lost learner's driving permit that was found on a suspect while in custody. The permit did not have a photo, and this stolen name was then propagated further and attached to other cases. They used a paper without a photo as identification. It actually says right on it that it is not to be used as identification. WTF?
> In addition, it asserts Bah's apprehension was in part due to the application of unreliable facial-recognition technology in the shoplifting incidents in New York.
Vague statement; unclear what exactly the role of the facial recognition was.
> The detective subsequently submitted an information request "to the NYPD’s Facial Identification Section (FIS), which identified the photograph as potentially depicting two people, one of whom was purportedly Ousmane Bah – and the other was the actual thief, Mamadou Barrie."
Seems like the software did what it had to: identified two people.
The article also says that they were friends, so it is not surprising that they were both present in some frames. That can be a source of some confusion as well, though it's not tech-related.
> The video from an October 2018 theft misattributed to Bah in Rockaway, New Jersey, was also deleted. Apple and SIS are said to have told the New York court that neither firm has any written policy on video retention.
Yeah, this is fishy. I am very doubtful they don't have any written retention policy.
Note that the Connecticut learner's permit does have a photo ID on it - the article gets it wrong, as the complaint states that the learner's permit he had was actually a New York learner's permit[0].
However, on that note, does anyone have a photo of such permit? All Google images of 'learners permit new york' have photo IDs; the closest thing I can find is the post-test temporary full license[1], which doesn't include information like the person's height. The exhibit is also restricted on Pacer[2], likely because it contains PII.
>However, on that note, does anyone have a photo of such permit?
I have a temporary non-learners license but due to PII I'm not taking a photo of it. It is a printout that contains everything except a photo (so name, height, eye color, etc.). It's used as a temporary license until the real one is mailed to your address. This is distinct from the post-test printout which would be, I'm guessing, used in conjunction with a learners permit license. The temporary license printout is used on its own without another permit which is why it contains more info.
Run a security company, contract with public officials, you're most of the way there. Government can be a powerful social technology at best, but at worst it's just a protection racket.
Apple's chief of security was recently indicted on a charge of bribing the Santa Clara County sheriff, and the case was dismissed last month. There are 3,006 counties in the US, and one can let the imagination run wild :)
I don't want to take anything away from your argument, but that case is a bad example. There was no evidence to show corruption, and it was dismissed as such.
Now, as then, facial recognition is being used as clickbait, when all that really happened is that this guy stole an ID, got caught shoplifting in an Apple Store with that ID, and thus had the information on the ID incorrectly associated with his face—either in a facial recognition database, or simply by the photo being used with this stolen ID.
My understanding from the previous story was that it was not Apple that took the information from the ID, but rather the police, who gave that information to Apple. It looks from this story like there's also a third party involved, SIS, the security contractor, which, of course, further muddies the waters.
Not only is it obvious that the technology is not working, but this also shows blatant incompetence on the part of the users of this tech. This no longer counts as "the algorithm is flawed". The whole system is wrong.
A mistake is one thing, subverting safeguards and deleting data is something else entirely.
The developer who developed the system to falsify EPA results for Volkswagen went to jail for 3 years. The judge specifically cited that he had a professional duty to not commit illegal behavior but chose not to because he didn't want to lose his job.
Based on how key software is in so many things, we're going to be faced with a choice soon:
a) As an industry, clean house and hold ourselves accountable to better practices, uncovering and fixing honest mistakes, and punishing misbehavior; OR
b) Do nothing and let regulators write the rules for us... which we know will be outsourced to the big consultancies like Deloitte, BAH, PwC, KPMG, etc.
If we don't all get on board with A, the alternative will be ugly.
"The lawsuit contends that Apple and SIS exhibited reckless disregard for the truth by misidentifying Bah as the perpetrator of multiple shoplifting crimes at iStores..."
The article jokingly refers to many things as i(Anything); iGiants in the beginning, for example. It’s an odd and poorly executed attempt at humor/fitting in, but that’s what it is.
Hope that kid wins so much money he can just drop out of college now. This is such blatant malicious activity on both these companies' parts: deleting video evidence, and confirming with the police based on a permit not allowable for identification. If you can't even use the ID to vote, why do you get to use it to incriminate someone?
This whole subcontracting of policing out to private companies is going to end up as a constant state of witch trials. At least the state is the law. These two companies aren't in the slightest.
Your answer doesn't address the question: "why do you get to use it to incriminate someone?"
But the point remains: if an ID card isn't considered valid for voting, then why should it be considered valid for identifying an individual for a crime? Either it is a valid ID (identification record) or it is not.
If you showed up to vote with a learners permit, you would get some funny looks and then be shown to the voting machine. Nobody asks you for ID at all. That's what the current political controversy is about – whether to even require any ID.
> "why do you get to use it to incriminate someone?"
The thing is that it didn't: he's getting hassled due to warrants in his name, but charges against him have always been dropped when the DA has seen 'oh, this isn't him' - not because these DAs are nice people who see the misunderstanding, but because the evidence is not strong enough to prove he committed the crime. That's why the case is for "320 Assault Libel & Slander" against Apple & the security company, not false imprisonment against the police department(s) (who have reasonable cause to detain suspects).
The learner's permit was just evidence being used to ID someone. Everyone is getting moronically hung up on whether it is or isn't an ID for legal purposes. The fact that it has ID info on it is immaterial. It could have been a debit card and the same thing could have happened. It was just a document with the name of a guy who matched some superficial criteria. Following up on the lead was where everything went off the rails.
This doesn't address how being able to vote has anything to do with it. You can't vote with the picture-included permanent learner's permit or a real driving license in New York either if you're under a certain age.
It gets worse - "a different individual with the same name "Ousmane Bah," this one a resident of Willingboro, New Jersey, showed up for the summons. He was not the thief, the complaint says"
Ah, the US, the only place in the world where judging a social situation by the skin colors of the participants is not just acceptable, but also the way seen as most virtuous.
Maybe, maybe because pointing out such a thing is evidence of a bigger problem (systemic discrimination & racial bias) that has been demonstrated not to be exclusive to the realm of human investigators, but to exist also in a lot of facial recognition software (which both trains on a disproportionate number of black faces and is poorly able to differentiate between distinct black faces).
It isn't evidence, at best it's conjecture, and in this instance a dumb conjecture.
This isn't a case of a facial recognition software failing to distinguish two black faces or people going after a black man disproportionately, this is a case of a repeated shoplifter giving a fake name, and the person whose name was given being arrested.
There are real cases of systemic discrimination and racial bias having terrible effects on people's lives; crying wolf makes it harder to get people to recognize and address the actual problems.
You really think the way this case was handled (referring to the wrong person being arrested, repeatedly) doesn't have anything to do with racial bias?
They didn't go out to find a random black man to harass because they "fit the description", they went out of their way to find and arrest the specific person who was indicated to have been involved with the crime.
Also this person was not arrested repeatedly. He was arrested once and then had the charges dropped when it was clear he wasn't the right person, and then received a mailed notice of a warrant for his arrest from a different county two weeks later for a separate offense (both instances of shoplifting falsely attributed to him). In a third precinct, after a third incident, a different person with the same name was summoned, and also quickly identified as not being the right person.
This is a case of the left hand not knowing what the right is doing. It is a problem that can have serious consequences, but it is not related to racial bias.
From what I know about the US, racism is one of your lesser issues. It's just very, very loudly overemphasized, because it distracts from the failings of your social, healthcare and governmental systems. The poor suffer the most from this, and take a guess who is disproportionately poor.
I live in the US, and your comment has accurately and succinctly summed up the situation. It's good to know that at least some outsiders have the correct perception. Like, seriously good to know. Thanks.
It's one of the larger issues that could reasonably be amended. The more fundamental issues regarding capitalist corruption, ineffective government, and unsustainable growth, while much bigger, are an effectively unsolvable side effect of our society's affliction, and furthermore apply to the entire planet.
Show me a country where democracy properly, comprehensively works to constrain the cancerous appetite of capitalists. Yes, some European countries, or Japan, or some other outliers do it much better, but not well enough for their capitalism to be fundamentally different from that of the US.
Furthermore, on a global scale, the trends of increased energy demand, increased population growth, increased carbon emissions, and increased ecosystem degradation, to name just a few of the main actors, are overwhelmingly upward, and hence these are global issues that affect the sustainability of the entire global society. It doesn't matter if Luxembourg or Sweden develops a socdem utilitarian net-zero utopia; they are just a blip on the radar. All of the third-world countries that the first world exploited over the past centuries in order to fuel its own growth & industrialism are now all fervently contributing to the above trends, to the extent that all of the first world's recent attempts to dial back their consumption, emissions & so on are laughably futile.
The only possible way to change these trajectories would be for the majority of capitalists across the world to all drastically alter their priorities and how they operate, but this is absolutely not plausible, given that no corporation has ever not sought to increase its profit and growth every quarter; if they somehow hypothetically did, the system would come tumbling down and simply cause collapse even sooner.
Switzerland, Denmark, Sweden and Norway are actually doing pretty great.
They all have a good standard of living, healthcare, not much crime, and the state helps you with social services if you mess up. This is quite different from the late-stage capitalism that is currently going on in the US...
Apple is the company that talks about being virtuous. You could even say that they are sanctimonious. Just look at Tim Cook's speeches on virtue, privacy and tracking. Yet their actions here, judged by their own framework, are the opposite of their own words.
Just that LP (loss prevention) has rules: you can't get physical with them and hit them, you can't use a weapon, and if they have a weapon you let them go. You are not a police officer or security officer; you are a store employee or contractor who is disposable should an incident happen.
>>Apple and SIS have a qualified law enforcement privilege that allows them to err in store security-related accusations and not be sued for it.
This pretty much sums up modern-day Apple: acting above the law while being righteous. And it wasn't an honest mistake or a one-off incident. They have been doing it repeatedly. It just reeks of hypocrisy.