EFF and Eight Other Privacy Orgs Back Out of NTIA Face Recognition Talks (eff.org)
131 points by jdp23 on June 16, 2015 | 73 comments



Hmm, an alternative strategy might be to start an EFF program to photograph every law enforcement officer and add them to a facial recognition database. Have volunteers take pictures of people showing up at the police academy, people in uniform doing their jobs, and people going into and out of employee-only entrances to law enforcement facilities. They could make this database available for DefCon 'spot the fed' sessions and for other interested parties.

That will certainly keep the conversation going, although in a less civilized way. Perhaps it could bring enough pressure to bear on the NTIA to get them to reconsider basic privacy safeguards.


I would volunteer my time to that program.

What really bothers me is that nobody outside of our bubble will take this seriously until someone hacks in and downloads a copy of the database and posts it online for all to see. Only then will most people realize what's wrong with having your face in that database despite the absence of a conviction.


I would volunteer my time but honestly, I'd expect extremely hostile reactions and quick and quiet legislation at the city level to make it stop.


I'm on board. It's all fun and games until it's turned back on them.


I believe the EFF and the other consumer groups are grandstanding to get some publicity in this case. I have listened to almost all of these sessions that the NTIA facilitated, and the company I work for even presented at one of them last year. I think it's important to get some points out which I haven't seen in any of the other comments:

1. Contrary to what the bulk of the EFF's post implies, the process led by the NTIA has nothing to do with the government's collection and use of facial recognition data. This process was designed to come up with a voluntary guideline that commercial facial recognition companies could choose to adhere to or not. I agree that limits need to be set for government uses, but this process was never about that from the beginning. The NTIA did not have jurisdiction over anything involving the federal government, law enforcement, military, etc. This is more akin to the voluntary privacy standards that companies on the web will agree to on their own. Once they do, if they violate their own policy they are subject to sanction by the FTC. The guidelines coming out of this process were intended to be voluntary, so companies would not have to comply if they didn't want to. However, the idea was that the working group would come up with something that the industry would volunteer for, and by doing so give some level of protection to consumers.

2. The industry representatives were not opposed to opt-in for some scenarios, but they were opposed to mandatory positive opt-in across the board without exception. There are a number of scenarios where opt-in doesn't make sense. For example, imagine a system that used facial recognition in a retail setting to detect shoplifters and alert management to their presence. If you had been caught stealing in the store in the past, you would be enrolled into the system by the retailer. Each person entering the store (as seen on the already existing surveillance systems) would be checked against that database of shoplifters. How realistic would it be for the retailer to get written opt-in consent from the criminal?

3. In previous sessions, the consumer advocates had agreed that face detection was out of scope, while face recognition and facial analysis were in scope for the guideline. At the last session, they changed their position and wanted to bring face detection into scope. The difference: face detection just finds a face in an image, like your smartphone does in the camera app, whereas face recognition finds the identity of a face in a picture ("Who is this?" or "Is this Bob?"). Can you imagine getting opt-in consent to draw the little yellow box around the faces in the camera app? (A minimal sketch after this list illustrates the difference.)

4. The NTIA did an outstanding job in an attempt to bring this process together. They organized the sessions and brought in numerous speakers from both sides (consumer and industry) as well as from the education and research sectors. We even had representation from Canada that spoke to a similar process they went through there. The NTIA wasn't forcing any standards on anyone, they simply offered their services to organize and facilitate it as they have done in other similar multi-stakeholder initiatives. The process went on far longer than the NTIA (and any of the participants) had expected, but it was making slow progress.
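
The sketch referenced in point 3, in case the distinction seems abstract. This is everything face *detection* does: it draws boxes and never asks whose face it found. A minimal OpenCV example; the file names are placeholders.

    import cv2

    img = cv2.imread("photo.jpg")  # placeholder input
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Stock frontal-face cascade that ships with opencv-python.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Draw the "little yellow box" (yellow is (0, 255, 255) in BGR).
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 255), 2)

    cv2.imwrite("boxed.jpg", img)

Recognition is the extra step detection never takes: comparing the cropped face against a database of known identities.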

I don't know if this process is now dead or not. I suspect it will go on; it's just that now the consumer groups won't have a voice in shaping it (by their own choice), which is a shame. It's also possible that the show of force by the consumer advocates will push the industry to bend more in their favor. Either way, it would be good for consumers to get some kind of guideline out there. It could always evolve and be revisited in the future as the technology changes.


Facial recognition software provides a probability that a record and an image are of the same person, and that process will without exception create false positives. An anti-shoplifting system would thus need several safeguards to avoid turning into a no-fly list.

How would such a system work in a fair society governed by law? Would suspects not found guilty in a court of law be in it? How would false positives, that is, people whom the computer incorrectly matches to someone else, get a chance to clear their name (face) and be compensated when it happens? Should criminals who have served their time still be locked out for life and denied entry to stores that share the system?

As with vigilantes, I have a hard time seeing how such a system could be created without massive abuse. Let's assume that 1% of scanned people will be false positives, and say 5% of records will be falsely added by clerks (they are not trained to be cops or judges, and some will abuse the system to intentionally hurt people they know). How many people would be hurt by this system if it were shared by the majority of stores in a large town?
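
Rough numbers under those assumptions (the foot traffic is a made-up figure):

    false_positive_rate = 0.01    # 1% of scans match the wrong person (assumed)
    bad_record_rate     = 0.05    # 5% of records falsely added by clerks (assumed)
    daily_shoppers      = 50_000  # assumed foot traffic across the town's stores

    # Innocent people wrongly flagged per day from matcher error alone:
    wrongly_flagged = daily_shoppers * false_positive_rate
    print(f"~{wrongly_flagged:.0f} false alarms per day")
    # -> ~500 per day, roughly 180,000 per year, before even counting
    #    "correct" matches against the 5% of records that should never
    #    have been in the database in the first place.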


>>The industry representatives were not opposed to opt-in for some scenarios, but they were opposed to mandatory positive opt-in across the board without exception. There are a number of scenarios where opt-in doesn't make sense. For example, imagine a system that used facial recognition in a retail setting to detect shoplifters and alert management to their presence. If you had been caught stealing in the store in the past, you would be enrolled into the system by the retailer. Each person entering the store (as seen on the already existing surveillance systems) would be checked against that database of shoplifters. How realistic would it be for the retailer to get written opt-in consent from the criminal?

I agree with the EFF in opposing this type of system. Retailers should not be allowed to "enroll" people in a Scarlet Letter system. The credit system is already fucked up enough, with very limited recourse against those who abuse it to extort consumers.

Retailers should not have that kind of power...


The faceprint shoplifting example is, when duly considered, a frightening scenario straight from dystopian cyberpunk -- individuals banned from jobs, stores, private facilities, public spaces -- forever, automatically, under an opaque, corporate-controlled consumer identification / credit system.

Your employer and industry are so far on the wrong side of the ethical divide that there's simply no way to reconcile ethical standards and what you see as justified non-opt-in faceprint collection.


> Contrary to what the bulk of the EFF's post implies, the process led by the NTIA has nothing to do with the government's collection and use of facial recognition data.

If you're calling out the EFF's hypothetical worst case like this, you certainly wouldn't use the hypothetical best case of shoplifter recognition to provide argumentative cover for other no-opt-in products, like your company's IRL analytics offering, would you?

> The guidelines coming out of this process were intended to be voluntary, so companies would not have to comply if they didn't want to.

If they're voluntary, what is the purpose of complying with them? Why would anyone in industry adopt the standards? Seriously. What good will they do? Why didn't businesses voluntarily walk away from the table first? Should I believe they're honestly negotiating to voluntarily give up on existing/potential products to make the EFF happy?

What reason at all do I have to believe the industry isn't just seeking a set of milquetoast "consumer protections" they can make a big deal out of living up to, while not providing any real consumer protection?

> I don't know if this process is now dead or not. I suspect it will go on; it's just that now the consumer groups won't have a voice in shaping it (by their own choice), which is a shame.

So the industry couldn't steamroll the consumer protection groups and get a flimsy set of standards, and now it's those groups' fault when consumers aren't protected?

Shame on you for pretending they're choosing not to have their voice heard. It was heard; then industry saw money and decided to ignore them.

> Either way, it would be good for consumers to get some kind of guideline out there

No, if the guidelines are weak they're worth less than nothing. The industry will have a standard they can "live up to" without impacting their business model. Then when consumers complain, they'll wave the standards around as if the boundaries of acceptable use of facial recognition were already settled and consumers already protected.

tl;dr lulz @ "The NTIA did an outstanding job in an attempt to bring this process together"


> We even had representation from Canada that spoke to a similar process they went through there.

I'd like to learn more about the process Canada went through. This is one of the top search results; is this it?

This is the Privacy Impact Assessment Report from Passport Canada. http://www.ppt.gc.ca/publications/facial.aspx?lang=eng


This brings up something I've been wondering about for a while: the legality of masks and other devices for identity protection. I have heard of an increasing push, supposedly due to protests and anon activity, to make publicly wearing masks illegal, but I think it's a slippery slope to go down. If in a few years' time I have to assume that whenever I'm in public my face is being fed into recognition systems, why should I not have the right to wear a mask or something similar?

Of course the knee-jerk argument is that wearing masks (both digitally and offline) creates a different personality that is more willing to engage in illegal activity, but for me, a staunch constitutionalist, that still doesn't justify making it illegal.

This is, at bottom, about the reduction of anonymity in the real world just as much as in the digital one.

One more tool of control in the belt of the oligarchy.


I'm a researcher at a face recognition company. A big pair of dark sunglasses still does a pretty good job of obscuring your identity (although obviously not as good as a mask). Adding a hat with a brim pulled low helps out even more.


Don't forget that NIR cameras (often used for iris recognition) can still see through those glasses. But I agree, challenges like those in the AR face dataset [1] will usually stump most deployed systems.

[1] http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html


True, although the commercial deployment of NIR face recognition isn't that extensive (I'm not saying it doesn't exist, however). I know that academics have been trying to use our company's software with NIR images, even though it wasn't trained for them at all.

I do think that NIR will eventually become more important (especially for car applications) and really the main stumbling block is massive sets of training data for it.


Yeah, the NIR wavelength makes the skin lose a lot of its discriminative power. I don't know how familiar you are with the FOCS, SCFace, or PolyU datasets, but they're a good start for training (of course very few papers use them; they'd rather gain an extra 0.1% on LFW).


What about mirror shades?


> This brings up something I've been wondering about for a while, the legality of masks and other devices for identity protection.

Canada just recently criminalised the act of wearing a mask at a protest. 10 years maximum sentence. For wearing a mask. Seriously! (Nice democracy they've got goin' up there.)


In my town, street performers cannot wear masks. It was originally created as a law to help arrest KKK demonstrators here in the south. But yeah, I wouldn't get too high-falutin' at the expense of Canadian democracies. Granted, we did drop the case against Spider-Man...

http://www.knoxnews.com/news/knoxville-drops-spider-mans-mas...


What about viruses?


CV Dazzle [1] was an art project for confusing and evading facial recognition algorithms, inspired by WWI / WWII dazzle camouflage designs [2].

[1] http://cvdazzle.com

[2] https://en.wikipedia.org/wiki/Dazzle_camouflage


First, if it is illegal then we should work at getting the law changed.

Wear a surgical mask, that seems acceptable in this day and age.


>if it is illegal then we should work at getting the law changed

It seems like most of the privacy issues at hand today are already against the law -- the problem is enforcement.


What about antivirus/pathogen masks?


The problem is that by then it might be too late to fight it. They'll get so used to identifying everyone in real time, at all times, that someone wearing a mask or trying to hide from the "system" in any way will look very suspicious and will need further tracking through other means. And the government, from the police to the FBI, will get very used to it, because "how could we ever catch the child kidnappers without that ability?!"


> The problem is by then it might be too late to fight it.

It is never too late to fight something. Everything changes with time. It might be too late for an easy victory, but if you study how things change and evolve over decades you'll see that's just a self-fulfilling prophecy. Many things that people said were "inevitable" have failed, and many existing trends get reversed. For a couple of recent technological examples, just look at the failure of the "inevitable" SOPA, the collapse of the Comcast and Time Warner merger, and net neutrality actually becoming regulation instead of being destroyed.

If you believe you can change something you often can, if you don't believe you can change it then you definitely won't.


Isn't it better to work on having a government you can trust rather than fooling yourself by building a castle made of sand in the path of an incoming tide?


Why not both?

We should retain the right to conceal our identity in reasonable ways, and governments should enact laws that protect their citizens' privacy from biometric recognition technologies.


>Why not both?

Reality. Information leaks all over the place, trying to legislate your way to privacy is like trying to bail out a boat with a fishing net. Public information is by definition public, so we could all walk around with wide brimmed hats and masks or we could be realistic about the challenges ahead.


What? With that attitude do you not believe in regulating anything? No laws are perfect but that's not a valid argument against having them.

There's a big difference between a company legally operating a facial tracking system and a company illegally operating one.

In the case of the illegal one, they have to hide it; it's much harder to profit from and thus would be less widespread. In the case of the legal one, they can freely advertise their services and easily set up their system anywhere local governments don't create hurdles.


You can make all the foolish laws you like and there will still be data and cameras everywhere in public places. Where the data comes from is irrelevant, it will be available.

Edit: Are you going to ban cameras in public places? Facebook? Cellphones? Satellites?


Due to the very nature of government, I don't believe it is possible to have a government that I completely trust.

There are different degrees of trust though, and I would like one with a higher degree of trust earned.


You're fooling yourself if you think you can ever trust government...


What does that even look like? It seems like a stretch to believe that we can create a government that we'll always be able to trust.


A government I can trust can get voted out of office tomorrow.


True. Also, one man's trusted government is another's opposition. Accountability would have been a better word.


> Communities such as San Diego, California are using mobile biometric readers to take pictures of people on the street or in their homes and immediately identify them and enroll them in face recognition databases.

In their homes? I'm pretty sure this is illegal. Or is it not if they are visible from a public area?


So imagine a scenario where a delivery company outfits its delivery people with body cameras "for safety" (a legitimate concern), but then sells access to the feeds to a biometrics company as an additional revenue stream and lets it extract data from that video.

They know the address being delivered to, and now they can map the presence of specific biometric data to that location and time.


Doesn't matter. In their homes, or in their gardens, or when they are going out and have one foot on the street.

Illegal collection can't reliably be prevented. Get a photograph, extract the biometric data, enter it into your database, delete the photograph. Job done.
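
For a sense of how short that pipeline really is, here's a sketch using the open-source face_recognition library (the database is a stand-in dict; file names and labels are invented):

    import os
    import face_recognition

    database = {}  # stand-in for a real datastore

    def enroll(photo_path, label):
        image = face_recognition.load_image_file(photo_path)
        encodings = face_recognition.face_encodings(image)
        if encodings:
            database[label] = encodings[0]  # 128-d faceprint, not a photo
        os.remove(photo_path)               # delete the photograph; job done

    enroll("passerby.jpg", "subject-0001")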

Maybe the only way to get control back over this is to require these databases to be open source (or publicly available) and to make it illegal to hide them. Citizens must also have the right to be removed from them.


At what point do we stop trying to prevent unpreventable illegal activity, and try to put systems in place for dealing with its undeniable existence?

I'm not certain how such systems would look in this case (everybody wears Guy Fawkes masks in public?), but this debate reminds me of prohibition vs. harm reduction in the War on [People Using] Drugs.


I don't think the comparison holds. It's not as though there are thousands of generations of face recognition pushing back against a wrong-headed prohibition; this sort of panopticon garbage is a clear violation of societal norms. I don't think we should accept that misbehavior from artificial persons is a given; even as we look for a technical solution, we should also be pressuring legislatures to amend laws to prevent this.


Does the "Right to be Forgotten" cover the datacenter neurologic?


Then you would still need a reliable auditing system to verify that what they're deploying is what's in the open source repository. The government would solve that by creating the "Open Source Auditing Department" or some such. And at that point, who trusts them anyway?


I trust democratic governments and the justice system a bit more than companies. The Bill of Rights (or its equivalents in other countries) is a much nicer read than any EULA.


Correct - they have an expectation of privacy in their homes. Same deal applies if you use a creeper telephoto lens.


That's good to know.

Just out of curiosity, do you know how that interacts with indecency laws? Like for example if someone is having sex in their home but someone is able to see it from the sidewalk through a crack in the curtains? This is purely hypothetical of course.


IANAL.

My understanding is that it comes down to a "reasonable person" doctrine - i.e. would a reasonable person consider the location to be private.

Another way of thinking about it is if there is a reasonable chance that you will be seen by people engaging in normal everyday activities.

Personally, I would say that peeking through a small gap in the curtains would not qualify as normal behavior. Someone doing that is clearly snooping.

On the other end of the spectrum, having sex in front of a window that fronts a public sidewalk with no curtains would pretty clearly be public indecency.


It depends on a LOT of things, jurisdiction being the biggest part, so there's no universal answer, but cases like this should be indicative:

http://www.washingtonpost.com/wp-dyn/content/article/2010/04...


Interesting case, thanks for sharing.


Who are the companies that are pushing this? The article, surprisingly, doesn't say.


Yes, and?

With OpenCV, I already have face detection, face recognition, eye detection, cascade creation (for detection of any feature I wish), a retinal model in conjunction with retinal scanning via webcam, and other power tools in that bag.

And it's all open source. It's easy enough that I made it myself: https://github.com/jwcrawley/uWHo
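
For instance, the recognition half is a handful of lines with the LBPH recognizer from opencv-contrib (the training files and labels below are placeholders):

    import cv2
    import numpy as np

    faces = [cv2.imread(f, cv2.IMREAD_GRAYSCALE)
             for f in ("alice1.png", "alice2.png", "bob1.png")]
    labels = np.array([0, 0, 1])  # 0 = alice, 1 = bob

    model = cv2.face.LBPHFaceRecognizer_create()  # needs opencv-contrib-python
    model.train(faces, labels)

    # Lower distance means a closer match to an enrolled identity.
    label, distance = model.predict(
        cv2.imread("unknown.png", cv2.IMREAD_GRAYSCALE))
    print("best match:", label, "distance:", distance)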


This isn't about face recognition technology, it's about face recognition data. The EFF and friends want to come up with standards for storing, securing and sharing face-recognition data between companies and government entities. These standards make sense—and need some minimum level of scrutiny—regardless of how easy facial recognition technology is to implement.

Writing software that parses Social Security numbers into lists is even easier than face recognition, and yet we still want standards around handling the information!

Okay, that was a slightly facetious example, but the point stands: it's a question of securing data, not technology.


Who cares if it's easy or not? Consumer/passerby protection rules are not going to stop you, a person with nothing to lose. This is about the use of such technology in a commercial setting.

It's a bit like my pirate radio station. I have it, I broadcast to a very limited distance, whatever, I don't really worry about the white vans. If I put a 400 kW TV transmitter on top of a mountain, that's a very different story.


Let's take the same analogy...

So, if I buy/build a 1GP camera array, and drive it around the city, capturing all the data, "that's a very different story"?

Your example is in violation of the FCC. Mine violates nothing.


> Your example is in violation of the FCC. Mine violates nothing.

Well, yeah, that was kinda the idea. My point is that it doesn't matter what is technically possible if consumer protection rules were to come into place. The regulatory environment would curtail the largest (ab)uses of the technology, as entities that would use it wouldn't want to take the risk.

edit: So, under hypothetical regulation on facial recognition, a watchdog agency isn't going to know/care about your driveway camera, but your business based on using that 1GP array to sell person tracking and demographic data would raise some eyebrows.

Was your point about the ease of implementation a statement that you'd do it anyway if the regulations came into place? That you'd put your business on the line because you personally know how to implement it?


That it's easy to do is all the more reason to try to establish controls to prevent abuse.

It's trivially easy to sell customer data for profit at the expense of their privacy. It's easy to store personal information without securing it. It's easy to take credit cards without using SSL certs.

It's also easy to lock someone up and deny them due process, or to enter someone's home and seize their effects without a warrant, or to set absurdly high bail.

Etc. Again, it's because this is such an easy thing to do, and because it's only going to become easier to collect, store, and analyze this data, that it makes sense to establish definitions of what constitutes fair use and what constitutes abuse, and to do so sooner rather than later.


So, what is the responsibility for capturing faces for recognition purposes in a public setting? It's already established that members of the public can legally record events like arrests, thanks to First Amendment protections.

But the above is in public. What about "my" property? Or the supermarket? We already have cameras everywhere. Just look up. So what does it matter that they run software on the back end? The hardware is already there.

Walking in gives implicit permission: "you agree to be recorded, or leave." Vandalizing cameras is criminal. Many places insist you not wear masks, or they call the cops.


The difference to me is that when your presence was merely recorded on video, it required manual human labor to search for your face in the recordings. That isn't likely to happen unless authorities are looking for you in particular for some actual reason.

Now that it's all automated, it just takes running some scripts on a computer system to locate your face from any number of sources, so people who are not under suspicion for anything will have their whereabouts identified automatically, just as if they were actual suspects.

Same story with automatic license plate readers. Manually searching for a license plate is so time-consuming that it's unlikely to get done without cause. With automatic systems, records can be kept of every car that passes through a monitored intersection, whether or not the car or driver is of actual interest.
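
The asymmetry fits in a few lines, which is the point: once the system exists, logging everyone is the default rather than the exception (the watchlist and plate stream here are invented):

    from datetime import datetime

    watchlist = {"ABC1234", "XYZ9876"}
    sighting_log = []  # every car, suspect or not

    def on_plate_read(plate, intersection):
        # The record is kept regardless of whether anyone is looking for this car.
        sighting_log.append((plate, intersection, datetime.now()))
        if plate in watchlist:
            print(f"HIT: {plate} at {intersection}")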


That particular arbitrary line in the sand seems mighty.. well.. arbitrary. The data is being collected either way, but it only becomes objectionable because a computer is involved? On what grounds?


I think it's a pretty big difference, going from labor-intensive manual inspection of data to fully automated inspection, opening up everyone within view of the camera to being automatically tracked.

It's not the same thing that's always been done, except now it's being done with a computer. It's the same thing that's always been done, except now it's being done for everyone, rather than for just a few select individuals of interest. It doesn't really matter that a computer is involved; it's the scope of the surveillance that is disconcerting. If huge numbers of humans were hired to track everyone manually, that would also be disconcerting.

Who cares? I'm not entirely sure. And I guess that's sort of the point. This level of surveillance has, in my opinion, surpassed what most of society is ready for. I don't think that most people have a good grasp on what's going on, or what impact it will / could have on their lives.

If arbitrary companies and government agencies can know my whereabouts at all times, should that have any impact on how I live my life? It's a question that society at large has never really had to answer before.


Every law and regulation is arbitrary; why do you believe this should be any different?

At some point we as a society say, "this is the line, you can go no further."

Communities create arbitrary standards for behavior all the time...


Are you seriously implying that you see no difference between "thou shalt not kill" and "thou shalt not analyze images of public places with anything more complicated than an abacus" ??


Nice strawman, but murder is not the only other law on the books.

We have regulations for how tall your grass can be, what color you can paint your home, how many dogs you are allowed to have, etc.: thousands of arbitrary rules and regulations for society.

For businesses there are even more: handicapped parking, bathrooms, and, depending on the type of business, the hours or days you are allowed to be open. Tens of thousands of rules arbitrarily defining what a business can and cannot do.

This is more akin to the regulations around medical records or financial records; both have rules about how the data can be stored and how the business can use it.


No strawman - you're the one that said "any regulation is arbitrary" after all :)


The public is better off with reasonable restrictions on the mass collection of data. It is a utilitarian solution. What grounds are there to require wearing a seatbelt when driving a car?


I agree with you that they're a private company, so they can do what they want with their business–I actually think private entities don't have enough rights on that front in many regards–but that doesn't mean it isn't a conversation worth having, or that they should be unregulated entities–there are legal limits to what stores can do in their relationships with customers, so how is this any different?

And walking in doesn't give implicit permission that I agree for them to store, analyze, sell, leak, publish or otherwise use that data however they please, in perpetuity. Are they allowed to disclose my presence to law enforcement without a warrant? Can they tell my health insurance provider that I spent an unusually long amount of time in the dessert aisle?

I'm sure a lawyer could argue (probably quite successfully) otherwise, but from a let's-address-the-problem-before-it's-fully-mature standpoint, these are the kinds of conversations we're supposed to be having right now.


> I agree with you that they're a private company, so they can do what they want with their business–I actually think private entities don't have enough rights on that front in many regards–but that doesn't mean it isn't a conversation worth having, or that they should be unregulated entities–there are legal limits to what stores can do in their relationships with customers, so how is this any different?

Well, they are people. The only distinction is they have no voting rights in the elections.

> And walking in doesn't give implicit permission that I agree for them to store, analyze, sell, leak, publish or otherwise use that data however they please, in perpetuity.

The pharmacy counter is guarded by HIPAA and the payment done with credit card or debit card is protected by agreements from PCI DSS. The last catch-all is whatever the company's privacy policy is. No privacy policy = no FTC violation of privacy policy.

Further south of where I live, there's a sex toy shop with the following: http://www.covenanteyes.com/2009/08/26/truckers-pictures-tak...

The website's down, but they are still there, taking videos and photos of all who arrive. Completely legal.

But that's why I wrote uWHo. It's not hard. And I did so to push the envelope. People have chided me, and all I can say is look here: https://maps.google.com/locationhistory/b/0

> Are they allowed to disclose my presence to law enforcement without a warrant? Can they tell my health insurance provider that I spent an unusually long amount of time in the dessert aisle?

Can you do those things?

If so, why can't they?


> a retinal model in conjunction with retinal scanning via webcam

You probably mean iris. The retina is the back of the eye, very few if any real biometric systems use this. The iris is the colored area of the eye that surrounds the pupil. It's very texture rich and less invasive to obtain.


According to the documentation, OpenCV contrib includes a retinal model.

http://docs.opencv.org/modules/contrib/doc/retina/index.html


If you actually read the page, you'll see it's something different.


<grumble>

My understanding was that it could back-calculate the retinal map.

Alas, I was indeed wrong.

For those who don't mess with OpenCV, what this does is bring the colorspace and attributes of an image to something like what the eye would see.

camera picture + retinal filter = simulation of how the eye would see the picture captured by the camera.
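
A sketch for the curious, assuming a recent opencv-contrib-python build where the module lives under cv2.bioinspired (file names are placeholders):

    import cv2

    frame = cv2.imread("camera.jpg")
    # Retina model sized to the frame (width, height): a filter, not a biometric.
    retina = cv2.bioinspired.Retina_create((frame.shape[1], frame.shape[0]))

    retina.run(frame)
    parvo = retina.getParvo()  # detail/color channel: "how the eye would see it"
    magno = retina.getMagno()  # transient/motion channel

    cv2.imwrite("as_the_eye_sees.jpg", parvo)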


Hah, I was about to get mad at myself for not realizing retina biometrics were built in to OpenCV.

As someone who has worked a bit with OpenCV to detect eye features, I can say pretty confidently that it isn't THAT easy. OpenCV can help you start off by easily detecting that there is an eye in the frame, but I think you are on your own beyond that.

My general approach was to use Haar cascades to detect that an eye was in frame. Then you have to isolate the iris and pupil. You can use Hough circles or the Daugman algorithm for segmentation. I got a reasonable result with the Daugman algorithm, but Hough circles seemed more erratic. I had a huge problem with reflections, though, which the Daugman algorithm is known to suffer from. You pretty much have to take the picture of the eye in a controlled setting where you flash a light into the subject's eye so that the reflection is a small, isolated circle. Even then I wasn't always able to get a correct result. I may not have implemented the algorithm perfectly, though.
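
For anyone curious, the Hough-circle step looks roughly like this (the parameter values are the fiddly part, and these are guesses, per the above):

    import cv2
    import numpy as np

    eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)  # cropped eye region
    eye = cv2.medianBlur(eye, 5)  # damp reflections a little

    circles = cv2.HoughCircles(
        eye, cv2.HOUGH_GRADIENT, dp=1, minDist=eye.shape[0] // 2,
        param1=100, param2=30, minRadius=10, maxRadius=80)

    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            print(f"candidate iris/pupil boundary at ({x},{y}), radius {r}")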

I never got further than that, but even if you are able to capture the eye perfectly, you then have to actually build a model of the person's features and be able to compare it.

I would be interested if anyone else had better luck than I did.


GFK_of_xmaspast is right; it appears to be an image processing technique, not a biometric one.


Actually, you can change the dimensions of your face. It is very painful, and I do not recommend it to anyone voluntarily.



