European Union Calls for Five Year Strict Ban on Facial Recognition Technology (techgrabyte.com)
208 points by Anon84 on Jan 23, 2020 | 120 comments



The EU hasn't "called for" anything. A discussion paper was leaked.

We ban sites that ridiculously overplay stories like this. Please submit from more reliable sources.


I love coming to these threads to watch the crowd who makes their living from invading the public's privacy attempt to rationalize their worldview, find loopholes, etc. If your job is mass surveillance, it has always been unethical and the law is catching up to you. The purpose of these kinds of laws isn't to bring your business in line - it's to put you out of business. We're coming for you.


The purpose of a law designed in the interest of individual privacy, yes, but at least in America companies get a huge amount of say in what gets passed because they vote with their profits. Most of these laws end up bad because they accommodate private profit accumulation when that profit directly harms the privacy of individuals. We continue to pretend as a society that these two desires can be reconciled.


I have no problem if a company performs facial recognition on me as long as I give them permission and it is used for specific purposes. Why should the government interfere with my ability to decide how images of my face are used?


Why should a company be allowed to perform wide-scale facial recognition and analysis, using as training data the images of people who never consented to that use, just so it can determine whether the face in front of it is the one you gave it consent to use?


Why are people uploading photos of others who never gave their consent to have their image online in the first place?


That's a good question. Maybe we should focus on prosecuting everyone who publishes photos of others without their consent? Or not. But that's a question that's completely unrelated to the question of consenting to facial recognition.


Disrespect? Ignorance? They're huge fans of surveillance capitalism? The more GDPR rules and further privacy regulations such as this help prevent this practice, the better.

Consent should not be assumed and should be revocable if ever given. Yes means yes -- everything else means no.


Because companies are lying to you about how your data is being used.


So have the government explain the risks of sharing personal data with corporations and then punish companies after the fact rather than preemptively.

I don’t understand why I shouldn’t be able to make an adult decision about taking risks with my own personal data.


Just like with a lot of things we have rules against, it's not just your personal data. Encoded in photos of you is data about your family, too. And in public, your surroundings, the people near you, your location history, etc. are all within the reach of these photo systems. There is more data in your photo than just you - this is a societal safety issue.

Allowing a free-for-all in any new tech that has clearly defined privacy issues at scale can cause a lot of runaway problems. A society that tries its best to reduce the harm caused by bad actors wielding wild new technology with huge privacy implications is a reasonable society to me.

Automated detection of your face matters to the rest of society, too. It's not just your privacy that is affected, it is other people's too.


A society that issues blanket bans on new technologies instead of specific problem behaviors is one that ends up falling behind.


Then make lying about it illegal, not the technology itself.


How can a company know that you have given them consent before they perform the facial recognition?


Do the facial recognition, then throw out that data if it turns out the face in question isn't in the whitelist of consenting users.

Computers aren't like humans. They have the ability to permanently and irreversibly forget certain information, and they always do exactly what they're programmed to. A local, on-device facial recognition system programmed to immediately discard information on unrecognized faces has exactly zero privacy impact. (See also: smart speakers listening for a hotword.)
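A minimal sketch of what such a recognize-or-forget loop could look like, assuming the open-source face_recognition library (the enrollment photos and the handle_known_face handler are hypothetical):

    import face_recognition

    # Encodings of users who have explicitly opted in. The enrollment
    # photos "alice.jpg" and "bob.jpg" are purely illustrative.
    consenting = [
        face_recognition.face_encodings(
            face_recognition.load_image_file(path)
        )[0]
        for path in ["alice.jpg", "bob.jpg"]
    ]

    def process_frame(frame):
        """Match each face in a camera frame against the consent whitelist."""
        for encoding in face_recognition.face_encodings(frame):
            if any(face_recognition.compare_faces(consenting, encoding)):
                handle_known_face(encoding)  # hypothetical downstream handler
            # else: the encoding simply goes out of scope -- nothing is
            # written to disk or sent anywhere.

The privacy claim, of course, rests entirely on those last two comment lines holding in practice.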


> A local, on-device facial recognition system programmed to immediately discard information on unrecognized faces has exactly zero privacy impact. (See also: smart speakers listening for a hotword.)

This just isn't true, though. Hotword detection fails frequently for me with Siri, Google Home, and Alexa. I haven't tried the others. They all pick up unrelated sounds and record them as if the hotword had been spoken when it definitely was not - not even a similar-sounding set of words.

There are absolutely privacy implications for technology that does automated scanning and recognition of its environment. They are buggy, as all programs are, and they are hackable, as all programs are.

Being potentially buggy, potentially hackable, and constantly alert and running algorithms on their environment is absolutely a valid and real privacy concern.


False positives.

(See also: smart speakers listening for a hotword.)


Which is, ironically, a problem that is solvable by improving the accuracy of voice/facial recognition technology.


Ironically how? Facial recognition isn't banned. Research isn't banned.

Another problem is that the risk of abuse is too great, even if the accuracy were flawless.


I find it ironic because that's one particular privacy problem where better facial recognition technology is the solution, not the cause.

If the accuracy were flawless then there would be zero false positives, and the aforementioned issue wouldn't exist at all.


You can't seriously believe that better facial recognition technology will result in better privacy - in any imaginable future. We live in a universe where Facebook and Google exist and people in adtech can justify their actions. Yet perfect facial recognition will lead to better privacy?

Oh, and it is hardly the only issue.


You're taking my statement out of context. Better facial recognition would fix the one particular privacy issue that you brought up. (False positives.) I've said nothing about adtech, Google, Facebook, or any of the other topics you're just now bringing into this discussion.


Fair enough; I just don't find solving one edge case while creating countless major new issues particularly helpful.


Is the mechanism of consent relevant here?


How can it not be? We are talking about public spaces here.


You sign a contract, TOS, etc.?


In a public space?


If you can't get the permission, then you don't have it - so yes, they can't simply use it on unsuspecting people in a public space.


Who do you work for?


[flagged]


Actually, these ethics, in a democracy, are determined by the majority of people via their elected representatives.


Not necessarily. Not going into political arguments, but just because the majority agrees with something, does not make it ethical.

Edit: I do agree with the sentiment of your post though. That is, to counter the idea that “ethics” depends on whatever view of the world an individual has.


Exactly. The majority of people in many US states believe gay marriage to be unethical.

I mean, not long ago, that same majority wouldn't have voted for African Americans to go to the same schools as white children.

And that's all within the current generation.


Seems like the same thing is happening with facial recognition.


Furthermore, in democracies, people are regularly convinced to support all kinds of things they don't agree with or even understand.


By that logic, collecting data in America is ethical, because it's not illegal in America.


That's not really how ethics works.

Hanging homosexuals is ethically wrong even if it enjoys popular support. Slavery is ethically wrong regardless of what people vote for.

How we judge past decisions is more complex than the ethics alone, involving knowledge, context, norms, etc.

But the ethics can be pretty clear. ("Can be" because they can also be unclear, but they can certainly be clear.)


Ethics is a field with a long, rich history of study and many-threaded tapestries of thought. It's certainly not "whatever you want it to be", although there are schools that do what you suggest - picking the end point they wish to arrive at and trying to reverse engineer the ethics that would lead to it.


These sorts of decisions should not be made by large centralized agencies like the EU. I would rather see them left to the individual countries, or better yet to local bodies.

Maybe London wants to take a different approach than Finland. That should be fine. Let each country figure out what works for it.


Yeah as an American I don't get the EU. Sovereignty is worth more than convenience in trade.


As an American you don't see the benefit of a federal government of states for interstate commerce?


The federal government is the sovereign nation of the United States. We don't answer to a higher bureaucracy than that and we have direct influence over leadership in the Federal government.

The states themselves also retain substantial powers. They can block a Constitutional amendment or call a Constitutional convention, they can appeal to courts with 250 years of common law precedent backing them up, etc. There is much more clarity in the American system.

The EU doesn't work the same way.


And being a small country in a gigantic bloc, in a world of superpower realpolitik, is worth more than sovereignty.


I mean, many nations disagree with you. South Korea and Japan haven't joined forces against China. Norway and Switzerland remain out of the EU. And plenty of other counter-examples exist. The country of Singapore is not much bigger than a city. At least you have control over your own destiny as a sovereign nation and you have more agility with your economy.


The states under the Articles of Confederation felt that way, until they pursued federalization under the Constitution. Germany, Italy, even Britain used to be made of smaller parts that sought union. Countries occasionally choose to federate into a more powerful government. How is the EU any different, other than that the process is more drawn out?


I am not an American. Also, the EU is anything but a convenience in trade.

It had been a peaceful night in Europe, where all the women are strong, the men are good looking, and the children are above average. Martin woke up in his EU-regulated bed and looked through his EU-regulated window. This night, Martin had slept like a baby thanks to the 109 EU regulations concerning pillows, the 5 EU regulations concerning pillow cases, and the 50 EU laws regulating duvets and sheets. Martin went to brush his teeth with his toothbrush, regulated by 31 EU laws.

After that, our EU-regulated man went to his EU-regulated kitchen to grab a Class 1 EU-regulated apple. For the benefit of society, the EU had defined what a "class 1" fruit actually is: to class a "Red Variety" apple as "class 1", 50% of its surface must be red; to class a "Mixed red coloring variety" apple as "class 1", 33% of its surface must be red; and so it goes for the 3 quality classes and 287 individually named apple varieties. Martin ate fruits and vegetables because the government told him it was the right thing to do. He switched the TV on and listened attentively to the "eat five fruits and vegetables" government ad. Martin's apple was not very tasty, but at least it was controlled by a European central authority. "Isn't that great," thought Martin, "the EU takes care of our food. We now can eat only nice and safe products!" Martin pays 40% more for his food because of the EU's highly protectionist agricultural policies, but that is the price one has to pay for civilization.

---


Copypasta is not an argument.


We existed fine up until now without pervasive facial recognition, but I find it difficult to conceive of a future where this technology won't participate in some part of civic life. China is plowing ahead with smart cities and is rapidly developing surveillance-state standards. The US is not far behind. Europe is just ceding 5 years for illiberal forces to set standards, normalize behaviour, and cultivate acceptance when it should be providing an alternate model - which can't be nothing, because these businesses will go on with or without her.


Here is what's happening in the UK.

> https://www.youtube.com/watch?v=0oJqJkfTdAg

All I can hope for is that some activist group takes the police to court and the legislative branch reacts by imposing rules on spying on citizens.

Some of the things I can think of off the top of my head:

1) Citizens can legally opt out by putting on face masks. Especially when it's cold.

2) Video / Images are stored outside of government bodies, akin to a black box. Warrants must be required to review footage.

3) Video / Images / Data are deleted after 1 year.

4) No data on citizens' facial features, body structure, or gait is transferred into a national database.

Honestly though, given where the UK is going, I firmly believe that in 20 years all citizens' physical metadata will be tracked, stored in a black box somewhere, and then later leaked online.

1984 isn't just a book. It's a handbook by all accounts.


By many metrics the UK is and has been the most surveilled country in the world for decades.


the metrics that include the privately owned CCTV cameras used by shopkeepers in the same category as those operated by governments?

the UK has issues with creeping authoritarianism in a number of areas, but millions of shops having crappy 2FPS black-and-white CCTV isn't a particular concern of mine


The whole argument that they're private cameras is rather pedantic and ultimately pointless.

It doesn't matter who owns what as long as the government can request access to the data.


the concern amongst non-hysterical people is mass surveillance by government, not ad-hoc requests by the police for VHS tapes of people stealing packets of bacon

in this situation: the fact they're not controlled by, or accessible to, the state without effort is neither pedantic nor irrelevant, sorry.


After the amount of leaks and revelations in the last decade you're only embarrassing yourself if you claim that any people concerned about privacy are hysterical. Whoever is not worried is simply clueless about the power of data.

The fact that there is some extra effort required to get to the tapes does make it a bit inconvenient, but that won't do you any good if you're caught by those surveillance cameras in the wrong place at the wrong time. The investigators will almost always make that effort, since it's part of the job.


I get what you're saying, but it's not just the fact that the whole country is covered in CCTV (private or otherwise - and frankly I find private cameras more dystopian, because you can't audit them). There has always been a sentiment in the UK government, similar to America's, that justifies an all-seeing entity (GCHQ) whose motives are implicitly good and which is thus beyond having to answer to the public. The Official Secrets Act itself makes the UK even worse than America in this way.


> 1) Citizens can legally opt out by putting on face masks.

You don't have (yet) some beautiful law like we have in France?

https://beta.legifrance.gouv.fr/loda/texte_lc/JORFTEXT000022...

> Nul ne peut, dans l'espace public, porter une tenue destinée à dissimuler son visage.

Nobody may, in public space, wear an outfit intended to hide their face.

Clear and simple, I guess. That was directed towards radical Muslims but the definition encompasses everyone. And last year we got this extra one:

https://www.legifrance.gouv.fr/affichCodeArticle.do?idArticl...

It is about hiding all (or part) of your face, within (or near...) a demonstration in which troubles arose (or might have arisen...). The first law was an infraction; this one is a misdemeanour with a much harsher sentence.


We have that law. That’s why balaclavas are hard to find in the UK these days. Nobody can legally use them in public.


The algorithms are public, the data is public... even today, you can often feed a portrait image to Yandex's image search (the best public one out there) and get back pictures of the same exact person. It seems to have some special case for faces.

And it's by no means certain that you and I can't do better. I think the reason Yandex's image search is better than Google's is that Yandex entered the game later and thus could incorporate better methods from the start. There have been important advances in extreme classification even in 2019.

I think the best we can hope for is that this power of identification isn't exclusive to governments and police, but can be used by us as well. So that there aren't more Bob Lamberts than necessary.
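To make concrete how little magic is involved: once faces are reduced to embedding vectors, "who is this?" is just a nearest-neighbour lookup. A toy sketch, where the random "database" stands in for embeddings a real system would extract with a CNN:

    import numpy as np

    # Pretend index: one unit-normalized 128-d embedding per known face.
    db = np.random.rand(10_000, 128).astype(np.float32)
    db /= np.linalg.norm(db, axis=1, keepdims=True)

    def most_similar(query, top_k=5):
        """Return indices of the top_k most cosine-similar stored faces."""
        q = query / np.linalg.norm(query)
        scores = db @ q  # cosine similarity via dot product
        return np.argsort(scores)[::-1][:top_k]

Real deployments swap the brute-force dot product for an approximate nearest-neighbour index, but the principle is the same.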


Here's an idea... Take down the cameras!


They'll use drones then. Well, they already do, in addition to fixed and mobile ground cameras.


Fresh in today: https://www.bbc.co.uk/news/uk-51237665

"Met Police to deploy facial recognition cameras"


For 1: body shape/bone estimations and gait can't be easily masked.


I can't read the link as it's down, but here's an article that does a good job of explaining the leaked EU report [0].

> As for facial recognition, the Commission document highlights provisions from the EU’s General Data Protection Regulation, which give citizens “the right not to be subject of a decision based solely on automated processing, including profiling.”

It's more nuanced than "a ban on facial recognition". The leaked white paper is also available [1]

[0] https://www.euractiv.com/section/digital/news/leak-commissio...

[1] https://www.euractiv.com/wp-content/uploads/sites/2/2020/01/...


So let's say you're the police and you want to keep an eye out for bad guys, and you have a bunch of CCTVs. According to the rule you quoted, it would be sufficient to surface the face match to someone, who would manually check that it is the right person before going ahead and arresting them? That person could even be the police themselves? I bet people think this is banning a lot more behavior than it is.


The rule I quoted is a summary of the position and not to be taken as a legal framework on decision making. The rules are mulled over in the white paper, and it would take some time to digest it before commenting on how the rule could potentially work in the future. The current process is for the white paper to be published, followed by a call for opinions and comments.


https://www.schneier.com/essays/archives/2020/01/were_bannin...

These bans have good intentions, but they won't solve the actual root of the issues, and they do more harm than good.


> Regulating this system means addressing all three steps of the process. A ban on facial recognition won't make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.

It sounds like the ban needs to be broader, and extend to ban any technology used to automatically identify individuals without their consent using surveillance sensors.


Yes - license plate readers, wifi/bluetooth scanners/loggers, TPMS loggers, facial recognition, etc. all fall in the same category.
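As a sense of how low the bar is for the non-facial variants: a few lines of Python with scapy are enough to log every phone that walks past a Wi-Fi radio. A rough sketch - the "mon0" monitor-mode interface name is illustrative, and modern phones blunt this somewhat with MAC randomization:

    from datetime import datetime
    from scapy.all import sniff
    from scapy.layers.dot11 import Dot11ProbeReq

    seen = {}  # MAC address -> when the device was last observed nearby

    def log_probe(pkt):
        """Record the source MAC of every Wi-Fi probe request overheard."""
        if pkt.haslayer(Dot11ProbeReq) and pkt.addr2:
            seen[pkt.addr2] = datetime.now()

    # Phones broadcast probe requests constantly; no pairing, no consent.
    sniff(iface="mon0", prn=log_probe, store=False)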


Yeah. Facial recognition by itself is fine, but let them be in hot water if they use it for identifying others.

It's similar to how GDPR and similar laws operate. Collecting the data is fine, but you'll get in trouble if you process & distribute that data without clearly notifying your users.


Who else are they going to identify, themselves?


I don't know what you're trying to say. We're talking about corporate/state surveillance and privacy violations here.


Maybe they're talking about something like Face ID on an iPhone? You can use it to identify yourself.


Schneier doesn't seem to make the point that they do harm in that article. He also specifically closes with a warning against focusing too narrowly on facial recognition, which is of course correct. The specific critique about smartphone tracking as a fallback wouldn't be valid in the geopolitical context of this thread, which is specifically Europe. In the US, of course, that's still the case.


The obvious harm in outright bans (I know this specific case only bans it in public areas) is that they stifle innovation where the technologies are used to empower society.

The reference to smartphone tracking before the one I believe you're referencing in that article details the myriad other ways to track people: financial transactions, license plates, MAC addresses, heartbeat, gait, etc. You can substitute those in for the fallbacks.


> The obvious harm in outright bans (I know this specific case only bans it in public areas) is that they stifle innovation where the technologies are used to empower society.

Fair enough, but I've just asked somebody this in another thread here: What innovation? This seems completely hypothetical; facial recognition is neither hard nor complex. Neither is image clustering or the other backend stuff required to use this for malicious purposes. If innovation means use cases like Clearview, I don't really care. Surely the privacy of millions should be valued higher than a few startups with smart photo album technology or whatever, which will likely fail anyway for completely non-technological reasons. The EU seems to be, quite surprisingly, relatively specific with this call for a ban.

> The reference to smartphone tracking before the one I believe you're referencing in that article details the myriad other ways to track people: financial transactions, license plates, MAC addresses, heartbeat, gait, etc.

I don't see how that changes my argument; in the EU, none of those are treated like they might be in the US. Especially after the introduction of GDPR, that's already mostly illegal tracking, no matter the specific technological implementation, and should satisfy the critique Schneier brings in his article.


> What innovation?

This is a question that possibly can't be answered if it's illegal to experiment with it from the get-go. Off the top of my head, it could be used for thought-provoking art installations; perhaps one that uses facial recognition to derive the viewer's emotions and lets the procedural art change accordingly.

> that's already mostly illegal tracking

Yes, GDPR's a great step in the right direction. It's stopping a lot of the corporate surveillance. Right now the EU needs to make sure its governments are subjected to the same level of restrictions, which they currently are not when the ends are "national security" or similar reasons.


The art use case is a nice one I hadn't considered. But that sounds like something that's easy to cover, either in the definition of public spaces or via a flat-out exception like there will no doubt be for research.

I fully concur w.r.t. applying the same restrictions to European governments, i.e. this sentence from the article: "These include the German government is planning to roll out facial recognition technology [...] after a successful trial in Berlin." The "successful" trial published an anonymized evaluation; the results were horrid. It's quite amazing how split the political field must be here. In Germany that's also down to law enforcement exemptions from quite a few privacy laws - no "national security" pretense needed.


I don't see how it isn't a fantastic start.

It signals intent.


Romania is part of the EU. Officially there is no facial recognition system in use, but there are a few that are not official and those will not be affected by the ban because "they don't exist".

A relative was the lead policeman in a case where person A accused person B of a very serious crime. Person B appeared on a camera in the area, and police identified him instantly using the national database of ID cards (it contains photos of every person over the age of 14) and arrested the guy. Luckily for him, a different camera covered him for the whole time during which he was accused of the crime, but that was discovered only after he spent a day under arrest. The person who made the false accusation simply walked.

There is a database of pictures of everyone who gets an ID card (which is mandatory; not having one with you at all times gets you a fine). Another one for passports, though those are optional. Another one for driver's licenses (having a driver's license with you does not save you from a fine if you don't also have the ID). These are databases everyone knows about and police use every day; no authorization is required.


I don't understand how you can ban a machine from doing something that a human can do.

Consider the following scenarios:

- I can pay for a person to watch archival video and take notes on paper as to who comes in and out of frame.

- I can have software that helps a person crop faces of people coming in and the person can tag and catalog the people coming in and out of frame.

- I can have software that identifies human-looking things and things that look like faces, and a person can tag those

- I can have software that identifies human-looking things and recommends a similar face. A human confirms.

- I can have software do everything.

At which point does it become facial recognition? The end results are the same regardless of which step you ban. So is any ban just meant to make the cost artificially high? I think you could outsource it anyway via Mechanical Turk or something similar if there is a real value to facial recognition.

I don't think banning technology is the answer.


The difference is scale. With ML face recognition we can identify people in real time; without some safeguards we very rapidly reach a point where we can track anyone, anywhere. Imagine if every camera in the city of London could identify you - forget metadata, we would be able to actively track individuals taking public transit across a city and identify every person they interact with.


Wasn't that the point?

You can't seriously expect to build a system like Big Brother and be so naive as to think this wouldn't be the result sooner or later.


Yes, I agree. I just think you can build a system that keeps a token person in the loop and still drives the cost down regardless. It's similar to how regulators tried to ban "computerized trading" and traders literally built a machine hand that presses the enter key.

Making the cost artificially high by banning a particular type of technology is not addressing the real problem of civil liberties.


"Making the cost artificially high by banning a particular type of technology is not addressing the real problem of civil liberties."

I think it helps.


> I don't understand how you can ban a machine from doing something that a human can do.

We do it all the time.

- Phone autodialers are legal, but particular uses are fairly heavily regulated (in theory, at least).

- Heavy machinery use is regulated in a variety of ways.

- LEOs in the US recently were told GPS bugs (which "just" do what humans can do - taking notes on where a human goes) require warrants.

- Explosives dig much faster than humans. These are heavily regulated.

- Most radios sold in the US will refuse to tune certain bands for legal reasons, even though it is trivial to modify some of them to do so.

Etc.

As for your slippery slope argument, it simply doesn't matter. The point is to reduce the harm done by a given tech, not achieve some sort of abstract purity of thought.


None of these steps involves looking up a face in a seventeen-million-page binder of faces with names, and associating names, government IDs, traffic violations, credit scores, etc. with the person.

I don't think people are worried about computer vision identifying Human vs Not Human as much as they are about the (not humanly possible) pinpointing and following of every move.


> I don't understand how you can ban a machine from doing something that a human can do.

Quantity has a quality all its own. Something that is acceptable at a small scale may have unacceptable consequences when done at much larger scales.


Same reason you might want to place more restrictions on machine guns than on muskets.

The reason one would ban a machine from doing what a human can do is that it's stupendously faster, and enables all sorts of dystopian effects that manual face-tagging doesn't.


The difference is that machine-driven facial recognition can be collated with other data to create unwanted results for society.

Rando Startup can start scanning faces in public. Then it starts matching those faces with actual identities. Then it adds in the cell phone location data to determine which ones visit synagogues regularly. So it adds a little notation to the data table about each of these people. Perhaps a yellow star will do nicely.

If you know anything about recent European history, you will understand why this is a bad thing.


The point is to prevent a reality where some people can track the whereabouts of all (or nearly all) people living in a city in an efficient way. You could of course try automating this using humans, but the cost will indeed be much higher, the efficiency much lower, and even this process itself might be banned if it were to lead to the same result.


That's actually a good extreme showing the difference. Tracking literally everyone all the time would require a huge number of police if done manually. And with that many police, you'd need police to track the police because they are people too. It gets ridiculous fast. With tech, it actually becomes feasible.


Part of the problem with facial recognition is it is potentially inaccurate, especially if you are not a white male.

So you run the risk of having large numbers of innocent people being incorrectly tagged as persona non grata - be that with the police, in-store detectives, hotels, potential employers etc.

Having a computer say "this person is bad/unwelcome/a shoplifter/bad credit/a sex offender/etc" is powerful, and difficult for laymen to counteract. Computer says no - sorry, it's policy, nothing I can do.

And you can do this at huge scale for pennies and in the blink of an eye with a computer. You can't do that with a human manually/semi-manually doing it.


white male white male white male


> Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

https://www.washingtonpost.com/technology/2019/12/19/federal...


It doesn't matter. "White male" is a slur now.


Well, making the cost artificially high can be a point in itself. Sometimes the concern about a technology is the ubiquity and scaling.

But aside from that, to address your questions:

1. If there's a human in the loop, the human can be accountable. Explainability and accountability of algorithmic decision-making is an emerging concern, especially to the extent that algorithms may encode bias in their training data.

2. Real-time systems. Being real-time qualitatively changes the impact this can have on your day-to-day life. Taking the human out of the loop makes many applications feasible that otherwise wouldn't be. Like doors.


Other commenters have made the point that scale / quantity matters. Another example: Formula 1 limits how much computation teams may use for aerodynamic design. Humans can design aerodynamics. Machines can do it better, faster, more exhaustively. The latter is quantitatively limited.


Scale does matter. Consider your gut-feel attitude toward a thief who has stolen $1 vs. one who has stolen millions.


> with exceptions for research and security projects

Isn't the whole point of facial recognition security projects?


Well no, tracking for targeted sales is a big use case I guess.


It would make more sense to ban tracking for targeted sales, no? Banning the technology won't stop people using other methods like Bluetooth and wifi fingerprinting.


That's where EU GDPR comes into the frame with informed consent for tracking.


Yeah, but the consequence of targeted sales as a product is poorly aimed marketing. The potential consequences for security are far more severe.


The consequence of targeted sales is that the data ends up in the hands of companies which make it available for sale to anybody who is interested - including scammers, stalkers, jealous ex-partners, enemies in general who want to find the weakness in someone's life to discredit them, spies, etc.

That's pretty bad IMO, regardless of what you think about the use case for law enforcement.


A prudent approach; it gives time to define more finely how, when, and for what it is used.

After all, it seems a fair few laws are a loophole away from enabling more crime than the law prevents. So a step back like this tips the balance towards the consumer/people over government/business.

But like many things, there will always be exceptions, and those who will take exception to them - which is fair, as that is how democracy works: equal voice. It has often taught us that while today there may be one small child at the back questioning things, there may be more tomorrow and the next day. All questions need answers, and this is a good start in enabling that. Will the public engage and have their say heard, and will the EU pull for public say? What balance will play out? We will know over the years, and it will be great to see how far this goes 5 years from now, once the ban has ended or been extended.


Really, the "public places" part of the definition betrays a fundamental informational illiteracy, in that it doesn't acknowledge the separation of the data from where it is gathered.

Just a few silly demonstrative edge cases:

- If I run facial recognition software on a public camera feed on a computer in private, would it be legal?

- If, in Europe, I were in public running facial recognition software on my laptop, processing, say, my own personal family photo album, would I violate the law?

- Would running it on news footage of a public street be legal? What about an interview where you would get thrown out if you tried to enter?


I don't see how it does, or how the edge cases are relevant. It is there to prevent anyone from thinking public spaces are fair game.

It's also a very different thing to process an image than to permanently mount a camera in a public space - something that already requires a license in many if not most jurisdictions, and getting one is hardly trivial.


As a private citizen, would I be allowed to set up a camera on my front porch, point it at the sidewalk, and store all the data it takes in, including counts of how many times each "unique" person walks by, etc.? Maybe even link it to my Facebook, or a public directory, and link all their public info to a profile and keep all the data on my servers? Does this change if I am a business?


I don’t know about the rest of the EU but in Belgium you cannot point it at the sidewalk. You can only film your own property, not a public space.


How does this law allow for taking pictures in or of public spaces? Are you just not allowed to use facial recognition tech on personal photos of public places? How does Facebook get around this? Or do they just ignore the law?



I wonder if in the future we will look back and see intelligence agencies in the same light as religious institutions from history, both using their "all-knowing" information to influence power.


[flagged]


They're considering banning the usage of facial recognition in public places, not banning it in general.

I'm fairly sure what's prompting this is the kind of surveillance state that China has built in e.g. Xinjiang [1], not stuff like face unlock on your iPhone.

[1] https://www.nytimes.com/2019/04/14/technology/china-surveill...


Technological progress? Maybe. Social regression and a police state? Sure, 100%. Don't count on me to be part of this. I'll give the finger to every company that invites me to be part of this wonderful endeavour.


I'm really curious as to what technological progress this argument is supposed to refer to. You could have built a great system for surveilling the masses using the cluster and image-processing tech we had 15 years ago; what's changed is mostly throwing more hardware and money at the problem. Research seems to be specifically excluded. The EU job market won't go under because a few legitimate use cases might have to adapt, while blatant privacy violations can be put on hold until more complex legislation is worked out.


Detection of who's at your door/doorbell was one good case I'd heard of that people were concerned would be impacted by this.


Yes, surely if we can't track everyone's movements everywhere at all times, our civilization will crumble into dust.


GDPR is great for coverups of official corruption because corrupt officials can reliably destroy the evidence in the cloud if they get tipped off that an investigation is starting.

Bans on facial recognition mean prosecutors of corrupt officials have to rely more on eyewitness testimony, and eyewitnesses can be intimidated and have "accidents".


Any disruptive technology should be preemptively locked before public use. Times of "move fast and break things" are gone for good.


How is it legally possible to ban computation over a set of data? Better yet, how do you prevent indie developers from doing the same computation on their personal machines?

If you ban the tech, then you prevent research on it as well. If you ban the use, you cannot catch violators.


Anything's legally possible if you're the one writing the law.

As for whether it's enforceable, that's another question entirely. Historically, bans on the creation, use, and/or distribution of software haven't done so well in that department. (See export ciphers, piracy, DRM.)





