Artificial Intelligence Act: MEPs adopt law (europa.eu)
62 points by ericb on March 13, 2024 | 59 comments



How do we prevent humans from wholly relying on/trusting AI in systems that the world's stability relies on?

That to me is the biggest 'unacceptable risk'. A human putting a system we don't fully understand in charge of a critical process.

The article says all high-risk systems will be 'assessed' before being put on the market. There's a limit to how well you can understand the side effects before deployment.


> How do we prevent humans from wholly relying on/trusting AI in systems that the world's stability relies on?

By being transparent and disclosing how the AI was trained and what content was generated by AI. That removes the magic: once people understand how simple these things are and how easy it is to tune the results to make the AI say whatever you want, they will stop trusting it quickly.

And that is part of what these regulations are about. Transparency instead of marketing buzzwords.


I like to be very blunt with people who don't know anything about this stuff by just saying "AI doesn't exist." There is nothing intelligent about "AI," and people need to know that.


Ever since ChatGPT became mainstream, I've talked about it to my non-tech friends as “simulated intelligence” instead of “artificial intelligence” to emphasize that it is not intelligence, it is just simulating intelligence, and sometimes the simulation is realistic and sometimes it's far off.


This seems like a very slippery concept. You could say the same thing about many human beings. That they simply regurgitate opinions to fit into a certain role and aren't actually utilizing intelligence when making decisions.

It's almost as if there is some sort of Platonic ideal of problem solving that we call intelligence, something you can strive for but also someone can poke holes in.

Why not simply use an IQ test? The various multimodal systems coming down the pipe will either improve on taking IQ tests, or stall. If they improve, then they gain more intelligence. If they stall, then they aren't truly intelligent. Just glorified goal to action mappers.


> You could say the same thing about many human beings.

Humans are, by the mere definition of the word, “intelligent”. AI is doing a simulation of that intelligence, the same way a weather model is a simulation of the weather, whereas the actual weather isn't a simulation of anything. That doesn't make weather models useless, far from it, but if you trust them blindly when hiking in the mountains, you may end up in a bad place.

> Why not simply use an IQ test?

IQ tests are very limited measurements of what intelligence actually is. Using a thermometer isn't a good way to assess the trustworthiness of a weather model.


>Humans are, by the mere definition of the word, “intelligent”.

Humans are absolutely not intelligent by definition, that's why words like "unintelligent", "dumb" and "stupid" exist.


You're using adjectives that are relative between humans.

The dumbest human is still far more intelligent than any other animal on earth, by construction of our own definition of intelligence.


There have been many definitions of intelligence, some of which were constructed such that only specific races[0] were classified as intelligent, while others (the slaves) were classified as unintelligent.

Of the tests/definitions that separate humans from non-human animals, the three I grew up with were language, tool use, and the mirror test. AI can definitely do the first two — LLMs know far more languages, natural and programming, than I do — and I don't know either way about the third, for reasons related to why I'm also dubious about how effective that test really is.

IMO those tests are not great, and we don't have a meaningful way to put humans and dolphins on the same scale for the same reason you can't meaningfully put a 747 and a Harley Davidson on the same scale for vehicle goodness.

[0] never mind that "race" isn't a real thing


I agree with all you're saying.

My point is just that we explicitly define intelligence as things humans excel at and other beings don't (and you are right to point out that in the − not so distant − past, the distinction wasn't so much humans vs other beings but white aristocracy vs other humans).

And we've been doing the same with AI for a while: beating a chess grand master would have been considered an unambiguous proof of intelligence 50 years ago, now I don't think anyone would argue Stockfish is actually intelligent.


> That they simply regurgitate opinions to fit into a certain role and aren't actually utilizing intelligence when making decisions.

Today I realised there's a superficial similarity between mode collapse and confirmation bias. Anyone know if it's just superficial, or if there's a deeper connection?

> Why not simply use an IQ test? The various multimodal systems coming down the pipe will either improve on taking IQ tests, or stall. If they improve, then they gain more intelligence. If they stall, then they aren't truly intelligent. Just glorified goal to action mappers.

It's unclear how useful IQ tests are; there are entirely functional human communities whose members on average perform as badly on some IQ tests as people who need a full-time caretaker because they lack the mental capacity to get dressed or prepare breakfast without assistance. Change the test, and those communities suddenly look smart while it's us who look dumb.

The tests have been improved since then, but we've also got problems in the opposite direction — all of the informal measures of intelligence I had growing up are skills where existing AI is now somewhere between a good student[0] (law, medicine, advanced maths, programming) and wildly superhuman (chess skill, languages spoken, arithmetic[1]).

[0] emphasis on "student": you wouldn't want to be treated or represented by a fresh graduate if you could afford better, would you? Nevertheless, if someone told you they had a law degree and a medical degree, this would be considered a sign they were very smart.

[1] when computers were new, they were called "electronic brains"… or rather, when digital computers were new, as the name "computer" was taken from a job that used to be performed by humans.


The point about unintelligent humans is not an argument against stating that there is no intelligence in "artificial intelligence".


I on the other hand like to retort with two quotes:

"The question of whether a computer can think is no more interesting than whether a submarine can swim." - Edsger Dijkstra

"Deep Blue was intelligent the way your programmable alarm clock is intelligent. Not that losing to a $10 million alarm clock made me feel any better." ― Garry Kasparov

I don't care if a statistical model feels anything when analysing pictures of weird lumps to diagnose cancer, I care if it's accurate.

I don't care if a statistical model has hopes and dreams when generating the text of a computer program one textual token at a time, I care if this is risking my future employment.

I don't care if a statistical model has created the same abstraction of the economy that I have when giving me financial advice, I care that it's got a high rate of return at low risk.

I don't care if a statistical model wants to win at chess, or at Go, or at Diplomacy, or at poker, or at paperclip maximisation…

I will care if someone figures out how to do brain uploads, but that's a question for the future.


They're not going to disclose the data because it would be self incriminating.


As long as it's properly tested on out-of-training data with sufficient statistics, I'd welcome AI decision-making over human decision-making any day. When you have humans in charge you know for sure that you have a substantially biased decision maker with completely unknown error levels.
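
A minimal sketch of what "out-of-training testing" can look like in practice, assuming scikit-learn and a toy dataset (the dataset, model, and split size are illustrative, not anything specified in this thread):

    # Hold out data the model never saw during training and measure error on
    # it, rather than trusting in-sample performance.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)

    # Keep 30% of the data strictly out of training.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0
    )

    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

    # "Sufficient statistics" loosely means the held-out set is large enough
    # that the measured accuracy is a meaningful estimate rather than noise.
    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))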


You prefer your decision-making authority to be a corporate one?


Critical decision making systems are present across industry and also government today. I'm not interested in discussing the balance of industry vs government control of our lives. Suffice to say, if a decision is being made that affects me I'd like it to be as accurate and bias free as possible, regardless of the organization making it.


Think about what you are saying here:

> I'm not interested in discussing the balance of industry vs government control of our lives.

With you, some level of corporate or government control is inevitable - it is assumed in your answer. This control is (immorally) inflicted on everyone.

I want no extra governmental or corporate strictures. Where's my option? There is none. That's democracy! With laws served up to you by unrelated bureaucrats.


* Unelected, not unrelated!


What choice is there?


I'm saying, if you start inviting greater governmental control into your life, you don't need to be a genius to work out what happens - you will be governed more and you will lose personal freedom.


Today we trust humans to manage critical processes.

In the future we will need to trust humans to not run those processes with unreliable tools.

This isn't just about AI. How do we trust the humans who manage critical processes to follow good security practices (keeping systems patched, etc)?


Do you trust Boeing management or safety right now? Memes about MBAs aside, they're not using AI, yet to most people they're still making bad decisions. E.g. this problem already exists. It could be exacerbated by people willing to let AI take the wheel.


What critical processes are you thinking about here?


> Unacceptable risk AI systems are systems considered a threat to people and will be banned. They include:

* Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children

* Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics

* Biometric identification and categorisation of people

* Real-time and remote biometric identification systems, such as facial recognition

That seems... reasonable? I know regulation is a trigger word for some folks around here, but I think preventing a Chinese-style social credit system seems like a positive thing?


From an admittedly brief look (while interesting, it has no direct immediate relevance to me), I can't tell if this applies universally regardless of scale, or if there is any sort of gradient. Does anyone know? Some of it seems reasonable period, at any scale, but some of it would be worrisome or strike me as unreasonable/negative if it applied to individuals as opposed to larger networks where emergent effects happen. To take a personal example, I self-host Blue Iris and am using some AI recognition in my home security system, which I also use for wildlife monitoring. It's really awesome vs the old plain motion detection or even motion zones, and there is zero exposure to the internet, nor even any view of public property at all (it's at the end of a 1/3 mile driveway with woods). It is useful to be able to classify known (me, family, regular neighbors) vs unknown people as well. I think the societal harm potential of an isolated system like that vs the individual benefit is low, and thus I'd lean towards skepticism if that too fell under their banner of "Biometric identification and categorisation of people" or "Real-time and remote biometric identification systems, such as facial recognition".

But of course in a public setting, and/or networked with lots of others with enough scale to start effectively following people through space and time rather than just capturing one private slice, the metrics all change. So as I watch various AI efforts I'm always thinking about how they address that (or not). I don't think "profit" is even the deciding thing here; something can be non-commercial and still be a problem at enough scale.


Social scoring is an interesting topic. E.g. why is it not banned if you don't use AI? I'd guess that most social scoring systems today are not actually AI based. Perhaps they have some components that come from AI, but it's easy enough to make one with simpler mechanisms.

Also, I assume insurance risk scoring is not banned, even though that is already almost a social scoring system. E.g. you live in a dangerous poor neighbourhood, higher insurance! You are a given age, higher insurance!


How do you find a meaningful definition of “simpler” mechanisms?

How do you differentiate ML from AI? (Many view ML as a subtype of AI.)


These are excellent questions. I was using AI in the seemingly typical way people use it today, to refer to large neural network systems.

But yes, part of the issue around these laws, and indeed the whole discussion, is a lack of nuance about what exactly people even mean when they say AI. If I were to switch to my computer science context, then yes, I view ML as a subset of AI, and even view logic systems and rules systems as a subset of AI too. I doubt that the average person is thinking of PCA or even word embeddings when they say "AI" today, and that's a huge problem.

Many of the risks these AI regulation laws are worried about apply equally well to systems simpler than LLMs, and even to basic statistics and data aggregation. In my opinion, risk mitigation laws should not be specific to AI, but be problem-focused instead.

Similarly, "AI training bans" for say web content would also be far improved with more nuance. Many laypeople argue that banning AI training doesn't ban web indexing. But based on my professional training I would not personally implement even a basic tf-idf system on any data that has "AI training banned" as tf-idf is a basic AI system according to the strict definition.

We'd do far better being more explicit, and banning perhaps "generative AI systems" or "AI systems that are capable of generating substantial subsets of the content." It wouldn't be perfect but it would be a hell of a lot better than the trajectory we're on today.


I appreciate your comments.

I agree that the mainstream understanding is too often an ignorant jumbled mess. But this is to be expected. I'm reading and ingesting as much as I can on AI safety from many angles. At this point, I don't have a "take" on the status of the legal clarity here.

Happy to continue the conversation as well.


It all comes down to the details. A meme-sharing site that uses a computerized recommendation engine to recommend memes to users might be considered "cognitive behavioral manipulation".


Yeah it will be an EU style credit score.


I don't know what I think about this; probably it's not great. I'm afraid this will hinder smaller developers and enable larger corporations to get ahead just because of the extra legalese.

Also, why is some stuff unacceptable but acceptable for the government? That seems a bit like censorship to me. The EU is really becoming a large organization that just does random shit that people don't really ask for.

My government still publishes my personal data online for anyone to see, no one seems to care about that. Rules for thee but not for me.


I get the good intentions, but I don't like that the restrictions may be too broad and may hinder useful innovation.

For example: I once got trapped against the door in a building during a fire alert. The door was badge controlled and several panicked people were pressing me against it. I would have loved a surveillance system that could detect scared people and decide to unlock the doors automatically in an emergency.

We should punish bad usages, not ban broad categories just because politicians lack creativity.


You don't necessarily need AI to create a door that opens during a fire alert. You could simply make it so that the door is no longer badge controlled if a fire alert goes off.
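
A rough sketch of that non-AI approach (the signal names are hypothetical, and a real door controller would be a hardware relay rather than Python), where the fire alarm simply overrides the badge check:

    def should_unlock(fire_alarm_active: bool, badge_is_valid: bool) -> bool:
        """Fail-safe door logic: a fire alarm overrides badge control entirely."""
        if fire_alarm_active:
            # During an alarm the door always opens, no AI needed.
            return True
        return badge_is_valid

    # Example: during a fire alarm, even an invalid badge gets you out.
    assert should_unlock(fire_alarm_active=True, badge_is_valid=False)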

Sure, it's possible someone might exploit this by creating a fake fire alert to open the door, but they could also trick the AI you suggested by using a fake panicked expression, maybe get some friends in on the prank.

Kind of a segue into why I think AI hype and crypto hype are uncannily similar. You could argue that blockchains have a lot of potential uses, it's just that we already have some kind of legacy technology that works fine and is usually much cheaper. In much the same way, I'd say 90% of people are using LLMs as search engines, albeit search engines that are more expensive and return strange results.

Anyway, point being, 99% of the applications one could imagine getting sniped by this law can probably be implemented using alternative, non-AI technologies.


>> Designing the model to prevent it from generating illegal content

Impossible.

>> That is why it requires that national authorities provide companies with a testing environment that simulates conditions close to the real world.

Who defines "close to real world"?


according to your quote, national authorities.


Interested to see what comes of this... "Publishing summaries of copyrighted data used for training"


Everybody who has released a decent model has used a shadow library dump for training. Are they going to admit that they illegally downloaded more than 20 million books?


That is what half of the 7 trillion dollars are for...


I heard a really interesting take somewhere recently (I believe it was on The Verge's Decoder podcast, which is excellent): a lot of the people in these organizations think this issue is a money problem and not a "fuck you, fuck off forever, keep your fucking hands off my data" problem, which could actually derail the entire genAI industry as they run into more legal hurdles.


There's zero reason to found an AI business in the EU when the same opportunity exists in the US. How long until the likes of Mistral escape?


Yes there is. I've lived in both the EU and the US for decades. The American lifestyle sucks. Don't underestimate that.


Didn't the Mistral founders work in the US and move back precisely to found a company in France?


The EU is much cheaper, which is a good reason; you can still sell in the USA even if you develop the AI in Europe. DeepMind still operates from Europe.


People are currently lobbying in the US to set up much worse legislation.


This really seems to just be an extension of GDPR.

Mostly just don't use AI to discriminate or manipulate people.

Quite light touch, and not at all as bad as I was expecting.


We need to act on AI now. We need to limit its ubiquity; we cannot allow it to take ANYONE's job. We are heading towards absolute doom if we allow this. Humanity will be doomed.


I hate to break it to you but AI has been ubiquitous and taking people's jobs for decades now. [0] is an interesting article on the subject. The cat's out of the bag, and humanity isn't doomed yet (at least not due to AI). I'm not convinced.

[0]: https://sitn.hms.harvard.edu/flash/2017/history-artificial-i...


Technophobia is a constant surprise to me, especially here. But then, I'm weirdly far in the direction of embracing change — it took me until my 30s to learn about the Chesterton's Fence version of conservatism, and through that example that there was any form of conservatism which had merit.

Looking it up, I'm surprised how common technophobia is, 85-90%[0]; I should be more mindful of this.

I was going to say something about Marx welcoming this, but this time I found the issues of industrialisation and who benefits from the investments went further back than I previously thought: https://en.wikipedia.org/wiki/Protection_of_Stocking_Frames,...

[0] https://web.archive.org/web/20080511165100/http://www.learni...


The linked article is old and out of date, and the act has not gone into effect yet (it has legality checks and other processes to go through first). The actual adoption notice is https://www.europarl.europa.eu/news/en/press-room/20240308IP...

Edit: The link has since been updated :)


Proposed: The laws were drafted in the age of narrow AI and have little relevance in the age of general AI (LLMs, etc).


I think these parts are relevant; it will be very nice to get this for LLMs and image generators:

> Generative AI, like ChatGPT, will not be classified as high-risk, but will have to comply with transparency requirements and EU copyright law:

> Disclosing that the content was generated by AI

> Designing the model to prevent it from generating illegal content

> Publishing summaries of copyrighted data used for training

This means that now, if an article was generated by an LLM, they are legally required to disclose that.


Those last two are just going to make LLMs harder to use and act dumber. Annoying, but not the end of the world. I have no idea how disclosure can possibly be enforced, though. There's just no realistic way to distinguish a reasonably good generated image from a Photoshop job.


Critiques:

1. "AI generated content": Is it generated by LLMs if LLMs were used in any way? e.g., getting ideas? If you use a single word? Sentence? Paragraph? If a person has edited things down?

2. "Prevent it from generating illegal content": It is illegal to libel. An autocorrect can generate libel.

3. "summaries of copyrighted data": and if it used GPT4 as the evaluator?


> Disclosing that the content was generated by AI

Hmm, hadn't thought of that angle before. So, like, if one uses a copilot to generate code, does it now have to be watermarked somehow?


Censorship.


What. AI systems and LLMs do not have the same rights as humans do. In Europe, corporations are thankfully not considered people like in the USA.


Premature legislation misses the target. GDPR targeted personal data collection, a relatively anodyne adverse effect of social media, but missed addiction, which is the real damaging externality. I wonder what they'll miss this time.



