
According to the EU proposal, this is what they mean by AI:

(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;

(c) Statistical approaches, Bayesian estimation, search and optimization methods.

So rule-based systems (b) are considered AI for the sake of regulation.



So every program ever made?


That's the idea, yes.

It is not really about the techniques used but about what you are using these programs for. If law enforcement used, say, the average brightness of a picture of someone's face as a basis for suspicion, it could hardly be called AI, but it is nevertheless a highly concerning usage of technology and shouldn't be given a pass just because it is not advanced enough.
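
To make that concrete: such a rule could be a couple of lines of code. The sketch below is purely hypothetical (the numpy image representation and the threshold are made up for illustration), yet it is exactly the kind of "decision" that deserves scrutiny regardless of whether anyone calls it AI.

    # Purely hypothetical sketch: a one-line "rule" that no one would call AI,
    # yet would still be a deeply concerning basis for suspicion.
    import numpy as np

    def flag_for_suspicion(face_image: np.ndarray, threshold: float = 100.0) -> bool:
        # face_image is assumed to be a grayscale photo as a 2D uint8 array;
        # the threshold is arbitrary. A single arithmetic comparison is enough
        # to drive a consequential decision about a person.
        return float(face_image.mean()) < threshold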


I'm sorry but I cannot see how to view this charitably. To me it clearly seems to be a way to enact further unjustified regulation on something that would otherwise be hard to create public support for, by using big new scary words like AI, even though it has massive overreach into more traditional programs.


If you tell people the computer program that profiled them as a probable scumbag and denied them credit or a job was using methods from the 1980s instead of the newest neural nets, that won't change how they feel about it. Objecting to regulation of such systems because you think the term "AI" should be reserved for the newest and trendiest systems is very "HN" and misses the forest for the trees. The EU is rightly concerned about the ways that computers empowered to make important decisions might impact people's lives and perpetrate injustice; that's what is important here, not some whiny semantic nitpicks about what to call those programs.


If you tell people it was a program using methods from the 1980s instead of the newest neural nets, it likely would change how they feel about the newest neural nets. Clarity is the only way to avoid cutting off our nose to spite our face.


But don't you think that there should be regulation saying that delegating decisions to a technology system makes you fully responsible, and that the growing capability of these systems makes it more important to clarify that any "I trusted the machine by mistake" argument is null and void?

For me, it's not important if the legal definition requires this term. It's not like automation and computer programs are being banned.

It's the decision part that justifies this legal definition IMO.

It's like the difference between a human and an automated car running over people. Quite literally, even.


> But don't you think that there should be regulation saying that delegating decisions to a technology system

What is a "decision"? My computer is making billions of decisions per second.

If nothing else, I could see this crippling FOSS. No one is going to want to be "responsible" to some EU bureaucrat pursuing a personal political agenda, certainly not for work they've done for free.


Yeah I wanted to expand but got tired:

Of course any computation can be considered a decision in some way, but that conflates a narrow, IT-specific meaning of the word with the legal and philosophical sense, which is the meaning I intended.

This reminds me of the ancient catastrophic Therac-25 software bug, and other such cases.

Maybe this case is a good example for thinking about what part of the responsibility is on the side of the operator, apart from the obvious failure of the implementer.

For more modern examples involving actual ML, look at Meta, TikTok and their recommendation systems: where do we draw the line, and which excuses are allowed, when the outcome is obviously negative and the algorithm merely claims to fulfill a goal?

It doesn't matter if it's a rule-based system without "intelligence" or ML.

What matters is the responsibility of the human operator.

And making assumptions about correctness.

Humans make egregious errors as well, but the kinds of errors AI causes are a significant concern where current legislation is insufficient.

It's one thing to have a bug in your airplane controller code or whatever.

It's another to knowingly accept errors, whether unpredictable or even malicious, without anyone bearing proper responsibility.


> It's another to knowingly accept errors, whether unpredictable or even malicious, without anyone bearing proper responsibility.

Is this law making the users of software responsible (e.g. law enforcement, government departments)? It seems to me to make authors of software responsible, absolving the users. Not the other way around.

They keep the law secret, of course, so anyone can claim what they want, but the article talks directly about the accountability of OpenAI. It seems to focus on rules on authors of AI software, presumably to mostly absolve governments and users of that software. "Rules around generative AI", "transparency requirements for any developer of a large language model" ... nothing that would make governments responsible for abusing AI software. Which is strange, because that's the concern the last paragraph of the article focuses on.

I must say ... this "complete ban" voted by the EU parliament last spring doesn't seem to have stopped governments from using live facial recognition [1].

[1] https://www.euronews.com/next/2023/02/21/new-french-facial-r...


> It seems to me to make authors of software responsible, absolving the users.

Only the authors of foundational models.

> They keep the law secret, of course

You mean this secret law: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52... that people have actually read https://softwarecrisis.dev/letters/the-truth-about-the-eu-ac... ?


> Developers (not deployers) of foundation models need to register their models, with documentation, prior to making it available on the market or as a service.

No, thanks. Utterly insane, we are sliding backwards into the "code is a munitions export" territory that I so hoped we'd escaped after the 90s.

> If you take a foundation model, fine-tune it for a specialised purpose, and deploy it as a part of your software, it won’t count as a foundation model, and you’ll probably be fine, as long as the original provider of the foundation model was compliant.

This interpretation would render the whole exercise pointless; there is quite a blurry line between "fine-tuning" and "training".


> > Developers (not deployers) of foundation models need to register their models, with documentation, prior to making it available on the market or as a service.

> No, thanks. Utterly insane, we are sliding backwards into the "code is a munitions export" territory that I so hoped we'd escaped after the 90s.

I'm not sure that's what the legislation is about, though. I hoped it would be more about legal responsibility for the results of operating AI services, since those, I suppose, pose difficult legal challenges around accountability.

I admit that I have not studied the law in detail, and in case it wasn't clear, I am Not A Lawyer.

If the law is instead about regulating the release of model weights, I would understand that, and it would of course disappoint me.


> If the law is instead about regulating the release of model weights, I would understand that, and it would of course disappoint me.

Of course it's not about that. It's pure unadulterated FUD. The law is about many things:

https://softwarecrisis.dev/letters/the-truth-about-the-eu-ac...


> Utterly insane, we are sliding backwards into the "code is a munitions export"

Of course not. This goes hand in hand with such things as "being accountable". You can't spit out a black box where no one knows what it does or how it was trained? Too bad for you.

Perhaps this will lead to fewer things like this: https://news.ycombinator.com/item?id=38595751


> You can't spit out a black box where no one knows what it does or how it was trained?

What if I want to? Who on earth has the ethical authority to claim the right to control my ability to release a set of weights? Only what is done with those weights should be legislated, and extremely conservatively.

Maybe a lot of software developers got into this career for the money, or because they like solving problems. But for me, politics are inseparable from software engineering. Politics are why I devoted myself to the craft. This is the exact kind of situation in which my knowledge and skill become tools of protest.


I don't think this law is meant to stop these things at all. It's meant to make sure that the companies and governments who do this can use the models as an excuse, go free, and have someone else to blame if they get caught.


> Who on earth has the ethical authority to claim the right to control my ability

Do you also complain about so many other things that "unethically curb your abilities"?

> Only what is done with those weights should be legislated, and extremely conservatively.

Ah yes, let's regulate this black box with no insight into what it does, and only guess at its possible outcomes. What could possibly go wrong?

Oh, we know what can go wrong, because we've had multiple issues with algorithms going wrong.


> Do you also complain about so many other things that "unethically curb your abilities"?

You curiously omitted the last part of the sentence: "...to release a set of weights". Please don't pretend that I was speaking about anything else, and don't overgeneralize my statements; that becomes a straw man argument. Believe it or not, some laws are unethical. I am happy to provide examples.

It's a case-by-case evaluation of the overall impact on human rights for all parties involved.

The ideal scenario is one where all rights are preserved in good faith, and publishing/owning models is treated no differently than any other software project, while the actual use of such software continues to be subject to existing laws. In that case, we can strengthen consumer rights without weakening developer rights.

> Ah yes, let's regulate this black box with no insight into what it does, and only guess at its possible outcomes. What could possibly go wrong?

I specifically said we should not be legislating weights, so I'm confused about which point you are trying to make. Weights are the black box. Company policy, employee behavior, and business logic are not, and are accessible for scrutiny by the courts if needed. So no, let's not regulate the existence of software, which sets an incredibly dark precedent for digital sovereignty.


> while the actual use of such software continues to be subject to existing laws. In that case, we can strengthen consumer rights without weakening developer rights.

Emphasis mine

--- start quote ---

ANNEX IV

TECHNICAL DOCUMENTATION referred to in Article 11(1)

...

2. A detailed description of the elements of the AI system and of the process for its development, including:

...

where relevant, the data requirements in terms of datasheets describing the training methodologies and techniques and the training data sets used, including information about the provenance of those data sets, their scope and main characteristics; how the data was obtained and selected; labelling procedures (e.g. for supervised learning), data cleaning methodologies (e.g. outliers detection);

--- end quote ---

So let's see Article 11(1)

--- start quote ---

The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to date.

--- end quote ---
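
In practice, the data documentation the annex asks for boils down to something like the following sketch; the field names are invented for illustration, not taken from the regulation.

    # Hypothetical sketch of the kind of training-data documentation Annex IV
    # describes; field names are invented for illustration, not taken from the law.
    from dataclasses import dataclass, field

    @dataclass
    class DatasetDatasheet:
        name: str
        provenance: str            # where the data came from, licensing terms
        scope: str                 # what the data covers, and what it does not
        collection_method: str     # how the data was obtained and selected
        labelling_procedure: str   # e.g. annotation guidelines for supervised learning
        cleaning_methodology: str  # e.g. outlier detection, deduplication
        known_limitations: list[str] = field(default_factory=list)

    sheet = DatasetDatasheet(
        name="loan-applications-2015-2020",
        provenance="internal CRM export, collected under terms of service v3",
        scope="EU retail customers only; no minors",
        collection_method="random sample of completed applications",
        labelling_procedure="repayment outcome observed after 24 months",
        cleaning_methodology="duplicates and records with missing income removed",
        known_limitations=["under-represents applicants with no credit history"],
    )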

Conclusion: all you're doing is spreading FUD.


I've made no claims as to the nature of the law, I've only asked questions about particulars and responded to the answers. If anyone is spreading FUD, it isn't me.


Nobody cares about your CMS, online shop, code collaboration tool, etc.

If your software however makes life-altering decisions for its users - such as targeting people for investigation, deciding on asylum requests, etc. - then you have to be responsible for the decisions your software makes. You can't hide behind "I don't know how the AI works, but it decided that person X is likely a criminal" - nope, you need to be able to explain the reasoning process behind this, because the system is your responsibility.


> If nothing else, I could see this crippling FOSS.

If anything, this should strengthen FOSS, because one of the stipulations is "document your foundational models and training sets".


Creating regulations for hypothetical scenarios and possibilities you can invent in your head is silly.

Give me real examples of IRL harm. And most importantly give me real examples of how exactly state intervention directly solves those problems.

Otherwise this is just a philosophical debate mixed with prepper type fear of what could happen.


> Give me real examples of IRL harm.

China's social scoring?

Racial profiling in government services? https://www.amnesty.org/en/latest/news/2021/10/xenophobic-ma...

Rejecting qualified applicants? https://www.eeoc.gov/newsroom/eeoc-sues-itutorgroup-age-disc...

Racial bias skipping patients in healthcare? https://www.scientificamerican.com/article/racial-bias-found...

Recruiting ignoring women? https://www.reuters.com/article/us-amazon-com-jobs-automatio...

Wrongly issued debts? https://en.wikipedia.org/wiki/Robodebt_scheme

and so on and so forth

> And most importantly give me real examples of how exactly state intervention directly solves those problems.

The same way they solve about a billion other problems daily, where you have no issues with government intervention.

> Otherwise this is just a philosophical debate mixed with prepper type fear of what could happen.

It's only philosophical if you willingly ignore the world around you


The anxiety about regulating absolutely everything in the EU is starting to be harmful.

We see mountains and mountains of regulations, some of which are really harmful (especially in the primary sector) and which I think are not well-intentioned.

There is a full-blown control-everything agenda that I do not find healthy for the average citizen.


I gave an example right at the end of my comment (though I may have edited it in later, so no offense intended): self-driving cars.

Another commenter in this thread gave more examples, which further underline what I originally meant.

Physical harm means misclassified images/persons/posts/lives/situations/... with the classification taken as gospel, in self-proclaimed good faith. Content moderation, credit scoring, policing, the whole "new" generative space: a lot of dangerous possibilities have opened up.

All of them share the commonly accepted notion of "an AI making a decision" (in everyday language).

This is another level of reliance on computer systems and software in general, even if we got there gradually.

I am not denying that complicated liability questions exist about that too.


I assume one of the core ideas is to protect individuals from decisions made by "black box" algorithms (complex or not), e.g. those used by banks and other companies, without any direct human involvement.

Basically banning companies from just saying "Computer says no".
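
In code terms, the difference might look something like this rough sketch (the names and threshold are made up; the point is only that a named person, not the model, signs off on the adverse outcome).

    # Hypothetical sketch of "no 'computer says no'": the model only produces a
    # recommendation, and a named human must confirm any adverse decision.
    from dataclasses import dataclass

    @dataclass
    class FinalDecision:
        approved: bool
        reason: str
        responsible_person: str  # a person, never "the algorithm"

    def model_recommendation(score: float, threshold: float = 0.5) -> bool:
        # The model's output is advisory only.
        return score >= threshold

    def finalise(score: float, reviewer: str, reviewer_confirms_rejection: bool) -> FinalDecision:
        if model_recommendation(score):
            return FinalDecision(True, f"score {score:.2f} above threshold", reviewer)
        # A rejection only stands if the named reviewer looked at the case and
        # confirmed it; otherwise the model's "no" is overridden.
        if reviewer_confirms_rejection:
            return FinalDecision(False, "rejection confirmed on manual review", reviewer)
        return FinalDecision(True, "model rejection overridden on manual review", reviewer)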


> That's the idea, yes.

Hmmm... if that's the case, I expect to see a lot of software taken off the EU market, whether by simply removing it from the app stores for the Euro Zone or by other means.

This is gonna be the new GDPR.


> This is gonna be the new GDPR

We should certainly hope so given how much that forced a long overdue reckoning for the data collection and sales markets.


Well, no. Outfits like Facebook and Amazon can afford to hire attorneys to keep them in compliance with GDPR (or, more accurately, skirt around it).

Small website operators can't do that. Accordingly, many have banned EU IP addresses altogether.

The basic phenomenon is called "regulatory capture" -- the large player counterintuitively does not object to onerous regulations, because their smaller competitors can't afford to comply with them.

Happens all the time, in many different sectors of the economy.

With regard to FOSS, I certainly wouldn't work on anything for free that was likely to get me enmeshed in the legal system of another country.

If this legislation is as described above, someone's probably working on a FOSS license right now that allows the software to be used everywhere except the EU.


> Well, no. Outfits like Facebook and Amazon can afford to hire attorneys to keep them in compliance with GDPR (or, more accurately, skirt around it).

> Small website operators can't do that. Accordingly, many have banned EU IP addresses altogether.

It's much easier for small sites: not collecting data is free and if they aren't trying to resell it in ways which they don't want their customers to know about, it's easy enough to have a simple privacy policy. Those obtrusive banners are a political choice trying to make it look like GDPR compliance is onerous, but if you're not in the ad-tech business that's really not so hard since all of your cookies are essential.


> can afford to hire attorneys to keep them in compliance with GDPR

Compliance with GDPR is trivial for the absolute vast majority of businesses. Here's how GitHub does it: https://github.blog/2020-12-17-no-cookie-for-you/

--- start quote ---

At GitHub, we want to protect developer privacy, and we find cookie banners quite irritating, so we decided to look for a solution. After a brief search, we found one: just don’t use any non-essential cookies. Pretty simple, really.

So, we have removed all non-essential cookies from GitHub, and visiting our website does not send any information to third-party analytics services. (And of course GitHub still does not use any cookies to display ads, or track you across other sites.)

--- end quote ---
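
For what it's worth, the "essential cookies only" approach is also tiny in code. A minimal sketch, assuming a Flask app; route names and values are illustrative.

    # Minimal sketch of the "essential cookies only" approach, assuming Flask.
    # One signed session cookie needed for login, no analytics beacons, no
    # ad-tech or cross-site trackers, hence nothing that needs a consent banner.
    from flask import Flask, session

    app = Flask(__name__)
    app.secret_key = "change-me"  # signs the session cookie

    @app.route("/login", methods=["POST"])
    def login():
        # Strictly necessary for the service the user asked for (staying
        # logged in), which is the "essential" category.
        session["user_id"] = 42  # placeholder; a real app authenticates first
        return "logged in"

    @app.route("/")
    def index():
        return "hello"  # no third-party scripts or tracking cookies here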


> Compliance with GDPR is trivial for the absolute vast majority of businesses.

Nonsense.

> Here's how GitHub does it

That's nice. GitHub is a billion dollar Microsoft subsidiary with a gigantic legal team on speed dial.

What does "non-essential" mean here? Personally, I'm not willing to gamble that my interpretation of "non-essential" matches the interpretation of some random EU bureaucrat.


> Nonsense

It's the absolute truth.

> That's nice. GitHub is a billion dollar Microsoft subsidiary with a gigantic legal team on speed dial.

And instead of doing third-party tracking, selling your data to the highest bidder, and pestering you with miles-long cookie popups listing hundreds of "partners", they do none of that.

So, how is this impossible for any other business that doesn't have those lawyers?

> What does "non-essential" mean here?

The law has been around for 7 years now. If you still ask this question and assume that you need to spend billions on lawyers to do the easy sensible thing, it only means that you don't care about answers.


Complying with GDPR shouldn't be that hard at all for most businesses if they introduce some basic procedures for handling/storing user data and, most importantly, don't transfer/sell it to third parties.


GDPR did really good things for the market as a whole. The (inevitably US) websites that made a big stink either didn't take the time to understand what they needed to do (perhaps an hour of work for an average website by a competent engineer), or are so incredibly allergic to the implication that they must act ethically that they decided to say "fuck this". In either case, the world is better off if they don't exist.

If you build a system and cannot live with the fact that a human must make the final call and be responsible for its decisions, you should not be building a system that judges people.


> The (inevitably US) websites that made a big stink either didn't take the time to understand what they needed to do (perhaps an hour of work for an average website by a competent engineer), or are so incredibly allergic to the implication that they must act ethically

Or simply decided that complying with EU security theatre administered by an unaccountable bureaucracy wasn't worth an hour of their time, and banned the Euros outright.


Yes.


Aren't all current machine learning approaches statistical approaches using Bayesian estimation?


Not really. They're often statistically based, but they don't need to use Bayes' theorem or be built on a Bayes network. They are driven mainly by expectation maximization and stochastic gradient descent. Autodifferentiation back-propagates parameter updates through a neural network model, but the network doesn't necessarily represent a Bayes network.
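
A toy sketch of the distinction: fitting a linear model by plain stochastic gradient descent involves no prior, no posterior and no Bayes' theorem anywhere, just iterative minimisation of a loss (the data below is synthetic).

    # Toy sketch: linear regression by stochastic gradient descent on squared
    # error. No prior, no posterior, no Bayes' theorem, just loss minimisation.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    true_w = np.array([1.5, -2.0, 0.5])
    y = X @ true_w + rng.normal(scale=0.1, size=200)

    w = np.zeros(3)
    lr = 0.01
    for epoch in range(50):
        for i in rng.permutation(len(X)):
            err = X[i] @ w - y[i]      # prediction error on one sample
            w -= lr * err * X[i]       # gradient step on the squared error

    print(w)  # close to true_w, recovered without any Bayesian machinery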


If that were the case, it wouldn't be listed as just one of the covered concepts in the proposed law. We should not assume world-shaking incompetence.


What do you mean by “current”?



