> Do you also complain about so many other things that "unethically curb your abilities"?
You curiously omitted the last part of the sentence: "...to release a set of weights". Please don't pretend that I was speaking about anything else, and don't overgeneralize my statements; that becomes a straw man argument. Believe it or not, some laws are unethical. I am happy to provide examples.
It has to be evaluated case by case, weighing the overall impact on human rights for all parties involved.
The ideal scenario is one where all rights are preserved under good faith, and publishing/owning models is treated no differently than any other software project, while the actual use of such software continues to be subject to existing laws. In that case, we can strengthen consumer rights without weakening developer rights.
> Ah yes, let's regulate this black box with no insight into what it does, and only guess at its possible outcome. Whatever can go wrong?
I specifically said we should not be legislating weights, so I'm confused about which point you are trying to make. Weights are the black box. Company policy, employee behavior, and business logic are not, and they are accessible for scrutiny by the courts if needed. So no, let's not regulate the mere existence of software; that sets an incredibly dark precedent for digital sovereignty.
> while the actual use of such software continues to be subject to existing laws. In this case, can strengthen consumer rights without weakening developer rights.
Emphasis mine
--- start quote ---
ANNEX IV
TECHNICAL DOCUMENTATION referred to in Article 11(1)
...
2. A detailed description of the elements of the AI system and of the process for its development, including:
...
where relevant, the data requirements in terms of datasheets describing the training methodologies and techniques and the training data sets used, including information about the provenance of those data sets, their scope and main characteristics; how the data was obtained and selected; labelling procedures (e.g. for supervised learning), data cleaning methodologies (e.g. outliers detection);
--- end quote ---
So let's see Article 11(1)
--- start quote ---
The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to date.
--- end quote ---
I've made no claims about the nature of the law; I've only asked questions about particulars and responded to the answers. If anyone is spreading FUD, it isn't me.
Do you also complain about so many other things that "unethically curb your abilities"?
> Only what is done with those weights should be legislated, and extremely conservatively.
Ah yes, let's regulate this black box with no insight into what it does, and only guess at its possible outcome. Whatever can go wrong?
Oh, we know what can go wrong, because we've already had multiple cases of algorithms going wrong.