> The burden of proof should probably lie upon whatever party would initiate legal action. I am not a lawyer, and won't speculate further on how that looks.
You're proposing a law. How does it work?
Who even initiates the proceeding? For copyright this is generally the owner of the copyrighted work alleged to be infringed. For AI-generated works that isn't any specific party, so it would presumably be the government.
But how is the government, or anyone, supposed to prove this? The reason you want it to be labeled is for the cases where you can't tell. If you could tell, you wouldn't need the label, and anyone who wants to avoid labeling could get away with it precisely in the cases where it's hard to prove, which are the only cases where the label would have any value.
> Who even initiates the proceeding? For copyright this is generally the owner of the copyrighted work alleged to be infringed. For AI-generated works that isn't any specific party, so it would presumably be the government.
This is the most obvious problem, yes. Consumer protection agencies seem like the natural candidate. I have already admitted I am not a lawyer, but this really does not seem like an intractable problem to me.
> The reason you want it to be labeled is for the cases where you can't tell.
This is actually _not_ the most important use case, to me. This functionality seems most useful in the near future when we will be inundated with generative content. In that future, the ability to filter actual human content from the sea of AI blather, or to have specific spaces that are human-only, seems quite valuable.
> But how is the government, or anyone, supposed to prove this?
Consumer protection agencies have broad investigative powers. If corporations or organizations are spamming out generative content without attribution it doesn't seem particularly difficult to detect, prove, and sanction that.
This kind of regulatory regime that falls more heavily on large (and financially resourceful) actors seems far preferable to the "register and thoroughly test advanced models" (aka regulatory capture) approach that is currently being rolled out.
> This functionality seems most useful in the near future when we will be inundated with generative content. In that future, the ability to filter actual human content from the sea of AI blather, or to have specific spaces that are human-only, seems quite valuable.
But then why do you need any new laws at all? We already have laws against false advertising and breach of contract. If you want to declare that a space is exclusively human-generated content, what stops you from doing this under the existing laws?
> Consumer protection agencies have broad investigative powers. If corporations or organizations are spamming out generative content without attribution it doesn't seem particularly difficult to detect, prove, and sanction that.
Companies already do this with human foreign workers in countries with cheap labor. The domestic company would show an invoice from a foreign contractor that may even employ some number of human workers, even if the bulk of the content is machine-generated. In order to prove it, you would need some way of distinguishing machine-generated content, which, if you had it, would make the law irrelevant.
> This kind of regulatory regime that falls more heavily on large (and financially resourceful) actors seems far preferable to the "register and thoroughly test advanced models" (aka regulatory capture) approach that is currently being rolled out.
Doing nothing can be better than doing either of two things that are both worse than nothing.
> But then why do you need any new laws at all? We already have laws against false advertising and breach of contract.
My preference would be for generative content to be disclosed as such. I am aware of no law that requires this.
Why did we pass the FFDCA for disclosures of what's in our food? Because the natural path that competition would lead us down would require no such disclosure, so false advertising laws would provide no protection. We (politically) decided it was in the public interest for such things to be known.
It seems inevitable to me that without some sort of affirmative disclosure, generative AI will follow the same path. It'll just get mixed into everything we consume online, with no way for us to avoid that.
> Companies already do this with human foreign workers in countries with cheap labor. The domestic company would show an invoice from a foreign contractor that may even employ some number of human workers, even if the bulk of the content is machine-generated.
You are saying here that some companies would break the law and attempt various reputation-laundering schemes to circumvent it. That does seem likely; I am not as convinced as you that it would work well.
> Doing nothing can be better than doing either of two things that are both worse than nothing.
Agreed. However, I am not optimistic that doing nothing will be considered acceptable by the general public, especially once the effects of generative AI are felt in force.
> My preference would be for generative content to be disclosed as such. I am aware of no law that requires this.
What you asked for was a space without generative content. If you had a space where generative content is labeled but not restricted in any way (e.g. there are no tools to hide it) then it wouldn't be that. If the space itself does wish to restrict generative content then why can't you have that right now?
> Why did we pass the FFDCA for disclosures of what's in our food?
Because we know how to test food to see whether the disclosures are accurate, but those tests aren't cost-effective for most consumers, so the label provides useful information and can be meaningfully enforced.
> It seems inevitable to me that without some sort of affirmative disclosure, generative AI will follow the same path. It'll just get mixed into everything we consume online, with no way for us to avoid that.
This will happen regardless of disclosure unless it's prohibited, and even then people will just lie about it because there is an incentive to do so and it's hard to detect.
> You are saying here that some companies would break the law and attempt various reputation-laundering schemes to circumvent it. That does seem likely; I am not as convinced as you that it would work well.
It will be a technical battle between the companies that don't want it on their service and try to detect it, and the spammers who want to spam. The effectiveness of a law would be directly related to what it would take for the government to prove that someone is violating it, but what could the government use to do that at scale which the service itself can't?
> I am not optimistic that doing nothing will be considered acceptable by the general public, especially once the effects of generative AI are felt in force.
So you're proposing something which is useless but mostly harmless to satisfy demand for Something Must Be Done. That's fine, but I still wouldn't expect it to be very effective.
"Someone else will figure that out" isn't a valid response when the question is whether or not something is any good, because to know if it's any good you need to know what it actually does. Retreating into "nothing is ever perfect" is just an excuse for doing something worse instead of something better because no one can be bothered, and is how we get so many terrible laws.
You have so profoundly misinterpreted my comment that I question whether you actually read it at all.
One of the best descriptions I've seen on HN is this:
Too many technical people think of the law as executable code: if you can find a gap in it, you can get away with things on a technicality. That's not how the law works (spirit vs. letter).
In truth, lots of things in the world aren't perfectly defined and the law deals with them just fine. One such example is the reasonable person standard.
> As a legal fiction,[3] the "reasonable person" is not an average person or a typical person, leading to great difficulties in applying the concept in some criminal cases, especially in regard to the partial defence of provocation.[7] The standard also holds that each person owes a duty to behave as a reasonable person would under the same or similar circumstances.[8][9] While the specific circumstances of each case will require varying kinds of conduct and degrees of care, the reasonable person standard undergoes no variation itself.[10][11] The "reasonable person" construct can be found applied in many areas of the law. The standard performs a crucial role in determining negligence in both criminal law—that is, criminal negligence—and tort law.
> The standard is also used in contract law,[12] to determine contractual intent, or (when there is a duty of care) whether there has been a breach of the standard of care. The intent of a party can be determined by examining the understanding of a reasonable person, after consideration is given to all relevant circumstances of the case including the negotiations, any practices the parties have established between themselves, usages and any subsequent conduct of the parties.[13]
> The standard does not exist independently of other circumstances within a case that could affect an individual's judgement.
Pay close attention to this piece:
> or (when there is a duty of care) whether there has been a breach of the standard of care.
One could argue that because the standard of care can never be perfectly defined, it cannot be regulated via law. One would be wrong, just as one would be wrong to make that argument for why AI shouldn't be regulated.
> You have so profoundly misinterpreted my comment that I question whether you actually read it at all.
You are expressing a position which is both common and disingenuous.
> Too many technical people think of the law as executable code: if you can find a gap in it, you can get away with things on a technicality. That's not how the law works (spirit vs. letter).
The government passes a law that applies a different rule to cars than to trucks, and then someone has to decide whether the Chevrolet El Camino is a car or a truck. The inevitability of these distinctions is a weak excuse for being unable to answer basic questions about what you're proposing. The law is going to classify the vehicle as one thing or the other, and if someone asks you the question, you should be able to answer it just as a judge would be expected to answer it.
Doing so is a necessary part of evaluating what a law does. If it's a car, and vehicles classified as trucks have to pay a higher registration fee because they do more damage to the road, you have a way to skirt the intent of the law. If it's a truck, and vehicles classified as trucks get to meet a more lax emissions standard, or classifying a medium-sized vehicle as a truck lets a manufacturer sell more large trucks while keeping its average fuel economy below the regulatory threshold, you have a way to skirt the intent of the law.
Obviously this matters if you're trying to evaluate whether the law will be effective -- if there is an obvious means to skirt the intent of the law, it won't be. And so saying that the judge will figure it out is a fraud, because in actual fact the judge will have to do one thing or the other and what the judge does will determine whether the law is effective for a given purpose.
You can have all the "reasonable person" standards you want, but if you cannot answer what a "reasonable person" would do in a specific scenario under the law you propose, you are presumed to be punting because you know there is no "reasonable" answer.
Toll roads charge vehicles based upon the number of axles they have.
In other words, you made my point for me. The law is much better than you at doing this; it has literally been doing it for hundreds of years. It's not the impossible task you imagine it to be.
> You can have all the "reasonable person" standards you want, but if you cannot answer what a "reasonable person" would do in a specific scenario under the law you propose, you are presumed to be punting because you know there is no "reasonable" answer.
uhhh......
To quote:
> The reasonable person standard is by no means democratic in its scope; it is, contrary to popular conception, intentionally distinct from that of the "average person," who is not necessarily guaranteed to always be reasonable.
You should read up on this idea a bit before posting further; you've made assumptions that are not true.
> Toll roads charge vehicles based upon the number of axles they have.
So now you've proposed an entirely different kind of law, because considering what happens in the application of the original one revealed an issue. Maybe doing that is actually beneficial.
> The law is much better than you at doing this; it has literally been doing it for hundreds of years. It's not the impossible task you imagine it to be.
Judges are not empowered to replace vehicle registration fees or CAFE standards with toll roads, even if the original rules are problematic or fail to achieve their intended purpose. You have to go back to the legislature for that, and the legislature would have done better to choose differently to begin with, which is only possible if you think through the implications of what you're proposing. That is my point.