Either directly (outlawing murder) or indirectly (providing for roads and bridges). And well (libraries) or poorly (modern copyright law).
But fundamentally, law benefits people.
Most modern economic perversions are a consequence of taking laws that benefit people (e.g. free speech) and overzealously applying them to non-person entities (e.g. corporations).
So "why [is it] ok for [a] human to learn from a prior art, but not for a LLM"?
Because a human has fundamental output limitations (parallel capacity, time, lifespan) and a machine does not.
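To make that asymmetry concrete, here's a rough back-of-envelope comparison (a sketch in Python; every number in it is an illustrative assumption, not a measurement):

    # All figures below are made-up but plausible assumptions.
    human_pieces_per_year = 50                      # a prolific illustrator
    career_years = 40
    human_lifetime_output = human_pieces_per_year * career_years   # 2,000 works

    images_per_gpu_minute = 2                       # assumed diffusion-model throughput
    gpus = 1_000                                    # one modest cluster
    machine_yearly_output = images_per_gpu_minute * 60 * 24 * 365 * gpus

    # ~1 billion images per year vs ~2,000 works per human career:
    print(machine_yearly_output / human_lifetime_output)   # ratio on the order of 5e5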
Existing laws aren't the way they are because they encode universal truths -- they're instead a consensus reached among multiple competing interests, intrinsically rooted in the bounds of current reality.
"This is a fair copyright system" isn't constant with respect to varying supply and demand. It's linked directly to bounds on those quantities.
E.g. music distribution rights, which were upended when home network bandwidth suddenly increased enough to transfer large quantities of music files.
Or, to put it another, shorter way: the current system and source-blind model output fucks over artists.
> Because a human has fundamental output limitations (parallel capacity, time, lifespan) and a machine does not.
Industrialization as we know it would never have happened if we had artificially limited progress just so that people could keep their jobs. I guess you could make the same kind of argument for the copyists when printing became widespread, for horses before the automobile, or for telephone operators before switchboards were automated. Guess what became of them. Art made by humans can still exist, although its output will be marginal compared to AI-generated art.
LLMs are not humans but are used by humans. In the end the beneficiary is still a human.
I'm arguing that we need new laws, different from the current ones, which are predicated on today's supply limitations and scarcity.
And that those new laws should redirect some profits from models to those whose work they were trained on during the temporary dislocation period.
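For concreteness, here is a minimal sketch of what "redirect some profits" could mean, assuming a flat levy split pro rata by training-data contribution (the rate, the weights, and the names are all hypothetical; measuring a work's actual contribution to a model is the hard, unsolved part):

    # Hypothetical scheme: set aside a fixed share of model revenue and
    # split it pro rata by each creator's (assumed known) contribution.
    LEVY_RATE = 0.10   # placeholder rate for the dislocation period

    def payouts(model_revenue, contributions):
        # contributions: {creator: weight}, e.g. works or bytes in the training set
        pool = model_revenue * LEVY_RATE
        total = sum(contributions.values())
        return {creator: pool * w / total for creator, w in contributions.items()}

    print(payouts(1_000_000, {"artist_a": 120, "artist_b": 30, "news_org": 850}))
    # -> {'artist_a': 12000.0, 'artist_b': 3000.0, 'news_org': 85000.0}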
And separately... that lobotomizing our human artistic talent pool is going to have the same effect that replacing our human journalism talent pool did. But that's a different topic.
For the AI/Robot tax, the pessimistic view is that the legal state of the world is such that any such tax can and will be evaded. Now not only do LLMs put humans out of a job because an LLM or a Stable Diffusion model mimics their work, but the financial gains have been hidden away in tax havens through tax-evasion schemes designed by AIs. And even if, through some counter-AIs, we manage to funnel the financial gains back to the people, what incentive remains for capital owners to invest and keep investing in cutting-edge AI, if the profits are now too meagre to justify the investment?
> Or, to put it another, shorter way: the current system and source-blind model output fucks over artists.
And artists are humans. And LLMs are not.