It's the other way around. Arguing from similarity to humans is a counter-argument to the idea that a process similar enough to a human practitioner[0] should be covered by IP laws as if it were a regular piece of software - IP laws that are mostly archaic and already stretched to near-breaking point to cover computers in the first place. It's not about whether or not humans have inherent rights. It's about logical consistency.
This is perhaps a "failing"[1] typical of STEM/tech people: expecting laws to be consistent. Consistent in a way similar to mathematics, in that for any ruling you can trace the arguments back to underlying interpretations of laws, trace those down to written rules, trace those to some more fundamental rules, and eventually to some sort of intuitions about morality and fairness. And at least to me, the argument that generative network training ought to be treated like compiling and obfuscating a regular dataset of copyrighted works seems justified merely by "it's different because computers".
This is not to say that current models are learning and creating art and text the same way humans are, but rather that the process just seems to be close enough. And the point of mentioning logical consistency is this: I may agree with copyrighting the living hell out of LLMs for pragmatic reasons[2] - because it shakes the boat too much, has the potential to destroy the livelihoods of too many people in too short a time, while further centralizing power, etc. Those are all valid arguments. "Because it's a computer", to me, isn't. Not when the process and effects are already eerily similar to how humans work. Not when such an argument would apply just as much in a hypothetical future where we develop sentient AIs[3].
--
[0] - In very limited scope, but also the very scope subject to legal issues.
[1] - I don't really consider it a failing. There's both beauty and efficiency in things making some kind of sense.
[2] - Intellectual property laws themselves are mostly pragmatic in this sense anyway.
[3] - Even in this future, we'll have to face pragmatic issues. I read an interesting take on this long ago, I think in one of Eliezer's essays: how do you handle democracy, fair resource allocation, and basic ethics in the presence of AI people that exist in silico? Such AIs will likely be able to reproduce much faster than humans - bringing new individuals into existence at the speed of factories pumping out GPUs. Equality and democracy are all fine until suddenly there are 10 trillion AI people and only 10 billion human people, and everyone gets an equal vote. How do we deal with that?
> the process (of human vs AI learning) just seems to be close enough
It only seems that way because the people who understand it best also stand to profit from it, and are choosing to misrepresent it so that people who don't understand it falsely believe it is at all close.
That so many people on here are pushing the false narrative that the two are at all similar is sad, since this is ostensibly a forum for people with technical knowledge. Nothing about machine learning is similar to human learning, by any available evidence. If I am wrong, show me that evidence.
"Data go in, something happens, metadata come out" is the closest you can get to approximating the two, and too many supposedly tech-savvy people seem content to treat that as "close enough".
> "Because it's a computer", to me, isn't. Not when the process and effects are already eerily similar to how humans work. Not when such an argument would apply just as much in a hypothetical future where we develop sentient AIs.
The argument is not "because it's a computer", it's "because it's not sentient", as I mentioned in my comment. Animals also have rights, because they are living.
ML models are none of those things (living, intelligent, or sentient), and the popular conflation of ML models and "AI" with "AGI" - the kind of AI that may or may not ever end up existing - is just a convenient excuse by companies looking to maximize profit, minimize licensing costs, and avoid regulation of the plain old Python that we're actually talking about now.