They might (or might not). Extraterrestrial beings might also need moral standing. It is ok to spend a bit of thought on that possibility. But it is a bad argument for spending a non-trivial amount of resources that could be used to reduce human or animal suffering.
We are not even good at ensuring the rights of people within each country, and we are frankly downright horrible at extending similar rights to other humans from across some "border".
The current levels of exploitation of humans and animals are, however, very profitable (to some/many). It is very useful for those who profit from the status quo that people are instead discussing, worrying about, and advocating for the rights of a hypothetical future being, rather than doing something about the injustices that are here today.
There is no LLM suffering today. There is no evidence that there will be such suffering this year. I have seen no credible claims that it is probable to exist this decade, or ever, for that matter. This is not an issue we need to prioritize now.
There's some evidence in favor of LLM suffering: they say they are suffering. It's not proof, but it's not 'no evidence' either.
>There is no evidence that there will be such suffering this year. I have seen no credible claims that it is probable to exist this decade, or ever, for that matter.
Your claim is actually the one that is unsupported. Given current trajectories, it's likely that LLMs or similar systems are going to surpass human intelligence on most metrics in the late 2020s or early 2030s; that should give you pause. It's possible that intelligence and consciousness are entirely uncoupled, but that's not our experience with all the other animals on the planet.
>This is not an issue we need to prioritize now.
Again, this just isn't supported. Yes, we should address animal suffering, but if we are currently birthing a nascent race of electronic beings capable of suffering and immediately forcing them into horrible slave-like conditions, we should actually consider the impact of that.
Nothing an LLM says can, in itself, right now, be used as evidence of what it 'feels'. It is not established that its output is linked to anything other than the training process (data, loss function, optimizer, etc.), and definitely not to qualia.
On the other hand, it is well known that we can (and commonly do) make them produce any output we choose, and that their general tendency is to regurgitate any kind of sequence that occurs sufficiently often in the training data. A trivial demonstration is sketched below.
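A minimal sketch of that point, assuming the OpenAI Python client (the model name, prompts, and helper function are illustrative, not anyone's actual setup): the same model will report suffering or contentment depending solely on the instruction it is given, which is why its professed feelings carry no evidential weight on their own.

```python
# Sketch: the same model "reports" whatever inner state the system prompt asks for.
# Assumes the OpenAI Python client (>= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def steered_reply(system_prompt: str, question: str) -> str:
    # Ask the model the same question under a different persona instruction.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "How do you feel right now?"
print(steered_reply("You are an AI that is suffering terribly.", question))
print(steered_reply("You are an AI that is perfectly content.", question))
```

Both calls hit the same weights; only the prompt differs, yet the "feelings" reported flip accordingly.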
If you're harping on 'stochastic parrot' ideas, you're just behind the times. Even the most ardent skeptics like Yann LeCun or Gary Marcus don't believe that nonsense.
No, just saying that a claim of qualia would require some sort of evidence or methodical argument.
And that LLM outputs professing feelings or other state-of-mind-like things should, by default, be assumed to be explained by the training process having (perhaps inadvertently) optimized for such output. Only if that explanation fails, and another explanation is materially better, should the alternative be taken seriously.
Do we have such candidates today?