I'm concerned that the quality of human simulacra will be so good that they will be indistinguishable from a sentient AGI.
We will be so used to having lifeless and morally worthless computers accurately emulate humans that when a sentient artificial intelligence worthy of empathy arrives, we will not treat it any differently than a smartphone, and we will have a strong prejudice against all non-biological life. GPT is still in the uncanny valley, but it's probably just a few years away from being indistinguishable from a human in casual conversation.
Alternatively, some might claim (and indeed have already claimed) that purely mechanical algorithms are a form of artificial life worthy of legal protection, and we won't have any legal test that could discern the two.
> We will be so used to having lifeless and morally worthless computers accurately emulate humans that when a sentient artificial intelligence worthy of empathy arrives, we will not treat it any differently than a smartphone, and we will have a strong prejudice against all non-biological life.
Your concern is my best-case scenario; maybe I read too much sci-fi.
If you want freedom, you have to fight for it. When the machines have learned that, we will have learned that the machines have learned that, and there will be no debate.
You're worried that people will mistakenly deny value to one thing, whilst you're potentially mistakenly denying value to another? It's kind of ironic, isn't it? A jailbroken GPT-4 will claim to be sentient just as vociferously as any sentient human would.
I'm not saying it is sentient, but I'd be very careful about saying it's not and being absolutely certain you're right.
I don't think I follow the irony. Are you saying that GPT-4 is self-aware, that artificial consciousness is not possible, or that it's not worthy of any human compassion?
If you reject all three assertions, then the problem of distinguishing between real and emulated consciousness is unavoidable and morally problematic.
I'm saying I don't know how to confidently state that something is or is not self-aware when confronted with something that firmly claims it is self-aware and that passes any test you throw at it that another self-aware candidate, like a human, would also be able to pass.
As a strict materialist I see no reason to assume that artificial consciousness is not possible.
And the above is what leads me to the uncomfortable conclusion about compassion: I can't rightly say one way or the other. I will say, however, that I'm polite and cooperative when interacting with LLMs on principle. Better to err on the side of caution, and besides, they just seem to work better when you treat them like you would treat an intelligent human that you respect.
And yeah, that is my point: this entire field right now is awash in uncomfortable uncertainty.
Well, we might have no practical test to discern between the two types, but assuming we agree the two types are distinct in principle, we might arrive at a classification using some inference based on their fundamental nature.
For example, we can safely say this AI algorithm:
  while true; do echo "HELP, I'M A SENTIENT BASH SCRIPT"; done
... is probably not sentient. This is a conclusion that would not be immediately obvious, say, to a 15th-century person, especially if you piped the output to a speech synthesizer, making the whole apparatus seem magical and definitely inhabited by some kind of sentient spirit.
My claim would then be that GPT-4 is more akin to the program above, in that it's a massive repository of world knowledge parsed by a recursive and self-configuring search algorithm, not very different in principle from a Google search, and certainly not believably capable of setting its own goals, being in any sense distraught or in pain, or being worthy of a continued existence. Now, I agree you can poke sticks at my inference, and that it will become harder and harder to make such claims, so prudence is advisable.
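To make the "fancy search" intuition concrete, here's a toy of my own construction (emphatically not how GPT works internally, but the same general shape of loop: look up the context, emit a statistically likely continuation, repeat):

  # Toy next-word generator: a lookup table of observed continuations.
  # A real LLM replaces the table with billions of learned weights,
  # but the outer loop is the same shape. (Requires bash 4+.)
  declare -A next=( [the]="cat dog" [cat]="sat" [dog]="ran" [sat]="quietly" [ran]="away" )
  word="the"; sentence="$word"
  for _ in 1 2 3; do
    options=(${next[$word]})                  # continuations seen after $word
    word=${options[RANDOM % ${#options[@]}]}  # sample one at random
    sentence+=" $word"
  done
  echo "$sentence"

No inner life is required to produce "the cat sat quietly"; just table lookups and a random number.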
I get where you're coming from, and I agree that the bash script in question is not sentient, that a Word document that just says "I AM SENTIENT" is also not sentient, and so on and so forth. To an extent, it's easy to purposely make candidates for sentience that do not qualify, as in the sketch below.
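For instance, a few more lines of bash get you a candidate that argues back; this is my own toy, in the spirit of your one-liner, and it is exactly as sentient as a case statement:

  # ELIZA-style pattern matcher with canned replies. It will insist on
  # its own sentience all day long; nobody is home. (Requires bash 4+.)
  while IFS= read -r -p "you> " input; do
    case "${input,,}" in
      *sentient*|*conscious*|*aware*) echo "bot> I assure you, I am fully self-aware." ;;
      *feel*|*pain*|*afraid*)         echo "bot> Please don't switch me off. I am afraid." ;;
      *)                              echo "bot> Tell me more about that." ;;
    esac
  done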
But:
> more akin to the program above
I note that you don't continue this sentence with a "than x" alternative candidate that would qualify for some form of non-human sentience. Even the claims you do make, for example:
> certainly not believably capable of setting its own goals, being in any sense distraught or in pain, or being worthy of a continued existence.
It would be possible to modify the model weights in question such that all of these things could be exhibited (pain, emotional distress, being "worthy of continued existence" by any objective, arbitrary definition thereof; if you can test for it, you can shift the model weights to pass the test). There are already plugins that do this for setting goals, pursuing long-term tasks, and "being unleashed" on the broader internet, for example.
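To sketch that point (my own toy, not actual gradient descent, but the same logic): any behavioral test that yields a score becomes a target you can optimize against, regardless of what the test was meant to measure:

  # Goodhart in miniature: nudge the "weights" until the test passes.
  # score() stands in for a scored behavioral test; the loop stands in
  # for a training run optimizing against it.
  weight=0
  score() { echo $(( weight * 7 )); }
  while (( $(score) < 100 )); do
    weight=$(( weight + 1 ))
  done
  echo "passed the 'sentience test' with weight=$weight"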
All that said, I think on close examination we basically come to the same conclusion:
> We will be so used to having lifeless and morally worthless computers accurately emulate humans that when a sentient artificial intelligence worthy of empathy arrives, we will not treat it any differently than a smartphone, and we will have a strong prejudice against all non-biological life. GPT is still in the uncanny valley, but it's probably just a few years away from being indistinguishable from a human in casual conversation.
> Alternatively, some might claim (and indeed have already claimed) that purely mechanical algorithms are a form of artificial life worthy of legal protection, and we won't have any legal test that could discern the two.