> The ideas I'm expressing have merit -- but are they wholly original?
I think the difference is that your intention is to write sentences which you think and believe are correct and which can help other people understand the subject. A chatbot has no intentions of its own; it only imitates texts it has read from the internet. As far as it is concerned, they might be totally wrong. You, on the other hand, have the ability -- and the desire -- to reason about what you are saying and consider whether it is actually true or not.
I'm not sure I can support the idea that the agent doesn't have the ability to reason -- try to give it any complex (text-based) puzzle you can think of and it'll do just about as well as an average person, oftentimes much better.
And I think you may be wrong to say the chatbot doesn't have intentions -- its intention, based on its training, is to accurately predict the next character. It doesn't care in the same way we do, sure -- but you could make a case (and it will be made in courts in the next few years, I don't doubt) that these agents do have desires that are analogous to our own, by the nature of their training process.
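For concreteness, here's a minimal sketch of what that training objective actually looks like (assuming a PyTorch-style setup; the toy model and random batch are hypothetical stand-ins for a real language model and corpus):

```python
# Minimal sketch of the "predict the next character" objective.
# The linear model and random token batch are hypothetical stand-ins.
import torch
import torch.nn.functional as F

vocab_size = 256  # byte-level "characters"
model = torch.nn.Linear(vocab_size, vocab_size)  # stand-in for a real LM

# A batch of token ids: the input is the sequence, the target is the
# same sequence shifted left by one position.
tokens = torch.randint(0, vocab_size, (4, 16))
inputs, targets = tokens[:, :-1], tokens[:, 1:]

# One-hot encode and score every position; a real model would embed
# tokens and run attention layers here instead.
logits = model(F.one_hot(inputs, vocab_size).float())

# The entire "desire": minimize cross-entropy between predicted and
# actual next tokens. Nothing else is ever optimized.
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
```

Whether you call that last line a "desire" is exactly the question, but it is the only quantity the training process ever pushes on.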
I don't know where that leaves us to be honest, but it's an interesting topic to discuss.
You say:
> its intention, based on its training, is to accurately predict the next character.
then you say:
> these agents do have desires that are analogous to our own
I think it's very inhuman to have a single desire, which is to predict the next character. That is not analogous to our desires.
And the intention to predict the next character is not the intention of the chatbot; it is the intention of whoever created the chatbot or whoever is using it for that purpose.
AI is a tool created by humans to fulfill the intentions of those humans.
Is it the intention of a gun to kill people? No, it is the intention of the human who uses the gun for such a purpose. Is it the intention of AI to predict the next character? No, that is the intention of the human who uses AI for such a purpose.
Clearly, having a single life desire and nothing else is inhuman -- I completely agree!
The full quote was:
> but you could make a case (and it will be made in courts in the next few years, I don't doubt) that these agents do have desires that are analogous to our own
Analogous doesn't mean "the same"; it means "somehow similar."
However, I would challenge you to consider more specifically why this type of desire is different from our own desires. Besides the biological machinery, what makes this type of desire different from ours?
There is no "desire" in the computer. Therefore the question of whether its desire is different from our desires is meaningless, because it does not have a desire.
A computer just executes instructions. It doesn't matter to it whether its desires are fulfilled or not. Humans, on the other hand, do have desires: if we get thirsty, we suffer unless our desire to drink is fulfilled.
We don't know enough about how the brain works to be able to say that our "intent" is any more than a combination of memory and an after-the-fact rationalisation of a stochastic process.