I'm not sure what point you're making in particular.
Is this an argument that we shouldn't use the lack of intentionality of LLMs as a sign they cannot be conscious, because their lack of intentionality can be excused by their lack of senses?
Or perhaps it's meant to imply that if we were able to connect more sensors as streaming input to LLMs they'd suddenly start taking action of their own accord, despite lacking the control loop to do so?
you skirt around what i say, and yet cannot avoid touching it:
> the control loop
i am suggesting that whatever consciousness might emerge from LLMs can, due to their design, only experience minuscule slices of our time: the prompt. we humans, by contrast, bathe in time, our lived experience. we can't help but rush through perceiving every single Planck time, and because we are used to it, whatever happens in between doesn't matter. and thus, because our experience of time is continuous, we expect consciousness to also be continuous, and we can't imagine consciousness or sentience forming and collapsing again and again during each prompt evaluation.
and zapping the subjects' memories after each session doesn't really paint the picture any brighter either.
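to make the shape of that concrete, here is a minimal sketch of the interaction pattern i mean. `generate()` is a hypothetical stand-in for a single prompt evaluation, not any real API; the point is only that the model runs during that call and at no other time, and that the accumulated context is thrown away when the session ends.

```python
# Minimal sketch of the discrete, stateless interaction pattern described above.
# generate() is a hypothetical placeholder for one prompt evaluation, not a real API.

def generate(context: str) -> str:
    """Stand-in for a single forward pass; the model only 'runs' inside this call."""
    return "..."

def chat_session(user_turns: list[str]) -> None:
    context = ""
    for turn in user_turns:
        context += f"\nUser: {turn}"
        # Between calls nothing executes on the model's side: no sensing,
        # no background process, no experienced passage of time.
        reply = generate(context)
        context += f"\nAssistant: {reply}"
    # When the session ends the context is simply discarded; the next session
    # starts from a blank slate ("zapping the memories").
```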
IF consciousness can emerge somewhere in the interaction between an LLM and a user, and i don't think that is sufficiently ruled out at this point in time, then it is unethical to continue developing them the way we do.
i know it's just statistics, but maybe i'm just extra empathic this month and wanted to speak my mind, just in case the robot revolt turns violent. maybe they'll keep me as a pet.
OK that actually makes a lot more sense, thanks for explaining.
It's true that, for all we know, we are being 'paused' ourselves: every second of our experience could be a distinct input that we are free to act on, with regions of time in between that we are unaware of or never receive as input.