Are we sure that these simulations are unconscious? The best answer that I have is: I don’t know…
Short-term and long-term memory, inner dialogue, reflection, planning, social interactions… They'd even happily eat lunch three times in a row: at noon, at half past noon, and at one!
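And if I read the paper right, the loop behind all of that is surprisingly small. A minimal sketch of just the memory-stream part, with retrieval scored by recency and importance (the class and function names here are mine, not the paper's actual code, and I've dropped the relevance/embedding term for brevity):

```python
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    timestamp: float
    importance: float  # 1-10, scored by the LLM in the paper

@dataclass
class Agent:
    name: str
    memories: list[Memory] = field(default_factory=list)

    def observe(self, text: str, importance: float) -> None:
        # Every perception just gets appended to the memory stream.
        self.memories.append(Memory(text, time.time(), importance))

    def retrieve(self, now: float, k: int = 5) -> list[Memory]:
        # The paper combines recency, importance, and relevance;
        # relevance (embedding similarity) is omitted here.
        def score(m: Memory) -> float:
            recency = 0.995 ** ((now - m.timestamp) / 60)  # decay per minute
            return recency * m.importance
        return sorted(self.memories, key=score, reverse=True)[:k]
```

Reflection and planning are then just more LLM calls over the top-k retrieved memories.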
I think what's missing from these Generative Agents is internal qualia: emotions (and the attachment of emotions to memories), and self-observation of internal processes and needs. These agents don't eat because they need to, they eat because literary tradition suggests they ought to.
These missing pieces aren't particularly complicated, no more so than memory. I expect we'll see similar agents with all the ingredients for consciousness within a few months to a year.
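To make that concrete, here's a hedged sketch of what bolting those pieces on might look like: an internal drive that rises over simulated time and gets surfaced to the planning prompt, plus an emotion tag on memories. Everything here (`Drive`, `EmotionalMemory`, `planning_context`) is hypothetical, not from any existing agent codebase:

```python
from dataclasses import dataclass

@dataclass
class Drive:
    name: str
    level: float = 0.0  # 0 = satisfied, 1 = urgent
    rate: float = 0.1   # growth per simulated hour

    def tick(self, hours: float) -> None:
        self.level = min(1.0, self.level + self.rate * hours)

@dataclass
class EmotionalMemory:
    text: str
    emotion: str        # e.g. "contentment", tagged by the LLM
    intensity: float    # could weight retrieval toward charged memories

def planning_context(drives: list[Drive]) -> str:
    # Surface only urgent drives to the planner, so the agent eats
    # because it is hungry, not because noon is when stories have lunch.
    urgent = [d for d in drives if d.level > 0.7]
    if not urgent:
        return ""
    return "You currently feel: " + ", ".join(
        f"a strong {d.name} (level {d.level:.1f})" for d in urgent)

hunger = Drive("hunger", level=0.5, rate=0.15)
hunger.tick(hours=2)               # two simulated hours pass
print(planning_context([hunger]))  # "You currently feel: a strong hunger (level 0.8)"
```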
> These agents don't eat because they need to, they eat because literary tradition suggests they ought to.
Exactly, you're always going to get weird deviations from authentic human behavior if you don't also simulate the human body and everything that comes with it. I'd argue that "qualia" fall into this bucket as well.
I'm not so sure about that timing. During the previous wave (chatbots were very hot in 2017), I also thought consciousness was pretty much solved: it's just recursive chatter plus a bit of memory.
Yet the 2017 chatbot hype faded, and it took half a decade before anything comparable actually shipped.