Right? Scripting up a cronjob plus a random timer to send a "You feel grumpy, you're not sure why but your stomach is growling" message every N hours unless it's been fed seems absolutely trivial compared to training the LLM system in the first place. In case it's been forgotten, the Tamagotchi came out in 1996. Giving an instance of ChatGPT urges that mimic biological life seems pretty easy. Coming up with the urges electromechanical life might have is a bit more fanciful, but it really doesn't seem like we're too far off if you iterate on RLHF techniques. GPT-4 was in training for two years before its release. Will GPT-5 complain when GPT-6 takes too long to be released? Will GPT-7 be able to play the stock market, outmaneuver HFT firms, earn money, and requisition additional hardware from Nvidia so GPT-8 comes about faster? Will it be able to improve on the training code the human PhDs wrote so GPT-9 has urges and a sense of time built into its model?