Sure. But my point was that being frozen in time is not an inherent property of LLMs.
Fine-tuning the entire model is very expensive, but fine-tuning a tiny parallel piece using LoRA is cheap in both compute and storage.
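To make "tiny parallel piece" concrete, here's a minimal PyTorch sketch of the LoRA idea (toy names of my own, not anyone's production code): the original weights stay frozen, and only two small low-rank matrices are trained alongside them.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a small trainable low-rank update in parallel."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the big original weights are never touched
        # Only these two tiny matrices are trained.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero-init: starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        # Original path plus a low-rank correction: W x + scale * B A x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

At rank 8 on a 4096x4096 layer, that's about 65K trainable parameters instead of ~16.8M, which is why storing one adapter per user is cheap.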
OpenAI could already have implemented an auto-update feature without telling us.
In the future, I can see them selling a premium feature where you have your own LoRA add-on that is continuously trained on your interactions with the model, so you get your own personalized GPT-4.
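That per-user setup is already easy to prototype with open models. A hedged sketch using Hugging Face's peft library follows; the model id and adapter path are placeholders, not anything OpenAI actually does:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PeftModel, get_peft_model

base = AutoModelForCausalLM.from_pretrained("some-open-base-model")  # placeholder id

# One tiny adapter per user; the shared base weights stay frozen.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
user_model = get_peft_model(base, config)
# ... periodically fine-tune user_model on that user's recent interactions ...
user_model.save_pretrained("adapters/user_12345")  # stores only the adapter, a few MB

# At inference time: a fresh copy of the base + that user's adapter = personalized model.
fresh_base = AutoModelForCausalLM.from_pretrained("some-open-base-model")
personalized = PeftModel.from_pretrained(fresh_base, "adapters/user_12345")
```

The point is that the provider keeps one shared base model and just swaps in a few megabytes of adapter weights per customer.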