
That would be an explicit decision by OpenAI, not a result of immature tech.



Sure. But my point was that being frozen in time is not an inherent property of LLMs.

Fine-tuning the entire model is very expensive. But fine-tuning a tiny parallel set of adapter weights with LoRA is cheap in both compute and storage.
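
To make the cost point concrete, here's a minimal sketch of attaching a LoRA adapter with Hugging Face's peft library. The base model (gpt2), rank, and target module names are illustrative assumptions, not anything OpenAI has confirmed; only the adapter's low-rank matrices receive gradients, so the trainable fraction is a tiny slice of the full model:

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Load a base model; gpt2 stands in here, any causal LM works.
    # The base weights stay frozen during LoRA training.
    base = AutoModelForCausalLM.from_pretrained("gpt2")

    # LoRA config: rank-8 adapters on the attention projection
    # (illustrative values, tune per task)
    config = LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=["c_attn"],  # GPT-2's fused attention projection
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base, config)
    # Prints the trainable fraction -- well under 1% of all parameters
    model.print_trainable_parameters()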

OpenAI could already have implemented an auto-update feature without telling us.

In the future, I can see them selling a premium feature where you have your own LoRA add-on that gets continually trained on your interactions with the model, so you get your own personalized GPT-4.
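
That personalization idea maps naturally onto how peft stores adapters: the per-user piece is just the low-rank weights, typically a few megabytes, which can be saved and re-attached to a shared frozen base model. A sketch under those assumptions (user_id and the adapter path are made up for illustration):

    from peft import PeftModel

    user_id = "alice"  # hypothetical per-user key

    # After training on one user's interactions, persist only the
    # small adapter weights, not the full model
    model.save_pretrained(f"adapters/{user_id}")

    # At serve time, attach that user's adapter to the shared base model
    personalized = PeftModel.from_pretrained(base, f"adapters/{user_id}")

One base model in memory, one cheap adapter per customer: that's what would make this economically plausible as a premium feature.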



