
We know that our brains change, and we know equally well that ChatGPT does not.

If it were able to modify its own model and permanently apply those changes to itself, I’d be a lot more worried.



GPT-4 has 32k tokens of context. I'm sure someone out there is implementing the pipework for it to use some of that context as a scratchpad under its own control, in addition to its input.
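A minimal sketch of what such pipework might look like, assuming a model that is prompted to emit an updated `SCRATCHPAD:` section alongside its `ANSWER:`. The `call_model` function is a hypothetical stand-in for a real LLM API call, and the markers and budget are assumptions, not any actual product's protocol:

```python
# Hypothetical sketch: reserve part of the context window as a
# model-controlled scratchpad that persists across turns.

SCRATCHPAD_BUDGET = 2000  # characters reserved for the scratchpad (assumption)

def call_model(prompt: str) -> str:
    # Stand-in for a real completion call; returns a canned reply that
    # contains the SCRATCHPAD:/ANSWER: sections the pipeline parses out.
    return "SCRATCHPAD: user prefers concise answers\nANSWER: Sure, here you go."

def run_turn(user_msg: str, scratchpad: str) -> tuple[str, str]:
    """Prepend the persistent scratchpad to the new message, then let the
    model rewrite the scratchpad in its own reply."""
    prompt = f"[SCRATCHPAD]\n{scratchpad}\n[USER]\n{user_msg}"
    reply = call_model(prompt)
    new_pad, answer = scratchpad, reply
    if "SCRATCHPAD:" in reply and "ANSWER:" in reply:
        pad_part, answer = reply.split("ANSWER:", 1)
        new_pad = pad_part.replace("SCRATCHPAD:", "").strip()
    return answer.strip(), new_pad[:SCRATCHPAD_BUDGET]  # enforce the budget

answer, pad = run_turn("Keep it short please.", "")
```

The key design choice is that the model, not the harness, decides what survives into the next turn; the harness only enforces the size budget.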

In the biological metaphor, that would be individual memory, on top of species-level evolution through fine-tuning.


Yeah, I’m doing that to get GPT-3.5 to remember historical events from other conversations. It never occurred to me to let it write its own memory, but that’s a pretty interesting idea.


ChatGPT changes when we train or fine-tune it. It also has access to local context within a conversation, and those conversations can be fed back as more training data. This is loosely analogous to a hard divide between short-term and long-term learning.
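The feedback loop described above can be sketched as a small serializer: each finished conversation (short-term, in-context) gets appended as one line of a JSONL training file (long-term). The record shape mirrors the common chat fine-tuning format, but the exact schema and file name here are assumptions for illustration:

```python
# Hypothetical sketch of the short-term / long-term split: conversations
# from the context window are logged as fine-tuning data for later training.
import json

def conversation_to_finetune_record(messages: list[dict]) -> str:
    """Serialize one conversation as a single JSONL training line."""
    return json.dumps({"messages": messages})

conversation = [
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "4"},
]

line = conversation_to_finetune_record(conversation)
# Appending `line` to e.g. train.jsonl accumulates the long-term dataset.
```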


We cannot do that today, but how many days is that away? Is it measured in the hundreds, the thousands, or more?

I feel we're uncomfortably close to the days of self-modifying learning models.




