Sure, but GPT certainly cannot actually do this. It can iterate, but in GPT-world that means "give me another response", not "learn from what I'm saying, and give me another response".
The issue is, sometimes those two things seem to be the same thing!
Eh? GPT definitely uses the earlier messages in the conversation as context, so as you clarify, its responses get much closer to the target. If you haven't tried GPT-4, you absolutely should before forming strong opinions about it.
Sorry, my question sounds snippy now that I reread it. It sounds like you're qualified to have strong opinions about it :-)
And that's fair: you're exciting the existing network, not training it or fundamentally changing its weights. But you can do a lot with that excitation, because it's already got a lot to work with. And I don't think this is very far off from how people work with new ideas in the immediate term - when they first hear about them, they think of them in terms of things they already understand.
It has been my experience that it’s generally capable of getting closer to the target, but maybe I just haven’t tried to push it past its capabilities.
I agree that GPT is very impressive, but just today, for example, I asked it how to make a "virtual column" in Postgres, and it spun its gears.
I pasted the error I got from Postgres, and it said "oh, it's because you're using this function. here try this solution that doesn't use that function!"
And the solution... used that function!!
It wasn't even a complex ask: a concat of two other columns. I admit it was a good pointer in the right direction ("generated column" vs. the "virtual column" I came up with), and it was really impressive that it produced a syntactically correct solution (though it didn't work).
But if, after many prompts (I exhausted my GPT-4 quota), it can't do something as simple as keep a function out of its new solution: a function it acknowledged doesn't work, that I told it doesn't work, and that the error says doesn't even exist...
It's bad. It just isn't going to do much here past the initial "get me started", which, I do admit, is really impressive!
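For reference, what I was actually after ends up looking something like this (table and column names made up; note that Postgres only supports the STORED flavor of generated columns, not VIRTUAL ones):

    -- Concatenate two existing columns into a generated (stored) column.
    ALTER TABLE people
      ADD COLUMN full_name text
      GENERATED ALWAYS AS (first_name || ' ' || last_name) STORED;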
Parent is only half wrong; given a sufficiently long iteration process, any of the ChatGPT versions would surely start losing coherence about far-past requests. This may be less evident when it's used within a well-confined and (somewhat) easily self-referenced system (like Blender, for example), but it's especially evident when trying to prompt GPT to write a fiction story from scratch, or otherwise work entirely on its own without a place to store outputs and then refer back to them.
tl;dr: it's easier to tell ChatGPT "Rewrite this story: " and then feed back its previous outputs when writing a story than it is to get to an acceptable output from massively detailed prompts or long chains of iteration; this trait has far-reaching consequences beyond just writing fiction.
I do understand, however, that 'long-term memory' is a very active point of discussion and development.
What I am saying is you can't get GPT to solve any problem you throw at it. What you can probably do is get it to give you a correct answer that it was trained on. Those are different things.
You can't teach it new information, and often that is required to solve a problem.
The issue is, sometimes those two things seem to be the same thing!