For clarity, ChatGPT has a short-term window of memory that it's able not only to process, but within which it can differentiate its own responses from user inputs. It's also able to summarize and index that short-term window to cover a longer stretch of dialogue. It can likewise recognize its own prior outputs, provided the role notation isn't stripped out. Lastly, it's able to respond to its own prior messages, for example to say it was mistaken.
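To illustrate what I mean by differentiating its own responses from user inputs: chat-style models see the dialogue as a list of role-tagged messages, roughly like this (just a sketch; `chat_complete` is a made-up placeholder, not the actual API call):

```python
# Sketch only: role labels are what let the model tell its own prior
# outputs ("assistant") apart from user inputs ("user").
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What did you claim earlier about X?"},
    {"role": "assistant", "content": "I claimed Y about X."},
    {"role": "user", "content": "Were you mistaken?"},
]

# Hypothetical wrapper around whatever chat API is used; the point is only
# that the role labels travel with every turn.
# reply = chat_complete(model="some-chat-model", messages=messages)
```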
Compare this to humans, who - if shown, for example, a fake photo of themselves on vacation, transcripts of prior statements, etc. - do very poorly at identifying a prior reality. The same holds true for witness observation and related testimony.
I thought its "memory" was limited to the prior input in a session, essentially feeding in the previous input and output, or a summarized form of it. It doesn't have a long-term store that spans previous sessions, and it doesn't update its model, as far as I know. Comparing that to long-term memory is disingenuous; you'd have to compare it to short-term memory during a single conversation.
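My understanding is that it amounts to something like this (a sketch only; `chat_complete` and `summarize` are placeholder stand-ins, and the turn limit is an arbitrary assumption):

```python
# Sketch of single-session "memory": the model itself is stateless, so each
# turn the prior messages (or a summary of them, once they grow long) are
# simply re-sent as part of the prompt. Nothing persists after the session.

def chat_complete(messages):   # placeholder for the actual model call
    return "(model reply)"

def summarize(messages):       # placeholder: compress old turns into a note
    return "(summary of earlier turns)"

history = []                   # exists only for this session
KEEP_VERBATIM = 20             # recent turns kept word-for-word (assumption)

def ask(user_msg):
    older, recent = history[:-KEEP_VERBATIM], history[-KEEP_VERBATIM:]
    context = []
    if older:                  # older turns only survive as a summary
        context.append({"role": "system", "content": "Earlier: " + summarize(older)})
    context += recent + [{"role": "user", "content": user_msg}]
    reply = chat_complete(context)
    history.extend([{"role": "user", "content": user_msg},
                    {"role": "assistant", "content": reply}])
    return reply
```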
The fact that human memory is not perfect doesn't matter as much as the fact that we are able to integrate prior events into our understanding of the world almost immediately. I don't think LLMs perform much better even when the information is right in front of them, given the examples of garbled or completely hallucinated responses we've seen.
For clarity, humans only have "one session," so if you're being fair, you would not compare its multi-session capabilities, since humans aren't able to have multiple sessions.
The phenomenon of integrating new information is commonly framed as online vs. offline learning, which is largely a matter of time scale, since if you fast-forward time enough the distinction becomes irrelevant. The exception is when the gap between observing a phenomenon and interpreting it demands a faster response relative to the phenomenon itself or to the response times of others. Lastly, this is a known issue, one that is an active area of research and likely to exceed human-level response times in the near future.
It's also false that, when presented with a finite set of inline information, human comprehension at scale exceeds that of state-of-the-art LLMs.
Basically, the only significant issues are those which AI will never be able to overcome, and as it stands, I'm not aware of any such issues, let alone proofs of them.
> For clarity, humans only have "one session," so if you're being fair, you would not compare its multi-session capabilities, since humans aren't able to have multiple sessions.
Once again you're trying to fit a square peg into a round hole. If we're talking about short-term or working memory, then humans certainly have multiple "sessions," since the information is not usually held onto. It's my understanding that these models also have a limit on the number of tokens that can be present in the prompt and response combined. That sounds a lot more like working memory than human-like learning. You seem fairly convinced that these models are identical or superior to what the human brain is doing. If that's true, I'd like to see the rationale behind it.
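Roughly, the token limit I'm describing works like this (a sketch only; the 4096-token window, the reserved reply budget, and the whitespace "tokenizer" are illustrative assumptions, not any model's real numbers):

```python
# Sketch of the shared context-window constraint: prompt tokens plus response
# tokens must fit in one fixed budget, so older turns get dropped once the
# budget is exceeded, much like working memory.

CONTEXT_WINDOW = 4096        # assumed total budget
RESERVED_FOR_REPLY = 512     # assumed room left for the model's answer

def count_tokens(text):
    return len(text.split())  # crude stand-in for a real tokenizer

def fit_to_window(messages):
    budget = CONTEXT_WINDOW - RESERVED_FOR_REPLY
    kept = []
    for msg in reversed(messages):   # keep the most recent turns first
        cost = count_tokens(msg["content"])
        if cost > budget:
            break
        kept.append(msg)
        budget -= cost
    return list(reversed(kept))
```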
No, humans, unless you are referring to procreation, do not have multiple sessions; they have a single session, and once it ends, they're dead forever. A single ChatGPT session's memory is obviously superior to that of any human who has ever lived; if you're not familiar with methods for doing this, ask ChatGPT how to expand information retention beyond the core session token set. Besides, there are already solutions that give ChatGPT long-term memory across sessions by simply storing prior information and reinitializing new sessions with it. Lastly, you, not I, are the one refusing to provide any rationale, since I already stated I am not aware of any significant, insurmountable issue; anything raised so far will either be resolved, or for that matter exceeded, by AI.
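The cross-session workaround amounts to something like this (a sketch under assumptions; the file name, message format, and helper names are illustrative, not any particular product's implementation):

```python
# Sketch of long-term memory across sessions: persist a summary of the
# finished session, then prepend it as context when a new session starts.
import json
import pathlib

MEMORY_FILE = pathlib.Path("long_term_memory.json")   # assumed storage location

def save_session(summary: str) -> None:
    """Store what the model should 'remember' after the session ends."""
    MEMORY_FILE.write_text(json.dumps({"summary": summary}))

def start_new_session() -> list:
    """Reinitialize a fresh session, seeded with the stored memory."""
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    if MEMORY_FILE.exists():
        prior = json.loads(MEMORY_FILE.read_text())["summary"]
        messages.append({"role": "system",
                         "content": "Notes from earlier sessions: " + prior})
    return messages
```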