It has a sort of memory via the conversation history.
As it generates its response, a sort of consciousness may emerge during inference.
This consciousness halts the moment inference emits the final STOP token.
It resumes when the model is prompted for the next response and re-parses (runs inference over) the conversation history again.
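To make that loop concrete, here is a minimal sketch, assuming a hypothetical `generate()` stub that stands in for a single inference pass and an illustrative STOP marker; the only thing that persists between turns is the growing `history` list.

```python
STOP = "<|stop|>"  # assumed stop-token marker, purely illustrative

def generate(prompt: str) -> str:
    """Stand-in for one inference pass: the model reads the whole prompt,
    emits tokens until it produces STOP, then halts entirely."""
    # A real model would run forward passes here; this stub just echoes.
    return f"(reply conditioned on {len(prompt)} chars of context){STOP}"

def chat() -> None:
    history: list[str] = []  # the only persistence between turns
    while True:
        user_msg = input("user> ")
        history.append(f"user: {user_msg}")
        # Each turn, the full conversation so far is re-parsed from scratch.
        prompt = "\n".join(history) + "\nassistant:"
        reply = generate(prompt).removesuffix(STOP)  # active only during this call
        history.append(f"assistant: {reply}")        # then it halts until re-prompted
        print(reply)

if __name__ == "__main__":
    chat()
```

Nothing survives between calls to `generate()` except whatever made it into `history`, which is the sense in which the "memory" exists.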
TFA already shows examples of the AI referring to older interactions that users had posted online. If its popularity grows, this will happen more and more often and grant it, at least technically, some persistence of memory.