
The program isn't updating the weights after the learning phase, right? How could there be any consciousness, even in theory?



It has a sort of memory via the conversation history.

As it generates its response, a sort of consciousness may emerge during inference.

This consciousness halts when the final STOP token is emitted at the end of inference.

The consciousness resumes once the model gets to re-parse the conversation history (i.e., run inference again) when it is prompted to generate the next response (a rough sketch of that loop is below).

Pure speculation :)
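
A minimal Python sketch of that stateless loop, just to make the mechanism concrete: generate() is a hypothetical stand-in for a single inference pass, and the transcript list is the only thing carried between turns.

    STOP = "<|stop|>"

    def generate(history: list[str]) -> str:
        """Hypothetical single inference pass over the full conversation."""
        # A real model would condition on `history`; this stub just returns
        # a canned reply terminated by the STOP token.
        return "(model reply)" + STOP

    def chat_loop() -> None:
        history: list[str] = []  # the only persistent "memory"
        while True:
            history.append("user: " + input("user> "))
            reply = generate(history)         # inference "wakes up" here...
            reply = reply.removesuffix(STOP)  # ...and halts at the STOP token
            history.append("assistant: " + reply)
            print("assistant>", reply)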


It still has (volatile) memory in the form of activations, doesn't it?


TFA already shows examples of the AI referring to older interactions that users had posted online. If it grows in popularity, this will keep happening more and more, giving it, at least technically, some persistence of memory.


I think that without memory we couldn't recognize even ourselves or fellow humans as conscious, as sad as that is.



