gpt-4 is clearly capable of constructing and interacting with a model of the world -- there was a famous example in microsoft's "sparks of agi" paper where they described a complex maze and had it reason its way out. 'predicting what it thinks is most similar to stackoverflow' etc does not explain any of its behavior in a meaningful way, unless you are willing to apply the same description to human intellectual labor. gpt-4 is clearly not sentient, but what is shocking about it is that it demonstrates that honest-to-god cognition can be decoupled from sentience.
btw, i have tended to see these pessimistic takes from people who have only used 3.5, which is impressive but not reliable -- i strongly recommend using 4 in the course of your job to see how useful it is. every day it produces novel lines of thought that shock me.