It's not just the pretraining; it's the entire scaffolding between the user and the LLM itself that contributes to the illusion. How many people would keep assuming these chatbots were conscious or intelligent if they had to build their own context manager, memory manager, system prompt, personality prompt, and interface?
The fact that ChatGPT's pretraining is a "complete the next token" task has no bearing on how people actually end up using it.
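For anyone who hasn't built one, here's a rough sketch of what that scaffolding amounts to. All names are hypothetical, and the stubbed `complete()` stands in for any raw completion endpoint; the point is that the model only ever sees one long transcript it's asked to continue, and everything "conversational" is bolted on around it:

```python
SYSTEM_PROMPT = "You are a helpful, friendly assistant."  # the "personality prompt"

def complete(prompt: str) -> str:
    """Placeholder for a raw completion model: text in, continuation out.
    Swap in a real endpoint here; the canned reply just keeps the demo runnable."""
    return "I'm not a real model, but the scaffold around me looks like a chat."

class ChatScaffold:
    def __init__(self, max_turns: int = 20):
        self.history: list[tuple[str, str]] = []  # (speaker, text) pairs
        self.max_turns = max_turns                # crude "memory manager": keep recent turns only

    def _build_prompt(self, user_msg: str) -> str:
        # The "context manager": flatten the conversation into a single prompt,
        # framed as a transcript so the model continues the dialogue.
        lines = [SYSTEM_PROMPT]
        for speaker, text in self.history[-self.max_turns:]:
            lines.append(f"{speaker}: {text}")
        lines.append(f"User: {user_msg}")
        lines.append("Assistant:")
        return "\n".join(lines)

    def send(self, user_msg: str) -> str:
        reply = complete(self._build_prompt(user_msg)).strip()
        self.history.append(("User", user_msg))
        self.history.append(("Assistant", reply))
        return reply

# The "interface": a read-eval-print loop that makes a text-continuation
# model feel like a conversational partner.
if __name__ == "__main__":
    bot = ChatScaffold()
    while True:
        print(bot.send(input("> ")))
```

Once you've assembled this yourself, the "partner" dissolves back into what it is: a function that continues a transcript you constructed.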