No, because the LLM isn't just copying from the same text. Rather, it's "classifying" the text using self-attention and then (supposedly) applying a simple Markov chain. The classification is the hard part: how do you know which text from the training data is "similar" to the prompt text?
From the blog post, for example:
Original string: 'And only l'
Similar strings: 'hat only l' 's sickly l' ' as\nthey l' 'r kingly l'
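
If it helps, here's a rough sketch of that two-step idea (mine, not the blog post's code): a crude character-overlap score stands in for whatever "similar" means to the attention layers, and the characters that follow the matched training contexts become a Markov-style next-character distribution.

    # Sketch only: positional character overlap as a stand-in for
    # attention-based similarity, then a next-character tally as the
    # "simple Markov chain" step.
    import random
    from collections import Counter

    def similar_contexts(corpus, tail, k=4):
        n = len(tail)
        scored = []
        for i in range(len(corpus) - n):
            window = corpus[i:i + n]
            # crude similarity: number of positions where the characters match
            score = sum(a == b for a, b in zip(window, tail))
            scored.append((score, i))
        scored.sort(reverse=True)
        return [i for _, i in scored[:k]]   # indices of the k most "similar" windows

    def next_char(corpus, prompt, context_len=10, k=4):
        tail = prompt[-context_len:]
        # tally the character that follows each similar context in the corpus
        counts = Counter(corpus[i + len(tail)]
                         for i in similar_contexts(corpus, tail, k))
        chars, weights = zip(*counts.items())
        return random.choices(chars, weights=weights)[0]

    # e.g. next_char(shakespeare_text, "And only l") would sample from whatever
    # characters follow 'hat only l', 's sickly l', etc. in the training text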