> Effectively, they (LLMs) can't think new thoughts.
This is true only if you assume that combining existing thought patterns doesn't count as new thinking. If a pattern isn't present in the training data, then yes, they'd be stuck. But the training data keeps growing and being updated, so each new version can learn more patterns.