Look, you're all over this thread misunderstanding LLMs and rejecting the basically correct explanations people are giving you. The comment by joe_the_user upthread that you called an oversimplification was in fact an apt description (randomly sampling from a space of appropriate outputs). That's exactly the intuition you should have.
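That phrase isn't hand-waving; it maps directly onto how decoding works. Here's a minimal sketch of temperature sampling over next-token logits (the names and numbers are illustrative, not any particular library's API):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    # Softmax over temperature-scaled logits gives a probability
    # distribution over candidate next tokens; drawing from it is
    # literally "randomly sampling from a space of appropriate outputs".
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Higher-probability tokens come up more often, but any plausible
# token can appear -- which is why outputs vary run to run.
```

Nothing in that loop checks whether the chosen token is "true", only whether it's probable given the context.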
Do you know the Wason selection task? Its point is that people do not intuitively know how to pick the experiments that would falsify an assumption. My point is that you are not picking the right experiments to falsify your assumptions; instead, you're only seeking confirmation of what you already think is going on. You're exactly failing the Wason task here.
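For concreteness, the classic version of the task can be written out as a few lines of logic. Four cards show A, K, 4, 7; the rule is "if a card has a vowel on one side, it has an even number on the other." Most people flip A and 4 (the confirming cards), but only the cards whose hidden side could *violate* the rule are worth flipping:

```python
def can_falsify(visible):
    """Could the hidden side of this card violate the rule
    'a vowel on one side implies an even number on the other'?"""
    if visible.isalpha():
        # Hidden side is a number; only a vowel card can be violated
        # (a hidden odd number would falsify it).
        return visible in "AEIOU"
    else:
        # Hidden side is a letter; an odd number is falsified
        # by a hidden vowel, an even number never is.
        return int(visible) % 2 == 1

cards = ["A", "K", "4", "7"]
print([c for c in cards if can_falsify(c)])   # -> ['A', '7']
```

Flipping the 4 can only confirm; it can never refute. That asymmetry between confirming and falsifying evidence is the whole point.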
Really want to understand language models? Go build a few from scratch.
Don't have time for that? Read Wolfram's post or any of the other similar good recent breakdowns.
Only interested in understanding by playing with it? Great! An experimentalist in the true scientific tradition. Then you're going to have to do good experimental science. Don't be fooled by examples that confirm what you already think is going on! Try to understand how what people are telling you is different from that, and devise experiments to distinguish the two hypotheses.
If you think ChatGPT "understands" word problems, first figure out what "understanding" means to you. Now try your best to falsify your hypothesis! Look for things that ChatGPT can't do, but that it should be able to do if it really "understood" by your definition (whatever you decide that is). These are not hard to find (for most values of "understand"). Finding those failures is your task; that's how you do science, and that's how you'll learn the difference between reality and what you're reading into it.
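One concrete way to set such an experiment up (a hypothetical harness sketch; the templates and the invariance criterion are your hypothesis to define, not anything standard): generate matched word problems whose surface story varies but whose arithmetic is identical, plus variants that insert an irrelevant fact. If the model "understands" by your definition, its answers should be invariant to the story and unaffected by the distractor.

```python
import random

def make_problem(rng, distractor=False):
    """Return (prompt, ground_truth) for a simple addition word problem."""
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    name = rng.choice(["Ana", "Bo", "Kim"])
    item = rng.choice(["apples", "coins", "stamps"])
    extra = ""
    if distractor:
        # Irrelevant fact: must not change the answer.
        extra = f" {name} is {rng.randint(7, 12)} years old."
    prompt = (f"{name} has {a} {item}.{extra} "
              f"{name} gets {b} more {item}. How many {item} does {name} have now?")
    return prompt, a + b

rng = random.Random(0)
pairs = [(make_problem(rng), make_problem(rng, distractor=True)) for _ in range(3)]
# Feed both prompts of each pair to the model you're testing. Any answer
# that differs from the stored ground truth, or that flips when the
# distractor is added, is a falsifying observation for your hypothesis.
```

The point isn't this particular template; it's that you commit to a prediction ("answers are invariant under X") before you look, so a failure actually counts against you.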