I'm as impressed as anyone with GPT-3 samples, but you're sort of ignoring the symbol grounding elephant in the room regarding language models (https://openreview.net/pdf?id=GKTvAcb12b).
Language models are not grounded learners. The language produced does not really correspond meaningfully to our world except in superficial (albeit complex) ways.
Do you have thoughts on how to move forward on this problem? Maybe ask GPT-3 and see what it thinks :P
The problem, if I understand correctly, is that we're feeding enormous amounts of text to language models in the hope that those texts contain, hidden in their patterns, enough information about the real world for prodigiously complex NNs to extract it and build their own representation of reality.
And while this is possible, it feels like there should be more effective ways to impart knowledge of reality, if only we had huge databases of usable data to feed to these NNs instead of dumps of text. At the moment it feels like we're trying to teach advanced physics to a subject with no previous knowledge of physics or math by just feeding it everything on arXiv and every physics textbook, in random order. What you get is someone who can produce text that mimics the superficial style of scientific articles, but with an extremely confused understanding of the subject, if any at all.
I am happy to take them at their word that their theory about symbol grounding proves that no LM will ever be able to solve "Three plus five equals" (appendix B); and thus, by modus tollens, GPT-3's ability to (already) solve "Three plus five equals" means their theory is wrong and I need not consider it any further.
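For anyone who wants to check that premise themselves, here's a minimal sketch of the test, assuming access to GPT-3 through the (since-deprecated) completions endpoint of the openai Python library as it existed at the time; the "davinci" engine, zero temperature, and the pass/fail check are my own choices, not anything taken from the paper:

```python
# Sketch: does GPT-3 complete the arithmetic prompt from Appendix B?
# Assumes the legacy openai Python completions API and an API key in the
# OPENAI_API_KEY environment variable; "davinci" was the base GPT-3 engine.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="davinci",       # base GPT-3 model
    prompt="Three plus five equals",
    max_tokens=3,           # only need a word or two
    temperature=0,          # greedy decoding, no sampling noise
)

completion = response["choices"][0]["text"].strip().lower()
print(repr(completion))
# If the completion starts with "eight", the predicted failure case
# from the paper does not hold for this prompt.
print("solved" if completion.startswith("eight") else "not solved")
```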
Symbol grounding is only as much of a problem for AI as the question of whether our own use of language is meaningful. Does our language encode particular models of the world? Yes? Good. Then AI models also encode models of the world.