I fall apart when responding outside the bounds of my training data, too. Does that imply I’m not thinking?
This idea is often used to argue that LLMs will never be capable of novel idea generation, but I don’t think it’s a good argument.
For one, the LLM has such a large breadth and depth of knowledge that it could conceivably learn relations between concepts in a way that no human has before.
Secondly, novel ideas occur at the margins. Very rare is the case where someone comes up with a fundamentally new idea out of the blue. Instead, novel ideas arise just at the edge of one’s expertise. If you dial up the temperature of an LLM, it will generate novelty, and then it’s just a matter of evaluating merit.
Iterated inference at the margins of LLM knowledge will lead to novel knowledge synthesis.
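Since the argument hinges on what "dialing up the temperature" actually does, here is a minimal, self-contained sketch (the numbers and names are illustrative, not from any particular model): raising the temperature flattens the next-token distribution, so lower-probability continuations get sampled more often, and the remaining work is evaluating those candidates for merit.

```python
# Hedged sketch: temperature-scaled sampling over toy next-token logits.
# Higher temperature flattens the distribution, so rarer tokens appear more often.
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample an index from logits after temperature scaling (softmax)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy logits: one dominant choice, several unlikely ones.
logits = [4.0, 1.0, 0.5, 0.1]
low_t  = [sample_with_temperature(logits, 0.2) for _ in range(1000)]
high_t = [sample_with_temperature(logits, 2.0) for _ in range(1000)]
print("non-top tokens at T=0.2:", sum(i != 0 for i in low_t) / 1000)
print("non-top tokens at T=2.0:", sum(i != 0 for i in high_t) / 1000)
```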
> I fall apart when responding outside the bounds of my training data, too. Does that imply I’m not thinking?
You can use reasoning. Whether LLMs can is a matter of research and debate. I'm not an astrobiologist but if someone claimed that frogs live on Pluto, I would never hallucinate an answer in which I confidently assert that they do.
I would argue that the vast majority of people don't come up with really novel ideas either (myself included). Most people just develop existing ideas, and maybe apply them in new contexts.
Now, when you say that, do you mean they don't come up with ideas they have never heard of, or ideas that no one has ever heard of? It's not as obvious that most people don't reinvent existing ideas that are new to them.
I would say they rarely come up with ideas no one has ever heard of.
An idea they haven't heard of is unlikely to be truly novel, and is more likely just the application of some existing idea in new circumstances.
(But this is starting to verge on a philosophical discussion.)
The reason I brought this up is that I was previously discussing the potential of AI in science, and my take was that, given how rare truly novel ideas are, I could believe that AI will in the future make progress comparable to what many scientists are doing.
The rarity of entirely novel ideas is not the point of contention. What matters is the ability to synthesize fresh concepts from a personal standpoint, akin to how crows and primates can navigate unprecedented situations.
Take book writing as an example: while it may seem that all conceivable themes have been explored, an individual writer can still originate unique storylines and concepts without prior exposure to similar ideas.
Language models, on the other hand, do not truly invent new ideas; they amalgamate existing ones from their vast repository of training data. What appears to be novel is, upon closer inspection, a recombination of pre-existing information and concepts.
"Okay Google tell me what 5 flowers would say discussing shoe sizes with 28 pigs". There, thinking outside of the box, delivered. ChatGPT a nice story.