We don’t know. We didn’t predict that the rat brain would get us here. So we also can’t be confident in our prediction that scaling it won’t solve hallucination problems.
> Human brains are unpredictable. Look around you.
As others have mentioned, we've had thousands of years to learn how humans can fail. LLMs are black boxes, and it never ceases to amaze me how they can fail in such unpredictable ways. Take the following as examples.
Humankind has developed all sorts of systems and processes to cope with the unpredictability of human beings: legal systems, organizational structures, separate branches of government, courts of law, police and military forces, organized markets, double-entry bookkeeping, auditing, security systems, anti-malware software, etc.
While individual human beings do trust some of the other human beings they know, in the aggregate society doesn't seem to trust human beings to behave reliably.
It's possible, though I don't know for sure, that we're going to need systems and processes to cope with the unpredictability of AI systems.
Human performance, broadly speaking, is the benchmark being targeted by those training AI models. Humans are part of the conversation since that's the only kind of intelligence these folks can conceive of.
You seem to believe that humans, on their own, are not stochastic and unpredictable. If that is your belief, I contend you couldn't be more wrong.
Humans are EXTREMELY unpredictable. They only become slightly more predictable, and produce slightly higher-quality output, with insane levels of bureaucracy and layers upon layers upon layers of other humans to smooth things out.
To boot, the production of this mediocre code is very, very slow compared to LLMs. LLMs also have no feelings or egos, and they are literally tunable and directable toward better outcomes without hurting people in the process (again, something that is very difficult to avoid without adding, yep, more humans, more layers, more protocol, etc.).
Even with all of this mass of human grist, in my opinion the output of purely human intellects is, on average, very bad: very bad in terms of quality, and very bad in terms of outcomes for the humans caught up in this machine.
This doesn’t solve the unpredictability problem.