Humans "hallucinate" in the AI way constantly, which is why I don't see them as a barrier to LLMs replacing humans in many contexts. It really isn't unusual for a human to make stuff up or be unaware of stuff.
This claim is ambiguous. The use of the word "Humans" here obscures rather than clarifies the issue. Individual humans typically do not "hallucinate" constantly, especially not on the job. Any individual human who is as bad at their job as an LLM should indeed be replaced, by a more competent individual human, not by an equally incompetent LLM. This was true long before LLMs were invented.
In the movie "Bill & Ted's Excellent Adventure," the titular characters attempt to write a history report by asking questions of random strangers in a convenience store parking lot. This, of course, is ridiculous, and more a reflection of Bill and Ted's extreme laziness than anything else. Today, the lazy Bill and Ted would ask ChatGPT instead. It's equally ridiculous to defend the wild inaccuracy and hallucinations of LLMs by comparing them to average humans. It's not the job of humans to answer random questions on any subject.
Human subject-matter experts are not perfect, but they're much better than average and rarely hallucinate within their own domains. They also have accountability and paper trails, and can be individually discredited for gross misconduct, unlike LLMs.
A human being informed of a mistake will usually be able to resolve it and learn something in the process, whereas an LLM is more likely to spiral into nonsense.
It's true that the big public-facing chatbots love to admit to mistakes.
It's not obvious to me that they're better at admitting their mistakes. Part of being good at admitting mistakes is recognizing when you haven't made one. Humans tend to lean too far toward denying error, but that shouldn't suggest the right amount of pushback is... less than zero.
Do you think the OpenAI human, when informed of their "oopsie," replied: "You're right, there is existing evidence that this problem has already been solved. Blah blah blah ... and that's why our new model has made a huge breakthrough against previously unsolved math problems!"
Humans are a bit better at knowing which things are important and doing more research. Also better at being honest when directly pressed. And infinitely better at learning from errors.
(Yes, not everyone, but we do have some mechanisms to judge or encourage these behaviors.)
This is more and more clearly false. Humans get things wrong certainly, but the manner in which they get things wrong is just not comparable to how the LLMs get things wrong, beyond the most superficial comparison.
It's the same thing with self-driving: if you can make it safer than a good human driver, that's enough. But the bar is pretty low with driving (as evidenced by the hundreds of thousands of collisions, deaths, and permanent disabilities each year), and rather high in scientific publishing.
Retail investors? No way. The fever dream may continue for a while, but eventually it will end. Meanwhile, we don't even know our full exposure to AI. It's going to be ugly, and beyond burying gold in my backyard I can't even figure out how to hedge against this monster.