
> we have no way of knowing in advance what the capabilities of current AI systems will be if we are able to scale them by 10x, 100x, 1000x, and more

This doesn’t solve the unpredictability problem.




We don't know. We didn't predict that something on the scale of a rat brain would get us here, so we also can't be confident in predicting that scaling it further won't solve hallucination problems.


No, it doesn't "solve" the unpredictability problem.

But we haven't solved it for human beings either.

Human brains are unpredictable. Look around you.


> Human brains are unpredictable. Look around you.

As others have mentioned, we've had thousands of years to understand how humans can fail. LLMs are black boxes, and it never ceases to amaze me how they fail in such unpredictable ways. Take the following examples.

Here GPT-4o mini is asked to calculate 2+3+5

https://beta.gitsense.com/?chat=8707acda-e6d4-4f69-9c09-2cff...

It gets the answer correct, but if you ask it to verify its own answer

https://beta.gitsense.com/?chat=6d8af370-1ae6-4a36-961d-2902...

it says the response was wrong, and contradicts itself. Now if you ask it to compare all the responses

https://beta.gitsense.com/?chat=1c162c40-47ea-419d-af7a-a30a...

it correctly identifies that GPT-4o mini was incorrect.

It is this unpredictable nature that makes LLMs insanely powerful and scary.

Note: The chat on the beta site doesn't work.
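
For anyone who wants to poke at this themselves, here's a rough sketch of the same three-step experiment using the OpenAI Python SDK. The model name matches the one above, but the prompts are my stand-ins, not the exact ones from the linked chats:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(messages):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
        )
        return resp.choices[0].message.content

    # Step 1: ask for the sum.
    history = [{"role": "user", "content": "Calculate 2+3+5."}]
    answer = ask(history)

    # Step 2: ask the model to verify its own answer, in the same chat.
    history += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Verify your previous answer. Is it correct?"},
    ]
    verification = ask(history)

    # Step 3: show both responses to a fresh chat and ask it to compare.
    comparison = ask([{
        "role": "user",
        "content": "A model was asked to calculate 2+3+5 and answered:\n"
                   + answer
                   + "\n\nAsked to verify itself, it then said:\n"
                   + verification
                   + "\n\nWhich of the two responses is correct?",
    }])
    print(answer, verification, comparison, sep="\n---\n")

Don't expect step 2 to contradict step 1 on every run; the failure is intermittent, which is rather the point.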


How are humans relevant here? For example, we operate at different speeds.


Humankind has developed all sorts of systems and processes to cope with the unpredictability of human beings: legal systems, organizational structures, separate branches of government, courts of law, police and military forces, organized markets, double-entry bookkeeping, auditing, security systems, anti-malware software, etc.

While individual human beings do trust some of the other human beings they know, in the aggregate society doesn't seem to trust human beings to behave reliably.

It's possible, though I don't know for sure, that we're going to need systems and processes to cope with the unpredictability of AI systems.
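
As a toy illustration of what such a process might look like for an LLM (my own sketch, not something from the thread): sample the same question several times independently, accept the answer only when a clear majority agrees, and escalate to a human otherwise.

    from collections import Counter

    def ask_model(question: str) -> str:
        # Stand-in for a real completion call (OpenAI, Anthropic, ...).
        raise NotImplementedError

    def audited_answer(question: str, samples: int = 5, quorum: float = 0.6) -> str:
        """Sample the model several times; accept only a clear majority."""
        answers = [ask_model(question).strip() for _ in range(samples)]
        best, count = Counter(answers).most_common(1)[0]
        if count / samples >= quorum:
            return best
        # No consensus: treat the model as untrusted and escalate.
        raise ValueError(f"no consensus among samples: {answers}")

It's the same pattern as auditing or double-entry bookkeeping: don't trust any single unreliable actor; require independent agreement before acting.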


Human performance, broadly speaking, is the benchmark being targeted by those training AI models. Humans are part of the conversation since that's the only kind of intelligence these folks can conceive of.


Are you expecting AIs to be more reliable because they're slower?


You seem to believe that humans, on their own, are not stochastic and unpredictable. I contend that if this is your belief then you couldn't be more wrong.

Humans are EXTREMELY unpredictable. Humans only become slightly more predictable, and producers of slightly higher-quality outputs, with insane levels of bureaucracy and layers upon layers upon layers of humans to smooth things out.

To boot, the production of this mediocre code is very, very slow compared to LLMs. LLMs also have no feelings or egos, and are literally tunable and directable to produce better outcomes without hurting people in the process (again, something that is very difficult to avoid without the inclusion of, yep, more humans, more layers, more protocol, etc.).

Even with all of this mass of human grist, in my opinion, the output of purely human intellects is, on average, very bad. Very bad in terms of quality of output and very bad in terms of outcomes for the humans involved in this machine.



