Hacker News

Except it learned by direct examples, and somehow none of these models can solve even basic logic that isn't in their training data.

Us having the ability doesn’t mean we’re close to reproducing ourselves, either



>Except it learned by direct examples

What do you mean by "it learned"? From my perspective, evolution is nothing more than trial and error on a big timescale. In a way we are much more than evolution, because we are a conscious intelligence controlling the trial and error, shortening the timescale substantially.

Take robots that can walk, for example. Instead of starting with a fish that slowly, over thousands or even millions of years, moves to land and grows limbs over many generations, we can just add legs we have already tested in simulation at 10x or more real-time speed.

AI (or AGI) potential should not be measured on nature's timescale.


Have you tried giving it basic logic that isn't in its training data?

I have. gpt-3.5-instruct required a lot of prompting to keep it on track. Sonnet 4 got it in one.

Terence Tao, one of the most prominent mathematicians alive, says he's been getting LLM assistance with his research. I would need about a decade of training to be able to do in a day any math that Tao can't do in his head in less than 5 seconds.

LLMs suffer from terrible, dementia-like distraction, but they can definitely do logic.



