
I think the fundamental flaw of this paper is that it _starts_ from the assumption that it can't reason and takes any demonstration of flawed reasoning as evidence that it can't reason _at all_. But there are many examples of ChatGPT output that I would argue aren't possible _without_ some form of reasoning, and even a _single_ example of that is proof that it can reason, no matter how many failures and counterexamples there are.

It seems to me that focusing on understanding exactly how and under what conditions it can and can't reason would be a much more interesting paper than making a blanket, totally unsupportable claim that it _can't_.



You can argue they're not possible without reasoning, sure. But how do you prove that?

Showing that it repeatedly fails at multiple classes of reasoning problems is much harder evidence than positive examples that merely seem right.


But showing that it fails to reason about _any number_ of problems doesn't prove that it can't reason. It doesn't matter how many negative cases there are if there's a _single_ positive case.

You can observe any number of white swans and that will never be proof that black swans do not exist, but a single observation of a black swan does prove that they exist.



