
For the "which is heavier" question, does it always pick the latter option?


I doubt it. Mitsuku, a purely rule-based chatbot, was already able to answer almost all questions of this form correctly in 2014, simply by querying a large knowledge base of common-sense facts (sketched below).[1] On the neural-net side, Google's seq2seq models could answer questions like this around 2016-2017, although I have no idea about the accuracy.

It would be more remarkable if GPT-3 couldn't solve these types of questions; if it seems to fail on them, that might be yet another problem with the prompt design.

[1] Incidentally, the article is wrong in claiming that the state of the art before modern neural nets was Eliza. Rule-based chatbots got quite advanced in 2013-2016, although they admittedly were never capable of the sort of "true" understanding and long-term coherence that GPT-3 seems to display.
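
For the curious, here's a minimal sketch of that lookup approach. Everything below is a hypothetical stand-in: the facts table, the regex, and the word list are mine, not Mitsuku's actual AIML rules or knowledge base, which are far more elaborate.

    import re

    # Hypothetical common-sense facts: typical mass in kilograms.
    TYPICAL_MASS_KG = {
        "feather": 0.001,
        "pencil": 0.01,
        "toaster": 1.5,
        "elephant": 5000.0,
    }

    def which_is_heavier(question):
        """Answer 'which is heavier, X or Y?' by pure table lookup."""
        m = re.search(r"which is heavier,? (?:an? )?(\w+) or (?:an? )?(\w+)",
                      question.lower())
        if not m:
            return "I don't know."
        x, y = m.groups()
        if x in TYPICAL_MASS_KG and y in TYPICAL_MASS_KG:
            return x if TYPICAL_MASS_KG[x] > TYPICAL_MASS_KG[y] else y
        return "I don't know."

    print(which_is_heavier("Which is heavier, a toaster or a pencil?"))  # toaster

No understanding required: as long as both objects are in the table, the comparison is a single lookup, which is why this class of question was tractable for rule-based bots years before GPT-3.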


Good observation. It is possible that GPT picked up on this statistical tendency. I participated in seven Turing tests, and most of the "X or Y" questions had the latter option as the correct answer, regularly enough that I set my program to pick it by default when it didn't know (sketched below). Since GPT picks up on statistical patterns in word order, I find it likely that the same thing is happening here.
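
That fallback is trivial to express in code. A minimal sketch, where the lookup function is a hypothetical stand-in for my program's real knowledge base:

    def answer_choice(question, first, second, lookup):
        # Try the knowledge base first (hypothetical: returns the
        # correct option, or None when the answer isn't known).
        known = lookup(question, first, second)
        if known is not None:
            return known
        # Otherwise default to the latter option, which in my
        # experience was correct more often than not.
        return second

    # With no knowledge available, the bot guesses the latter option:
    print(answer_choice("Is it X or Y?", "X", "Y",
                        lambda q, a, b: None))  # Y

A cheap heuristic like this beats chance on human-written "X or Y" questions, which is exactly the kind of regularity a statistical model could absorb from its training data.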



