
It will 100% have something in its training set discussing a human doing this and will almost definitely spit out something similar.


That's a good point, but all it means is that we can't test the hypothesis one way or the other, since we can never be entirely certain that a given task isn't somewhere in the training data. Supposing that "AIs can't" is then just as invalid as supposing that "AIs can".



