
I agree, though I would place a high base probability on most self-explanations being ChatGPT-like post-hoc reasoning without much insight into the actual cause of a particular decision. As someone below says, the split-brain experiments seem to suggest that our conscious mind is just reeling off bullshit on the fly. Like ChatGPT, it can approximate a correct-sounding answer.

