Hacker News

> It never once considered backing up and verifying that there wasn’t some misunderstanding.

Of course not; ChatGPT doesn't "consider". It doesn't think, it doesn't know. It can't identify that there was a misunderstanding of its own volition.

All ChatGPT does is use a (very sophisticated!) statistical analysis to generate text that conforms to an expectation of what a human response to a similar prompt might look like. It has been trained well insofar as it is able to produce responses that seem like a human might have written them, but it doesn't exhibit cognitive processes like "reconsidering" because it doesn't have any.
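To make "statistical text generation" concrete, here's a deliberately tiny sketch: a bigram model that picks the most frequent next word observed in a toy corpus. This is nothing like ChatGPT's actual architecture (transformers over subword tokens, billions of learned weights), but it illustrates the core point that the output is driven by statistics over training data, not by deliberation.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the word that most often follows `word` in the corpus."""
    return following[word].most_common(1)[0][0]
```

Here `most_likely_next("the")` returns `"cat"`, simply because "cat" follows "the" more often than "mat" or "fish" do; there is no step anywhere in which the model "considers" whether that continuation is a misunderstanding.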




Wow never heard this comment before


Comments of that nature will continue so long as there are people who don't understand how language models work (or choose to misrepresent them).



