
“GPT-4 can also be confidently wrong in its predictions, not taking care to double-check work when it’s likely to make a mistake. Interestingly, the base pre-trained model is highly calibrated (its predicted confidence in an answer generally matches the probability of being correct). However, through our current post-training process, the calibration is reduced.”

Interesting that the post-training process has that effect. Calibration here means the model's stated confidence tracks its empirical accuracy; a sketch of how that's typically measured is below.
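A minimal sketch of expected calibration error (ECE), one common way to quantify the calibration the report describes. The data here is hypothetical, and this is just an illustration of the metric, not the evaluation OpenAI actually ran:

    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=10):
        """Bin answers by stated confidence, then compare the mean
        confidence in each bin to the empirical accuracy there.
        ECE is the bin-weighted average of those gaps; a perfectly
        calibrated model scores 0."""
        confidences = np.asarray(confidences, dtype=float)
        correct = np.asarray(correct, dtype=float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (confidences > lo) & (confidences <= hi)
            if mask.any():
                gap = abs(confidences[mask].mean() - correct[mask].mean())
                ece += mask.mean() * gap  # weight by fraction of answers in bin
        return ece

    # Hypothetical per-answer confidences and correctness labels:
    conf = [0.9, 0.8, 0.95, 0.6, 0.7]
    hit  = [1,   1,   1,    0,   1]
    print(expected_calibration_error(conf, hit))

The report's claim, in these terms, is that the base model's ECE is low (confidence matches accuracy) and that post-training raises it.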


