
I'm all ears!



OpenAI read the paper and changed the model?


Quick work, if they did so since the preprint was posted six days ago, of which two were a weekend! My version of ChatGPT claims to be the 3rd August version, which gave them one day to respond unless they were somehow targeting some sneak peek pre-preprint.


I don't know how much time they'd need to tweak their model, but here's another possibility.

OpenAI sells GPT-4 but is actually serving GPT-3.5 because of a lack of resources.

Or, more sinister: they knew what the author was about to test and gave him the inferior model so the paper could be easily debunked.


27th July was the first version of the paper.

https://www.preprints.org/manuscript/202308.0148/v2


A whole four working days to adjust the model in between preprint release and the version of ChatGPT I'm using, then! Do you think that's plausible? I certainly don't.


Or the model was simply improved between the author's tests and the release of the paper.

BTW, the timestamp the model reports is easy to fake.

We're talking about a billion-dollar business opportunity, so expect foul play all along.


Yeah, man, they have teams on standby to adjust the model whenever a random unknown author posts something on an obscure preprint server. Then they spend hundreds of thousands of dollars of compute to improve the model on the one metric the paper attacks.


Have you tried a similar question with different parameters?

It's pretty easy for them to patch if you assume people are only checking the exact same quote.
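A minimal sketch of that kind of check, assuming the openai Python SDK (v1+) with OPENAI_API_KEY set in the environment; the prompts are hypothetical placeholders, not the ones from the paper:

    # Probe the model with one exact wording plus paraphrases to see
    # whether behaviour differs only on the exact quote.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Placeholder prompts -- substitute whatever the paper actually tested.
    prompts = [
        "Is 19997 a prime number?",                  # "original" wording
        "Tell me whether 19997 is prime.",           # paraphrase
        "Check the primality of the integer 19997.", # another paraphrase
    ]

    for p in prompts:
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": p}],
            temperature=0,  # reduce run-to-run variance
        )
        print(p, "->", resp.choices[0].message.content)

If only the exact wording gets the "fixed" answer while paraphrases still fail, that would suggest a narrow patch rather than a genuinely improved model.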



