Hacker News new | past | comments | ask | show | jobs | submit login

Don't they test the models before rolling out changes like this? All it takes is a team of interaction designers and writers. Google has one.



Chatgpt got very sycophantic for me about a month ago already (I know because I complained about it at the time) so I think I got it early as an A/B test.

Interestingly at one point I got a left/right which model do you prefer, where one version was belittling and insulting me for asking the question. That just happened a single time though.


I'm not sure how this problem can be solved. How do you test a system with emergent properties of this degree that whose behavior is dependent on existing memory of customer chats in production?


Using prompts know to be problematic? Some sort of... Voight-Kampff test for LLMs?


I doubt it's that simple. What about memories running in prod? What about explicit user instructions? What about subtle changes in prompts? What happens when a bad release poisons memories?

The problem space is massive and is growing rapidly, people are finding new ways to talk to LLMs all the time


Yes, this was not a bug, but something someone decided to do.




Consider applying for YC's Fall 2025 batch! Applications are open till Aug 4

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: