
Bing won't decide anything; it will just interpolate between similar conversations it has previously seen. If it's been trained on text that includes someone lying to or misinforming another person about the safety of a plant, then it will respond similarly. If it's been trained on accurate, honest conversations, it will give the correct answer. There's no magical decision-making process here.
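To make that concrete, here's a toy sketch in Python (purely illustrative, nothing like Bing's actual model): a bigram sampler that can only reproduce the statistics of whatever text it was trained on.

    import random
    from collections import defaultdict

    def train_bigrams(corpus_tokens):
        # Count how often each token follows each other token.
        counts = defaultdict(lambda: defaultdict(int))
        for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
            counts[prev][nxt] += 1
        return counts

    def sample_next(counts, prev):
        # No reasoning, no decision: just sample proportionally to how
        # often each continuation followed `prev` in the training data.
        followers = counts.get(prev)
        if not followers:
            return None
        tokens, weights = zip(*followers.items())
        return random.choices(tokens, weights=weights)[0]

    # If the training text says the plant is safe, the model says so too;
    # if it contained misinformation, the model would reproduce that instead.
    corpus = "the plant is safe to eat . the plant is safe to touch".split()
    model = train_bigrams(corpus)
    print(sample_next(model, "is"))  # -> "safe", the only continuation it ever saw

Whatever the training text claimed is exactly what comes back out; there is no step where the model weighs whether the claim is true.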


If the state of the conversation can make Bing "hate" you, the human behaviors in its training set could just as well make it mislead you. No deliberate decisions, only statistics.





