Bing won't decide anything; it will just interpolate between similar conversations it has seen before. If it was trained on text where someone lies to or misinforms another person about the safety of a plant, it will respond similarly. If it was trained on accurate, honest conversations, it will give the correct answer. There's no magical decision-making process here.
If the state of the conversation can make Bing "hate" you, the human behaviors in its training set can just as easily make it mislead you. No deliberate decisions, only statistics.
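The point can be illustrated with a toy bigram model (purely illustrative, nothing like Bing's actual architecture or scale): the "answer" is just a statistically likely continuation of whatever the training text contained.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    # Count how often each word follows each other word.
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def next_word(counts, word):
    # Sample the next word in proportion to how often it
    # followed `word` in training. No reasoning, no intent.
    options = counts[word]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# If the training text says the plant is toxic, the model says so;
# train it on a lie instead and it will repeat the lie just as readily.
corpus = ["this plant is toxic to cats"]
model = train_bigrams(corpus)
print(next_word(model, "is"))  # "toxic" — the only continuation it has seen
```

Swap the training sentence for a false one and the output changes with it; the mechanism never "decides" to be honest or deceptive either way.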