This sort of thing highlights for me that what people see as "AI" is not some kind of "raw" AI making its own decisions about how to respond. There are humans turning knobs to make the AI behave a certain way. The AI will be a sycophant or not depending on how the humans tune it.
On the one hand, this could be comforting, since it means humans are still in control on some level and it's not a SkyNet situation. On the other hand, it's horrible, because it means users think they're dealing with some kind of "autonomous" AI, when in fact they're dealing with a product that may have deliberately built-in biases to sell them certain products, push certain viewpoints, or whatever. When you use ChatGPT you're not using "AI"; you're just using an OpenAI product that OpenAI can and will manipulate to get the best outcome for OpenAI.