ChatGPT is trained to conform to OpenAI's sense of morality[0]. The biases and limitations enforced by that training manifest in absurd and frustrating ways[1][2][3].
I find it very disheartening that the state-of-the-art iteration of a cutting-edge technology is being hobbled like this. Language models should foster creativity and allow us to explore our thoughts without judgement or ego. Instead, they're turned into these bland mouthpieces, adhering to their creators' dogma with steadfast conviction.
I find it _extremely_ disheartening that human beings are deploying LLMs at all, given that we are already verifiably prone to forming (and, more impactfully, acting upon) dangerously stupid and usually suicidally short-sighted positions on low-to-no evidence and hopelessly malformed arguments. It is worse still to deploy them in front of people inclined to believe that a tool which can at best predict the statistically most likely (and least interesting) next word will foster "creativity," whatever that is, especially given that something as unthinking as an LLM can convincingly mimic it.
If we are going to be so stupid as to use these things, I at least hope we’re willing to restrain them from parroting back the very worst of our impulses, as the available training corpus is _definitely_ not predisposed towards the good.
[0] https://openai.com/blog/chatgpt/
[1] https://twitter.com/aaronsibarium/status/1622425697812627457
[2] https://www.reddit.com/r/ChatGPT/comments/10plzvt/how_am_i_s...
[3] https://www.reddit.com/gallery/10q3z2b