Well, I've been personally lied to about privacy claims by at least Google, Meta, Amazon, and Microsoft, some of it documented in court. OpenAI's communication has obviously been dishonest and shady at times if you keep track. All of the above have fallen in line with the current administration and any future demands it may make to pin down or cut off anyone opposing certain military acts against civilians, or anyone otherwise deemed politically problematic. DeepSeek's public security snafu does not instill confidence that they could keep their platform secure even if they tried. And so on.
The worst part to me is how little anyone seems to care about privacy - it's just how the world is. The US economy (or at least almost all e-marketing) seems to run on the idea that there's no such thing as privacy by default. It's not a subject that gets talked about nearly enough. Everything is already known by Uncle Sam regardless. It's really strange, or maybe fortunate, that we're basically at the place we often worried about, yet things haven't gone totally wrong yet. Corporate governance has been not that terrible (they know it's a golden goose they can't unkill). We'll see what happens in the next decade though - a company like Google, with so much data but losing market share, might be tempted to be evil, or in today's parlance, to have a fiduciary responsibility to juice people's data.
On the other hand, if AWS or Microsoft were caught taking customer data out of their clouds, their business would be over. I don't know that AI changes anything here; inference is just another app they sell.
> ...have policies saying that they will not train on your input if you are a paying customer.
Those policies are worth the paper they're printed on.
I also note that if you're a USian, you've almost certainly been required to surrender your right to air grievances in court and submit to mandatory binding arbitration for any conflict you'd otherwise have taken to the courts.
How many paying customers do you think would stick around with an AI vendor who was caught training new models on private data from their paying customers, despite having signed contracts saying that they wouldn't do that?
I find this lack of trust quite baffling. Companies like money! They like having customers.
If you pay attention, you see that the cost of reputational damage to large companies is very, very small. "The public" has a short memory, companies tend to think only about the next quarter or two, PR flacks are often very convincing to Management, and - IME - it takes a lot of shit for an enterprise to move away from a big vendor.
And, those who pay attention notice that the fines and penalties for big companies that screw the little guys are often next to nothing compared with that big company's revenue. In other words, these punishments are often "cost of doing business" expenses rather than actual deterrents.
So, yeah. Add to all that a healthy dose of "How would anyone but the customers with the deepest pockets ever get enough money to prove such a contract violation in court?", and you end up with a profound lack of trust.
The single biggest productivity boost you can get in LLM world is believing them when they make those promises to you!