Do they often publish transparency reports about their ML systems that cannot identify months-long spam campaigns where scammers post identical messages tens of thousands of times?
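To be clear about how low a bar that is: identical-message campaigns don't even need ML to catch. A minimal sketch (hypothetical data shape, not Twitter's actual pipeline) that flags any message text posted at campaign volume just by hashing and counting:

    import hashlib
    from collections import Counter

    def flag_duplicate_campaigns(posts, threshold=10_000):
        # posts: iterable of message strings - a hypothetical shape,
        # not Twitter's real schema.
        counts = Counter()
        for text in posts:
            # Normalize trivially so "Buy now!" and " buy now!" collide.
            digest = hashlib.sha256(text.strip().lower().encode()).hexdigest()
            counts[digest] += 1
        # Any text repeated tens of thousands of times is a candidate campaign.
        return {h: n for h, n in counts.items() if n >= threshold}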
Can you explain how the team added value to shareholders who want to maximize their long-term ROI (which is the job of the CEO)?
We're living in a world where the voting rights of pensioners have been delegated to big companies like Vanguard and BlackRock. They are clearly not acting in the best interests of shareholders, which would be their duty, and instead vote politically.
That's easy. I work for a company that has a similar body. It's a PITA, but they prevent over-zealous POs and engineers from building creepy things that get the brand in trouble. (E.g., "why don't we record our users and predict their sentiment from facial expressions and tone of voice" - those people will tell you why that's a bad idea.) There, hope that helps.
That just sounds like your product owners and legal department have outsourced one of their core competencies to another department, perhaps to reduce their own accountability.
If you look at the medical industry, for example, ethical decisions are always managed by a group separate from Product/Legal, because they require a completely different set of skills and competencies.
Even more so at places like Twitter, which are heavily dependent on ML models to make real-time decisions. So you need a dedicated team that is proactive rather than reactive, as a Legal team would be.
Putting the "what will our users think" question in an ethical framing is not obviously (to me) going to benefit users. (Especially when modern fashionable ethics centers on utilitarian normative ethics, but that's beside the point.)
Right. The real point is of course to make sure what gets built is good for the brand - that revealing how something works won't make people mad, disgusted, less likely to use its services, do business with it, etc. Of course hardly anybody really cares about the ethics per se - only about the perception of the company.
Some companies, such as Procter & Gamble or Apple, care about their brand equity a lot, since they rely on it to charge above-market premiums. Twitter needs to care about that too: if it gets caught doing something unsavoury it will turn toxic to its advertisers, or at least those who care about their brands. I am not a marketing exec, but if Twitter now drops the pretense of caring about ethics you will see major advertisers pull out, and make a point of communicating why they did it. That's the ROI on having functioning ethics teams.
POs have different incentives - they usually get rewarded for features built. A good PO has lots of ideas; some of those may be good for the product ("we will serve more relevant ads if we spy on our users!") and bad for the brand. ML Ethics teams are usually part of the legal team and are staffed mainly with lawyers; not sure how it was at Twitter. It's also entirely possible those teams did not do a very good job, considering what a sh1thole Twitter is, so they may as well do without one.
I consider many of the practices of social media companies - designing for dopamine hits - to be unethical, but it has been very lucrative for shareholders for more than a decade. It's still lucrative, but shareholders are taking it on the chin for other macro reasons.
But even that approach does not work for very long. With regulators tightening laws everywhere, and eyeing bans on fully automated decision-making in some industries (e.g. mine - HR), you either staff those internal ethics teams with intelligent, well-connected lawyers, or you get truly nasty surprises, jeopardize the brand itself, and lose $$$.
Ideally, the society in which these companies operate should provide the safety rails, allowing a company to do whatever it legally can to maximize profits.
But that wouldn't be the crony capitalism that we're stuck with now. Also, things have been moving too fast for a long time, making it nearly impossible for governments to keep up.
I guess you have zero understanding of Twitter's business model, which even Elon grasps to some degree. Advertisers are extremely sensitive about brand safety, and they will simply cut their budgets if they see a certain amount of risk in a publisher. YT had to implement various brand-safety measures after Elsagate to appease advertisers. After Elon took over, advertisers immediately cut their advertising budgets on Twitter because they see it as a brand-safety risk - an existential threat to the platform. This is a REAL problem, which you just don't appreciate.
Rising political opposition and talk of government intervention in companies like Facebook and Twitter are very much a threat to the future stability and ROI of those companies. Fewer and fewer people buy the argument that "The Algorithm" just does what it does and isn't influenced by its builders.
As political discourse and more elections are swayed by rage-bait and disinformation pumped into voters' retinas by these platforms, the political risks to these platforms and their bottom lines will increase (and become wildly unpredictable).
ML ethics and accountability are beneficial both to society and to any company with an interest in self-preservation.
People who talk about regulation of social media should spend 5 minutes reviewing the Supreme Court's approach to regulating speech over the last 25 years. Never going to happen.
I don't see the purpose of this condescending comment; you could have explained why and provided some knowledge to the person who posted the initial statement.
If you don't know, you're the problem, tbh.