it wouldn't be a very _good_ feature if it was, because there are SO MANY INSTANCES. Blocking one large one is not going to stop anything, there's no pattern to the domains, and... just no. I mean yes, you could go "oh, this looks like a sub-URL of a Mastodon server, let's block that", but there are side effects to that, and it's also pretty easy for Mastodon devs to circumvent.
I'm willing to bet Twitter has a whitelist for sites with user generated content to prevent whole domains from being hell banned if one person says something naughty. Perhaps Mastodon has yet to be added to this list?
We are in an age when incompetence and bugs have the same effect as malice, and by the time the bug is fixed, the malicious aftereffects have already taken place and are irreparable.
Assuming malice by default is the very least we could do to fight back.
For example, a cop can be incompetent and accidentally fire a gun at a person; they can be punished for the mistake, but the victim is already dead.
One story that often comes up: automated ML systems will have false positives every once in a while. But when the false positive affects a big customer it is fixed more promptly than when it affects a competitor...
One of many examples of why we shouldn’t hold corporations to the low standards we hold individuals.
I.e. never give a corporation the benefit of the doubt. They’re the most important producers of our economy, and doing so allows them to be lazy (aka the complete opposite of “productive”)
You should talk to your congressperson to voice your opinion. I agree there should be ways to get things fixed in a timely manner. I got locked out of my old Google account once as a Google employee and got zero help, which was frustrating, so I understand.
I'm not saying you're wrong, but that just sounds like an incredible amount of work for something that would presumably be discovered anyway, you know?
Wouldn't it be much easier to simply blacklist and put out some bland statement about "not supporting 'hate speech'" or something? It doesn't matter if it's true or not; it just matters that they paint a picture of themselves opposing something "bad". Lying just seems a lot easier than going to all of that effort.
It doesn't require more work than what they're presumably already implementing for spam detection (unless I'm vastly overestimating big tech's abilities); it's a small touch on an already-being-implemented system, with the bonus of being able to block competitors to some extent, without direct liability.
Well, it's not my job, but even just by looking at spam accounts on Twitter I can easily pinpoint common patterns that would be extremely successful at preventing spam there, MUCH better than blacklisting domains.
It's dead obvious that they do not want it. The reason why is beyond me.
Of course everyone makes mistakes, but that's not the point.
Twitter is just too powerful, with all those engineers and access to ML, for this to be just a mistake.
Whoever is responsible for that flagging is either:
- Incredibly incompetent and didn't know what they were doing (which I honestly don't want to believe), or
- Knew exactly what they were doing, and it's intentional, with the benefit of plausible deniability.
Unintentionally blocking a whole domain "by mistake" falls into the first point above.
Knowing the track record social media companies have of hiring good engineers and behaving badly, and knowing that Mastodon is a competitor to Twitter, I don't think I need to say much more.
Is it a competitor though? The developer salary it would take to implement such a feature and push it out at Twitter, versus the close-to-zero threat Mastodon poses to Twitter, seems like a bad tradeoff if we're being conspiratorial.
But the difference is that incompetence is far more plentiful than malice. It's an odds game. Everyone makes mistakes, few actions people in general take are done in malice.
Is every Exxon decision malice then? Like opening a gas station in your neighborhood because it didn't have one? Sure, it has profit in mind, but is it evil or malicious?
I've been trying to make an email address with a gTLD my primary personal one. It's so annoying having so many forms say "that's not a valid email!" just because they never got the message that things like .ninja, .dev, .duck, and .fun are all valid TLDs these days. No, their form says emails can only be .com, .net, .org, .mil, or maybe a handful of ccTLDs.
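This kind of validation bug is easy to reproduce. A quick sketch in Python; both regexes below are illustrative, not any real site's validation:

```python
import re

# The kind of overly strict pattern many signup forms still use: it
# hardcodes a short list of "known" TLDs instead of accepting any valid one.
NAIVE_EMAIL = re.compile(
    r"^[\w.+-]+@[\w-]+(\.[\w-]+)*\.(com|net|org|mil|edu|gov)$"
)

# A more permissive check: something@something.tld with a 2+ letter TLD.
# (Real validation should just send a confirmation email anyway.)
PERMISSIVE_EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$")

for addr in ["alice@example.com", "bob@tools.dev", "carol@dojo.ninja"]:
    print(addr, bool(NAIVE_EMAIL.match(addr)), bool(PERMISSIVE_EMAIL.match(addr)))
```

The naive pattern accepts the `.com` address and rejects both gTLD addresses, even though all three are perfectly valid.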
> Please build a system with 100% availability under load and with no false positive. I'll wait.
Seriously? This is absolutely trivial: you don't build a system like this in the first place, as whatever is failing here--and 100% so if it is a system actively designed to prevent people from posting links to their own timeline--isn't providing value: it is just a bigger surface on which there are going to be random bugs and false positives. I will thereby repeat: building systems that occasionally set themselves up for mistakes IS THE MALICE. I am not claiming that making a mistake is malicious, or that only malicious people make mistakes: we all make mistakes, so we should thereby do everything we can to limit our power and refuse to build systems that might fail.
(edit: I added a paragraph here but it came after a reply, so I moved that paragraph--without further edit--to my comment reply to that reply comment.)
A link preview adds value for those reading the posts. That it can sometimes fail to process a link and prevent users from posting is not a sure sign of malice; it's possibly incompetence (in the way the system was designed or implemented, in the business rule of requiring the link to validate successfully before it can be posted, etc.).
I'm a little lost at what your point is, because if you think the above is malice, there's malice in every poorly implemented feature. Say Instagram does a rough check to ensure no nudity is contained in the images you're posting, and that system goes down, preventing you from posting to your own IG -- is that malice too?
Actually, a fraudster can design their site so that when the preview service is the one rendering the page it returns an error, but when a legitimate user makes the request it serves the bad content.
This is known as cloaking.
The safe thing to do is to block the submission instead of letting it through anyway, or potentially downrank the tweet, or graylist it, or put it into a failed queue for investigation, or something along those lines.
I think the crowd here would be delighted if Twitter allowed spam links because someone had figured out a way to fool the preview service.
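A minimal sketch of the cloaking trick described above; the bot signatures and page bodies here are illustrative, not real crawler fingerprints:

```python
# A cloaking server inspects the User-Agent and serves different content
# to preview crawlers than to real browsers.
BOT_SIGNATURES = ("Twitterbot", "facebookexternalhit", "Slackbot")

def serve_page(user_agent: str) -> tuple[int, str]:
    """Return (status, body) the way a cloaking server might."""
    if any(sig in user_agent for sig in BOT_SIGNATURES):
        # The preview crawler sees an error (or a harmless page)...
        return 500, "Internal Server Error"
    # ...while a real browser gets the actual (possibly malicious) content.
    return 200, "<html>the real landing page</html>"

print(serve_page("Twitterbot/1.0"))
print(serve_page("Mozilla/5.0 (X11; Linux x86_64)"))
```

This is why "just let the link through if the preview fails" is risky: a failed preview fetch can itself be the fraudster's intended behavior.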
The idea that you believe that a system I am saying shouldn't be built will somehow have a bug in it despite not existing is ridiculous. I will once again repeat, therefore, with more context: I am not claiming one should or can build systems without mistakes... I am claiming that, because we make mistakes--and because we all know that we all (including myself) make mistakes--we should not build certain kinds of systems, and that building systems that put us in a position where making mistakes cause these kinds of effects "is the malice".
Regardless, if it IS a mistake, we can assume Twitter will offer a formal apology indicating that they made a mistake that accepts fault for the impact it had on others, explain how they think that mistake occurred in enough detail that we can agree it was a mistake and not an excuse, take steps to prevent a similar mistake from happening in the future, and further attempt to "make it right" somehow to the Mastodon community (particularly so, as this was a competitor) with some extra benefit or concession... right? I will note that going around making mistakes and neither apologizing for them nor taking responsibility for them--instead just flippantly hiding behind "we all make mistakes" while continuing to put yourself in a position where your mistakes matter to people--is itself a malicious act.
Every feature has a complexity burden it imposes on the overall system, and also a risk profile to causing failures as the overall environment changes.
Some features are definitely not worth the complexity they impose. Sometimes a feature is so important you need to add it despite the systemic risk that it obviously carries.
Most of the time the designer or implementer doesn’t stop to even consider this analysis and just fires off some code to add the feature and lets it become “technical debt” because it’s SEP at that point.
The best engineers have an immediate intuition about these types of features and a visceral reaction when asked to code them. Something along the lines of, “I have a bad feeling about this” to “No way I’m coding this, it’s a metaphorical bridge just waiting to collapse.”
If malice requires intent I think most of the time there isn’t actual malice. If malice can stem from ignorance then I fully agree with you.
Nowadays, maybe let's assume malice. Maybe it's a false positive and a bunch of people stop using Twitter, Facebook, Google, Apple, or Microsoft as a result. That is less harmful (some might say it's good) than the opposite, where big corps use our collective desire to forgive as a tool to exploit us.
You can apply that saying to your wife and kids. Or your neighbour. Big corps that make money off exploiting people's negative emotions, like Twitter? No. I assume malice.
Hanlon's Razor is, I believe, a good rule of thumb, but people keep using it in situations such as this one where it's not just potentially malice (wanting to harm someone for little other reason than wanting to see them hurt or wanting to hurt) but also greed/self-interest. Hanlon's Razor as stated makes sense to me. I see a lot more stupidity in the world than I see people wanting to hurt for no other reason. But greed? I see a lot of that.
Yeah, I think Hanlon's Razor was intended to apply to individuals, not organizations. Often the "stupidity" of an organization is a shield for malice or greed.
Organizations are made up of people. When you have 5000 people making small decisions, each with even a 0.01% chance of being wrong, you have about a 40% chance of at least one mistake over 5000 decisions.
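The arithmetic behind this kind of estimate can be sketched as follows; the error rates are illustrative, and the decisions are assumed independent:

```python
def p_at_least_one_mistake(p_error: float, n_decisions: int) -> float:
    """Probability of at least one error across n independent decisions,
    each going wrong with probability p_error."""
    return 1 - (1 - p_error) ** n_decisions

# A 0.01% per-decision error rate over 5000 decisions already gives ~39%:
print(round(p_at_least_one_mistake(0.0001, 5000), 3))  # 0.393
# At 0.1% per decision, a mistake is a near certainty:
print(round(p_at_least_one_mistake(0.001, 5000), 3))   # 0.993
```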
People are not independent coin tosses. They coordinate, manage, overrule, scheme, discuss, and process information in strategic ways that render these kinds of extremely simplified models impossible.
Why do you think a bug is a "simpler" explanation for this behaviour? Maybe if all links were broken then sure, but if only specific sites that operate against the interests of Twitter stop working doesn't that point to something else going on?
An algorithm may have incorrectly marked a site as malicious, or crawling might be failing because of a malformed HTTP header, etc. Who knows...
When there are feedback loops involved (in the context of an integrity system, for example), it doesn't outright block everything. At a past job our systems wrongfully blocked some publishers' traffic while not blocking others when they should have (I was in display ads).
I would qualify this by saying the priors matter. Twitter has torched their credibility in a flaming dumpster fire. That changes how one might make assumptions.
It also just looks suspect. Mastodon is a competitor to Twitter, right? It's obviously good for Twitter that no one can post links to the Mastodon sign-up page, right? Do I still have to assume it's just a bug?
I sure wish the bugs in my code ended up benefiting my company :)
And not long ago Facebook was a very small threat to MySpace. And Reddit was a very small threat to Digg, and so on. Network effects can cause competition to rapidly snowball.
It just doesn't hurt to assume greed exists and big corporations are behaving greedily. Otherwise you're either in denial, or walking around perpetually shocked.
So do you really think a team at Twitter sat down and thought, "let's block Mastodon for a day to avoid some customer bleed, then we can do damage control, but it will be worth it in the end"?
Pleasantly surprised that you got 8 replies saying that things are often working against us. It’s true at least some of the time. And depending on your social status, that “some of the time” can become all of the time.
Competitors have no obligation to play nicely. Is it really a surprise they don’t?
Or do the correct thing and be somewhat skeptical but also logical: think of things in context, weigh potential causes for an issue, and pick the likely one.
Speaking from experience:
1. The size of Mastodon is such a small thing that I doubt Twitter even worries about it, among the other things they have to worry about, like monetization.
2. My guess is an integrity system incorrectly marked a URL, or the crawler/preview service is failing to crawl the page, or something along those lines. It could be something else, but my overarching claim is that it was not someone going into an admin UI and typing "mastodon" to block them explicitly to cut the competition out. It may still be a lowly paid employee doing something incorrectly, but it's not about cutting the competition out, and it's very, very likely an honest mistake.
If you go over to the Wikipedia article for "genocides by death toll", we currently have two ongoing: one in Sudan, one in Myanmar. And possibly Xinjiang, although that doesn't meet Wikipedia's standards. There are really few limits to how nasty a group of humans can get after they identify an in-group and an out-group.
That is genocides, in 2022. The idea that a social media company is out to crush competitors and make money is really not that far out there, and certainly not an idea that can be reasonably mocked. These companies are going the way of the more corporate news organizations: they've got influence, and they want to be political actors. They are not trustworthy.
Not everything needs to be an absolute rule. This totally looks like a small bug on Twitter's part, and if it takes weeks to fix then you can decide "okay, that's a little more suspicious". What's the point of making this particular moment an immediate, gigantic moral issue?
I prefer the other way around. Twitter is probably acting badly. If they fix it immediately and apologize, then we can assume it's not an issue. What value do you get by giving them the benefit of the doubt? You just give them a way to gently manipulate you if you assume every little bad thing is an accident.
The alternative to "Twitter is acting bad" doesn't have to be "Twitter is completely innocent", it can be "Huh that sucks, wonder what caused that". You don't have to make an immediate judgement call on the morality of every tiny bug.
The value I get from that is not spending energy judging everything immediately as soon as I see it, which, in a world with social media, is a significant savings.
It seems like many people today aren't satisfied with saying "I don't know" when it's the most reasonable answer. Every little action apparently has to be blamed on something (usually based on very little or no evidence).
It doesn't even matter why, as long as Twitter is aware of it and it gets fixed soon.
It's not a bug to disallow "distribution of hacked material" in your Terms of Service.
If you think it was political, you should certainly inform the FEC! They found that Twitter had acted "for commercial reasons and not for the purpose of influencing an election" so they'd appreciate the heads up.
Which would've been more believable if they hadn't done literally nothing about the thousands of tweets sharing the hacked Trucker rally database. And it wasn't just random accounts: a lot of major American and Canadian newspapers shared articles that not only mentioned the hacks and hinted at where to find them, but also explicitly detailed the contents of the database.
Yes that's the go-to copypasta reply. ("Literally" here meaning "not literally.")
Twitter changed their policy in response to criticism of the Biden tweets to "no longer remove hacked material unless it’s directly shared by hackers or those working with them" including the rally database, and they restored previously removed tweets about Biden.
That's very convenient! The problem is that they didn't enforce that policy before the laptop leaks either. Not in any significant way and no media outlet ever got their account banned for reporting on a hack before (or after).
The ~7000 QAnon accounts they banned don't fit your narrative I'm afraid.
There are now several intermediate steps they've added since then, like clear warnings [1]. This allows the account to stay active and still report on stolen data. Much better IMO; banning accounts is common practice, but too heavy-handed to be the only tool Twitter has.
Ah yes, because when a federal agency doesn't punish a multi-billion-dollar corporation, we all know it's because they definitely didn't do anything wrong. They're so trustworthy!
Please refrain from replies without content. Why did the FEC fine the DNC and Clinton for violating rules related to the Steele dossier? My understanding is that the DNC also has a few dollars. Did they forget their bias?
My point isn't about the FEC being biased, I just reject appeals to the authority of US federal agencies, because they aren't trustworthy. They don't (meaningfully) punish rich people, and they are usually staffed by various former/future C-level people from the organizations they're supposed to be regulating.
No. This is different, and smells like either a crawler that can't crawl a page for integrity purposes, or one that can crawl it but the URL somehow got blacklisted.
I assume the Hunter Biden story was more a case of the "disinformation" moderators wrongfully blocking, like them shutting down the Wuhan lab leak theory, but this smells very different.
Affected are https://mastodon.social and https://mastodon.social/about. Linking to actual user content works, so this is unlikely to be a problem of user-generated content violating the TOS; rather, Twitter is proactively avoiding supporting the platform.
This is the kind of thing that's interesting to document for a potential anti-trust case. Blocking the entire domain because it's "spammy" could be argued as fair - but only blocking the main links is more suspect.
At least traditionally, you don't really have to clear a particularly impressive bar to be found in violation of antitrust law, mostly just behaving in an anti-competitive manner.
See this is where it's confusing to me. What's the difference between anti-competitive and regular business behavior?
When I go to X department store, they'll happily look up how items are priced on their website for price matching, but they won't look at the website for Y department store to whom they would also price match. Is that anti-competitive?
That's a long way around to asking: to what level are businesses obligated to help send you to a competitor?
I think most people would agree that there's no way to compete with the social media giants. Regardless of whether it has better features, thanks to the network effect Mastodon will always have a small fraction of users compared to Twitter.
US antitrust's main consideration is price; the EU's is price and "fairness" to competition. So maybe the EU could get involved, but the US probably never will.
> We can't complete this request because this link has been identified by Twitter or our partners as being potentially harmful. Visit our Help Center to learn more.
Who are these partners, that apparently have the ability to block links on twitter?
Edit: attempting to save the tweet as a draft results in a different error: "The content of your Tweet is invalid."
They probably partner with third-party content-filtering services for things like abuse blocking, child porn, copyright infringement, spam, you name it. I am wildly guessing that they outsource some of that.
This is likely the same reason that links sometimes can't be pasted on FB. There's a spike in usage and some anti-spam mechanism blocks it. Which makes sense, but it sometimes catches the wrong URL, just like every spam filter.
Yes, Facebook does this a lot. However, I have noticed cases that are very possibly not ignorance. For example, if I post a link to a domain I own (a reputable one) whose subdomain contains emojis, it cannot process it at all and lags out trying to redirect to the page. However, I have never had a single problem clicking on an (emoji).y.at link, y.at being a company that spends money almost entirely on Facebook and Instagram ads.
I don't like to be conspiratorial, but it did get my gears turning. And I want to market with my emoji URLs!
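For what it's worth, emoji hostnames are carried on the wire in Punycode (the "xn--" ACE form), so a URL pipeline that never normalizes raw emoji labels to that form can easily choke on them. A sketch using Python's built-in punycode codec; full IDNA processing adds more rules, so treat this as an illustration of the encoding step only:

```python
import re

# Emoji hostname labels travel on the wire in Punycode ("xn--..." form).
label = "😀"
ace = "xn--" + label.encode("punycode").decode("ascii")
print(ace)  # xn--e28h

# A naive hostname check that only accepts ASCII letters, digits and
# hyphens rejects the raw emoji label but accepts its Punycode form:
HOSTNAME_LABEL = re.compile(r"^[A-Za-z0-9-]{1,63}$")
print(bool(HOSTNAME_LABEL.match(label)))  # False
print(bool(HOSTNAME_LABEL.match(ace)))    # True
```

A link shortener or preview service that converts to the ACE form before validating would handle both spellings; one that validates the raw string first would reject (or mangle) the emoji version.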
Twitter is one of the platforms where you can still customize the experience pretty well. With custom lists and a third party client you can basically use it like an RSS reader if you want to.
With their API you can do pretty much anything quite easily, especially now with the new v2 API that exposes a lot of features that weren't available through an API until just a few months ago, for example Twitter Bookmarks.
I had a good experience using that a few months ago to build a service that sends a weekly email of your Twitter bookmarks to your inbox, because I always forgot about them.
To each their own. Personally I feel like I'm getting value out of following what certain people are talking about on Twitter. The time when everyone had a personal blog you could follow is unfortunately over.
A lot of people here are jumping to conclusions based on what could simply be a bug. Why not wait 24hrs and see if this is actually some policy or just a mistake.
This was, and still is, a common refrain as Google slowly but surely ensured that Firefox users would run into blocking issues and mysterious performance bugs, every week, that disappear with merely a user-agent change. Mighty convenient, these bugs, a supermajority of the time.
At some point, when your scale is big enough and you are the dominant incumbent, whether something is due to reckless accidents, intent, or "blind"/ignorant malice is a moot question. The responsibility is (and so too should the punishments be) the same. That must be treated as the necessary cost of operating at this scale.
It's a convenient excuse that IMO can actually be the truth, even if I am aware of the Hunter laptop and other censorship (sorry, disinformation prevention) cases the company had.
I can see how some shady content had a similar URL/behavior that accidentally got this flagged (though I'm not a mastodon expert, so I might be wrong).
I'd wait a day to see if the problem gets fixed. Even if they fix it, we will never know if it was a real bug or they hoped there wouldn't be backlash for this "feature".
Yes. I tend to assume the people who are quick to think conspiracy and/or malice in these kinds of situations are either students or nontechnical users. If you've been working as a developer in the industry for a while, you've seen bugs like this many times.
The logic of social media pileons is too pervasive at this point. As you can see upthread, a lot of people genuinely believe that they’re morally obligated to join these kind of pileons precisely because there’s not much information available; if we don’t apply pressure, there’ll never be a full investigation and they will get away with it! (And I very much include myself in "people" here, I've been there before many times.)
I think you just have to accept these days that when you go on an online platform you’ll see a few outrage mobs gathered around some tiny scrap of information. It’s like walking into a nightclub and asking people why they won’t drink more responsibly.
Even if it isn't a bug, there's probably enough malicious stuff going on linked to these domains that it's justified. Even if the sites themselves are not producing malicious content, they are targeted by bad actors because the platform lacks the tech to detect and moderate said bad actors. And let's be honest, users of this platform are likely to be very susceptible to misinformation and manipulation.
(The suit was filed against Adobe, Apple Inc., Google, Intel, Intuit, Pixar, Lucasfilm and eBay, but it turned out that "dozens" of other tech companies got involved as well (Pando Daily had good coverage https://web.archive.org/web/20200304045453/https://pando.com... ))
I figure these guys will do whatever they think they can get away with.
You are mixing one issue with another. I am making a very specific claim that this issue is not malice; it's either a system with false positives or a system that has a bug.
FWIW, I agree. You're almost certainly right that this is just a false positive or a bug.
I'm just pointing out that the "evil corps" stereotype has been earned by past events. The folks who are suspicious of corporate behaviour and motives have their reasons.
Once we make peace with the fact that employers are not family, and big corps are in it for profit, some level of skepticism is warranted.
I am mostly thinking that attributing _everything_ to malice or profit is wrong. Even then, nearly everyone here is using one or another big corp product for free, or paying for it, which feels ironic too. Like, "yeah, evil, but I will continue to use it" is a funny concept. Accept things with flaws, call out stupid or malicious behavior, just don't think everything is evil.
Part of the problem is that different folks have different experience with and ideas about the folks running big corps, so we wind up over-generalizing and talking past each other. I did it myself above when I used "these guys" to mean a handful of bad actors, not the whole of all those companies' executive suites.
> Like, "yeah, evil, but I will continue to use it" is a funny concept.
"We can't complete this request because this link has been identified by Twitter or our partners as being potentially harmful. Visit our Help Center to learn more."
Remembering that this is happening on a platform where many celebrities' accounts were totally compromised, I wouldn't be super surprised if they've implemented client-side filtering.
Maybe there was some content on mastodon.social they didn't like. As they don't integrate with ActivityPub they can't block individual users so perhaps they chose to block the whole domain instead.
It's not like they're cutting off the whole Fediverse. There are hundreds (thousands?) of Mastodon instances, and none of the others will be affected by this.
It is generally known in the artist circles I hang out in that Twitter’s algorithm loves to hide links to Patreon. They may also hide tweets that merely mention it, as well as other words crucial to the business of being an artist online like “commission”, which leads to creative misspellings like “Half off on c!miss!ons for my Petreon supporters this week!”.
(Twitter loves to hide off-site links in general. Leaving the site means you’re not scrolling down your timeline and potentially seeing ads any more!)
Enough of my artist friends who pay closer attention to their analytics than I do have run tests and seen much lower engagement on off-site links that I take this as how things work, but I can't give you any particular links, sorry.
The most likely scenario is that over a decade ago, someone at Twitter copied a regex from Stack Overflow for URL validation, and it has been there ever since.
Lol. Intent matters, context matters, and in this case it's not intentional. Also, please show me the law so we can discuss where you are missing the context.
In other words, you want every website to be held liable for all user generated content on their platform, which would benefit every large social media (because smaller ones wouldn't be able to afford to moderate every single post), leading to them becoming even more of a monopoly?
Or do you have something concrete in mind on what to replace it with? Because just repealing it would lead to a much different Internet than it is today — and not in a good way.
>In other words, you want every website to be held liable for all user generated content on their platform, which would benefit every large social media (because smaller ones wouldn't be able to afford to moderate every single post), leading to them becoming even more of a monopoly?
In the EU it works well: the only social networks that exist are American :P
> Or do you have something concrete in mind on what to replace it with?
In the EU that would be the Digital Services Act, chapter two of which offers websites protection similar to Section 230's. It's gonna be voted on soon™.
At the moment the EU has precedent in the opposite direction, with the Estonian news site Delfi having been held liable for anonymous user comments made on its platform: https://en.m.wikipedia.org/wiki/Delfi_AS_v._Estonia
I don’t know what section 230 repeal has to do with it. Companies would still be able to block random links like this if section 230 is repealed; they’d probably do it more often.
Repealing 230 would literally kill large swaths of the internet. If platforms like Twitter or Reddit were legally responsible for what their users post, they would need to institute white lists to avoid being bankrupted. Can you imagine the amount of low-level casual defamation across the entirety of Facebook? Or Reddit? The amount of copyrighted material stolen and posted? There would be an overnight industry of people shadow-posting defamatory statements and then turning around and suing the platforms for hosting it.
Unless the aim was literally to destroy these large left-leaning, people-empowering, highly profitable, internationally reaching businesses.
Now imagine which political party and which geopolitical adversary would like to make the internet less liberal, silence the powerless, make American tax revenue lower, and make more space for authoritarian voices on the world stage.
Facebook, Twitter, etc. being left-leaning in the US sense, where left is right and right is near the edge of the right spectrum, I get.
But people-empowering? These large platforms are very much anti-democratic and anti-empowering. Even more, subverting people's will about what they buy, by managing what they see and algorithmically trying to guide what they should think, is very much anti-people.
Any people-empowering is merely a side effect, not at all the purpose.
I see what you're saying, and there's definitely reason to question which entities have gained the most influence from social media (out of corporations, governments, or activists).
As for "the purpose" of social media, I suppose a simple reductive answer would be "to make money", but there were (and I think still are) people who are idealistic about social media giving a voice to the voiceless, even if you think that, in practice, commercial messages and misinformation end up having a bigger and more negative effect.
A thought experiment I think is useful is to imagine how Western political discourse could have been shaped (by politicians and the traditional media) if grassroots movements couldn't form and raise awareness through things like "hashtag activism".[0] The fact that authoritarian countries invest resources in censoring or banning social media also seems to confirm that they see it as a threat rather than a harmless distraction.
Heck, under the precedent that Section 230 was meant to overturn, Twitter could sue HN for a lot of what people are saying right here. I struggle to even engage with the idea sometimes because you're 100% right that it doesn't make sense.
No, this is arguably "monopolization", which is illegal under the Sherman Act. Even the 9th Circuit, which has adopted an implausibly broad reading of 230, does not think that 230 is a defense against antitrust claims.
The word monopoly unfortunately invites all sorts of pedantry about whether a company is a monopoly (and it goes on endlessly because people can play a game of quibbling about exactly what 'the market' is). Twitter is obviously a powerful player in the social media market, whether or not it is a monopoly. You don't need to be a monopoly to engage in anti-competitive behavior.
Yes, it is often pedantry and not offered in good faith, but in fairness, the legal definition of "monopolization" requires that you either have a monopoly (defined, loosely, as a high-enough market share), or, if accused of "attempted monopolization", that you have a dangerous chance of achieving a monopoly. As you note, that requires defining the "relevant market."
I guess I just find it annoying that people want to quibble about "relevant markets" and that kind of stuff, when they could clear up the more likely confusion, which is basically just vocabulary, instead.
What "monopolization"? Mastodon is not competing against either Facebook or Twitter. It is so remote a competitor that, at its current rate after six years, it has failed to take even 10% of users from either platform. The same is true of the likes of Gab or 4chan, which are also not 'competitors' or alternatives to Facebook or Twitter at this point.
It is really not early days for either of these 'alternatives'.
> After all, it’s their right to as a private company.
Why do people always use the term "private company"? Twitter is a publicly owned corporation. If the Musk deal goes through, it'll be private. But I'm not sure what work the word "private" is supposed to be doing.
"Private company", in this context, means "not the government". The opposite would be "public sector", referring to the government or companies it owns/runs.
Different rules, laws, regulations, etc. apply to private sector vs public sector.
Specifically in this case, OP is implying that a theoretical public sector social network wouldn't be allowed to block linking to mastodon because of the second (EDIT: no, wait, first amendment!) amendment, but that Twitter is well within its rights since they are in the private sector. Whether that's true or not, I dunno.
I know I mistakenly cited the second amendment as the reasoning in my previous post when you replied, but I meant the first amendment.
Like I said, I don't know how far the first amendment would or wouldn't apply in this situation. In any case, it would likely have to be tested in court.
The private nature of the company is the specific thing that matters here. It is right to emphasize that through use of the adjective. One might as well ask "why use the word 'company' at all?", and the answer is that it's a useful noun upon which to hang the critical adjective.
well, in the US, the First Amendment would start to come into play in interesting ways, depending on whether the content itself is considered an expression of the government itself or just general content.
The government controls some things. Companies control some other things. Gov things are 'public', company things are 'private'. I don't know what you are trying to argue. This is how the west works 101 stuff.
Helpfully, the US Government, specifically the SEC, defines the term “public company” for us:
“There are two commonly understood ways in which a company is considered public: first, the company’s securities trade on public markets; and second, the company discloses certain business and financial information regularly to the public.
In general, we use the term to refer to a company that has public reporting obligations.”
> The government controls some things. Companies control some other things.
The government actually controls companies. In fact they wouldn't exist as legal entities without the government.
I feel there is a common misconception that "private companies can do whatever they want", belied by the existence of government regulation, but nonetheless a misconception regularly employed to dismiss antitrust concerns, for example.
I don't think anyone is unaware of the concept of law. There are significant 'firewalls' between the public and private sectors; otherwise we would just have fascism/communism or anarcho-capitalism.
I think there's a kind of ideology behind the phrase that personifies companies, giving them "rights" that become even more important than individual human rights. And there's the weird idea that companies can do whatever they want, despite the mountains of government regulation of corporate behavior.
Almost every use of "private company" is in contrast to a publicly traded company. Except when people start talking about "rights", and then they're apparently using it in a different way, which I find very odd and ideological.
It's too late to edit the above comment, but let me clarify:
When I ask, Why do people always use the term "private company", this is somewhat of a rhetorical question. I'm not asking for an explanation of what "private company" means, as if I were an ignorant simpleton. That's the uncharitable reading of my comment. Rather, I'm asking for the justification of the use of "private company" as a rhetorical term in the defense of the actions of corporations. I see this rhetorical usage constantly.
Everyone knows that Twitter is a company. That Apple is a company. That Google is a company, etc. And everyone knows that these companies are not run by the government. None of this ever needs to be explained in an argument. So the question is, what rhetorical force do people think the term "private company" gives them in defense of corporate actions? Or maybe the question is, what's the psychology behind the use of this form of rhetoric?
There are various government regulations that do or could be applied to these companies. Antitrust is very relevant for the tech giants. Communications Decency Act. Whether these regulations actually will be applied to these companies is another question, but it's a legitimate debate that can't be simply dismissed by "private company, anything goes".
It's a publicly traded company. Not a publicly owned company. The owners of twitter are primarily private citizens/corporations, not public/government.
> But I'm not sure what work the word "private" is supposed to be doing.
I think that part is tongue in cheek. Because that's usually the argument repeated ad nauseam whenever anyone criticizes social media speech moderation policies.
But as other comments have already mentioned, you are confusing two different meanings of private/public.
Of course you can link to anything; I was talking about extracting useful information in a program for a richer link preview. A tweet is just a short text or a picture; it would be trivial for Twitter to put og:title, og:description, and og:image in their HTML like everyone else does.
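To illustrate what "extracting useful information" means here: a link-preview generator typically scans the page's `<meta property="og:...">` tags (the Open Graph protocol). Below is a minimal sketch using only Python's standard library; the sample HTML, the class name, and the example values are illustrative, not Twitter's actual markup.

```python
from html.parser import HTMLParser

class OGTagParser(HTMLParser):
    """Collects Open Graph <meta property="og:..."> tags from an HTML page."""

    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        # Keep only Open Graph properties that carry a content value.
        if prop.startswith("og:") and "content" in attrs:
            self.og[prop] = attrs["content"]

# Hypothetical page markup, standing in for what a tweet page could expose.
html = """
<html><head>
  <title>Example post</title>
  <meta property="og:title" content="Alice on Example" />
  <meta property="og:description" content="Just setting up my account" />
  <meta property="og:image" content="https://example.com/avatar.png" />
</head><body></body></html>
"""

parser = OGTagParser()
parser.feed(html)
print(parser.og["og:title"])        # -> Alice on Example
print(parser.og["og:description"])  # -> Just setting up my account
```

With tags like these in place, any client can build a preview card from a single fetch of the page, which is the commenter's point about how little it would take.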
Ok? They are a private company and they can block links that they don't like. Facebook has done the exact same thing and nothing happened.
You are free to complain, criticise them, scream at these companies, or protest, but as I have said many times before, they will not change, and it is futile to believe they ever will as long as they can 'moderate' or do whatever they want on their own platforms.
They're a private company and it's a shitty thing to do. No one is saying regulation should necessarily step in, but we don't have to like it, and we shouldn't refrain from complaining and talking about it just because of that.
I've been hearing this a lot lately, surprisingly from people who consider themselves to be politically left. I say surprisingly because historically, leftist ideology has been quick to address problems with corporate behavior through regulation, whereas traditionally it's been the right pushing laissez-faire.
In any case, guys - private company doesn't mean your behaviors and policies aren't constrained. There are innumerable laws that regulate corporate behavior and these can be expanded at any time. Corporate behavior can be constrained without limitation if the public so chooses.
"I've been hearing this a lot lately, surprisingly from people who consider themselves to be politically left. I say surprisingly because historically, leftist ideology has been quick to solve problems with corporate behavior with regulations, whereas traditionally it's been the right pushing laissez-faire."
On this web site when I hear people talking about controlling what social media platforms do via regulation, most of the time it seems to me like passing a law where the government says the following journalists must have articles each week on the New York Times front page. (And none of them are actually going to be journalists so much as "journalists")
Other people see it more like passing a law saying the telephone company can't block your telephone access.
I guess my point is framing every discussion of social media as "regulation good or bad" isn't very useful.
Fortunately there are a couple of bright lines that can be drawn between those two hypothetical laws.
Firstly, there is only one front page of each edition of the NYT, and the page has a very limited data/readability capacity. On Twitter, every user's page is a front page if you want it to be (although admittedly the recommendation of people/tweets by "the algorithm" starts to feel like an editorial decision, which invites further scrutiny).
More significantly, though, once an organization has built a social or telephone network, it is actually more expensive to disconnect or delete messages based on the opinions contained within them than it is to just allow them to continue moving between users. This is in contrast to the (physical) NYT, where editors necessarily have to whitelist every piece of content, and forcing articles past that whitelist has significant cost implications (even if no one reads them).
Definitely. The act of incorporation has one goal: to abstract away risk and liability. The humans controlling the company will not be held personally liable for its actions or debts. But this comes with the trade-off of having to accept regulations.
If you want to have full control over your property, then do not incorporate and give the property to the corporate person. I can block links to Mastodon on my truly private (i.e., owned by a human person) website without consequence. "Private" corporate persons carry more regulatory burdens for obvious reasons.
People being purely hypocritical when it benefits them is an all too common theme these days. As far as politics go, the only goal is power. Any steps towards gaining more power are right and just, regardless of how hypocritical they may be.
When people talk about "private companies" they're referring to entities that are not government-owned/controlled. Being listed on the stock market does not change that.
Privately-held companies, to be specific. But context is important, so if we're talking about finance then public/private company is perfectly acceptable.
It seems like you're working overtime in these comments to deny that the context in which we use the phrase "private company" matters. This is a pretty common usage, analogous to "private good" vs. "public good"; surely this discussion on HN is not genuinely the first time you've ever been exposed to it?
> surely this discussion on HN is not genuinely the first time you've ever been exposed to it?
Of course it's not the first time. That's precisely why I'm making a big deal out of it, because I've seen it over and over. I'm suggesting that there's a kind of unwarranted ideology behind the usage that doesn't accurately reflect the legal reality, and gives the impression that companies have unlimited rights when in fact they're strictly regulated in countless ways. https://news.ycombinator.com/item?id=31609113
We could talk about Section 230, for example, which is obviously important with regard to Twitter and whether it can operate.
It feels to me like people are "working overtime" to handwave about "government owned companies", of which nobody has given any examples. Nor have they explained why a government owned company wouldn't have the right to block any links that the government wanted to block. If anything, a government owned company would probably be treated as exempt from some rules that apply to "private" companies.
Aside from all of the additional accounting/reporting/etc. burdens placed on publicly-traded companies, there are no differences in legal requirements between publicly-traded and privately-held companies.
> Nor have they explained why a government owned company wouldn't have the right to block any links that the government wanted to block.
It's the First Amendment. The USPS can't refuse to deliver a postcard because they don't like what it says, for example. If the government ran a user-generated content platform (which they shouldn't, and probably never will) they would be bound by the same requirements.
> Aside from all of the additional accounting/reporting/etc. burdens placed on publicly-traded companies, there are no differences in legal requirements between publicly-traded and privately-held companies.
Not sure why you mention this when everyone in the comments here insists that publicly-traded and privately-held companies are both included under "private company" in this discussion.
> The USPS can't refuse to deliver a postcard because they don't like what it says, for example.
Obviously the receiver rejecting something is entirely different from the USPS rejecting something. How do you not see that, and why are you doing this?
> Obviously the receiver rejecting something is entirely different from the USPS rejecting something.
How is the difference relevant? Also, the USPS can refuse to deliver child porn. Indeed, Twitter itself must remove child porn.
> why are you doing this?
I think that "theoretical government run social network" is a total red herring. Everyone in the comments is trying to tell me that "private company" is somehow a useful phrase to distinguish from "government run company", and I think the phrase and the distinction is doing zero work. I'm just following along with the game and trying to explain that the government/non-government distinction doesn't even help the argument.
The fact that Twitter is not run by the government is irrelevant, and saying that it's a "private company" doesn't magically exempt it from government regulation that may affect what it can and can't do.
It's kind of funny, when a big corporation does something terrible, half of the time the defense is "a private company can do whatever it wants", and the other half of the time the defense is "the lawyers made them do it", which is the exact opposite and contradictory idea.
Never assume malice when it can be sufficiently explained by a bug or incompetence.