Interestingly enough, Justice Thomas, who wrote the majority opinion in the 2005 case that left it to the FCC's discretion to determine whether ISPs should be "utilities," now regrets his opinion and has suggested that the court review the implications of that ruling.
With social networks, we have a choice. I have lots of friends and family who choose not to use certain social networks, and the one he's referring to is Twitter, which isn't even in the top 10 of social networks in the US. Most of the US doesn't have a choice in broadband providers.
Public utilities are generally governed by public utilities commissions. These commissions exert enormous control over the running of the business: they can affect pricing, delivery, standards, and more.
Denying or allowing service is but a tiny part of the equation.
Citizens United (a 5-4 ruling along party lines, with Justice Thomas in the majority) established that corporations have the right to free speech, which explicitly includes speech that can influence elections. I wonder if his thoughts on that have changed, but that seems unlikely.
The funny (and sad) thing is, this whole "large corporations can do anything they want" ethos in place today is a very recent shift in American society, steadily pushed by "Reagan conservatism" in the last few decades. The corporations then started shifting left (because that's where the money is), and the same conservatives are now calling for regulation.
YouTube and other social media platforms are engaged in an exploitative bait and switch, which happens to collide with the speech issues. However, the core issue is one where speech is just the game ball.
Social media platforms tell creators and their fans that they will act as "platforms" for the creators' free expression -- provided it's in line with the "Terms of Service." However, the "Terms of Service" is the bugaboo. Social media platforms are effectively manipulating the speech of their creators to push their own ideological message, using censorship, the threat of demonetization, the threat of de-platforming, and the manipulative denial of viral spread -- in uncountable cases, with no reasonable connection to their terms of service. It would be one thing if this were always actually to "preserve the value of their platform for advertisers" or some such, but the record bears out in innumerable instances that many of their actions are the grinding of an ideological axe by lower-level company functionaries.
This is a bait and switch. With one hand, the platforms are saying, "We will serve as your conduit," while with the other hand, they are saying, "You will send the ideological message that we want you to send, or else!" Many will say that this is their right as a private company. But no private company has the right to say it will do one thing and then dishonestly do the opposite through murky deception and threats.
The record of illogical, nonsensical, and weird contortions "platforms" go through to hide ideological manipulation is legion. If a full accounting were entered into the record in a court of law, and if discovery to uncover collusion in such activities were carried out, I predict that it wouldn't go well for Big Tech. They know this, are afraid of this, and this explains their behavior to a T.
To this I say: prove it. You've put forth a strong argument (we'll ignore the relevance for a minute), full of fanciful accusations, but it lacks detail or corroborating evidence. If you want to say these platforms are sexist, I can point you to the You Look Like a Man page on Instagram, where they've cataloged it. But you seem to be claiming an agenda. Cite your credible sources.
A catalog would be quite a tall order, because the numbers are huge. A reduction of scope would be pragmatic. You could start by searching out all of the medical experts who were censored for talking about hydroxychloroquine -- not necessarily for advocacy, but just for reporting on it. There are even instances of medical academic papers that were censored off of Twitter merely for linking to the paper's PDF.
You could also do a survey of people talking about censorship and deplatforming just on YouTube, particularly when complaining about YouTube's capriciousness with regards to rules, policies, and terms of service.
If you want to make a case that it's a persistent and pervasive problem, you'd do best by not starting with an exceptional case. Not only is it too recent to really make a case for it being common, but the straightforward reply will always be that it was an emergency situation during a pandemic and therefore subject to factors other than the claimed political bias.
Making a case for a pervasive bias is going to require a lot of data, because any kind of intuitive "But they're against me!" claim is going to be subject to a lot of well-known cognitive biases. All numbers involving social media are large, so simply claiming that "the numbers are huge" doesn't make for a very compelling case, either.
So if you want to get people interested in your argument, you should start gathering those numbers so you can make a more informative case than the one that leaps to the forefront of your mind.
Censorship, by a company? On their own platform? How is that possible? As far as I know, only government cannot abridge your right to speech. A platform, like a store, has the right to remove those that use their platform to promote speech counter to the owner of the platform.
Time to burn some karma to debunk some popular incorrect understandings.
Censorship, by a company? On their own platform? How is that possible?
It happens all the time.
As far as I know, only government cannot abridge your right to speech.
False. Marsh v. Alabama. A company owning a company town, owning the roads and sidewalks, can't use the power of their property rights to abridge your right of speech. Read the ruling and do the math: A private company absolutely can abridge someone's free speech!
It's mainly the case that US laws cover the circumstance of the government abridging your speech. However, it's common sense that powerful entities, like mega-corporations, can also squelch a person's speech. Sometimes this is also illegal, though not always. This is clear in US case law.
You've been lied to about this, basically, by people with an ideological axe to grind. (Probably unintentionally through wishful thinking.)
A platform, like a store, has the right to remove those that use their platform to promote speech counter to the owner of the platform.
If the platform is doing that straight-up, explicitly, then sure, they can do that. They can put political affiliation right in their Terms of Service. However, if they are accomplishing this as a bait and switch, saying they're doing one thing, then doing another, then that's clearly not right.
If people are engaged in interstate commerce while colluding between companies to do this, then maybe this should fall under RICO. I hope so, anyhow.
So then these media platforms are in collusion AND own the internet? I'm not one to throw tinfoil hats around, because I have plenty of my own, but your security systems must be awesome.
So, in the 1980s... any Joe could walk in off the street and demand air time on the NBC Nightly News?
Because, the platforms given to people today expand the voice/reach of everyday citizens unlike anything in history, but with great power comes great responsibility.
As an everyday citizen, I can tweet something that could become a "meme" or reach a lot of people; it's rarer for a non-blue-checkmark account, but it's possible. I've had Reddit posts with > 2k upvotes...
Show me how any of this platform/reach was accessible in the 1980s. This is progress, and it's also murky grey water. People should absolutely NOT be allowed to share stuff that is just propaganda, without facts or statistics.
I saw a funny meme the other day that said, basically: stop trying to use stats and facts to argue with "liberals"; those don't work anymore, they're all doctored by the left... or maybe, just maybe, there aren't facts and stats that support a world-view so out of touch with reality that the numbers can't even be slightly fubared to appear reasonable.
Everyone thinks tech platforms should be regulated, but everyone has wildly different (and contradictory) ideas on how that should actually be done. If no one can agree on what the problem is, there isn't ever going to be a solution.
I posit that the problem is that an overwhelming volume of our national speech (and indeed international speech) is completely at the mercy of a tiny handful of corporations. Further, these corporations have demonstrated an ability to steer public opinion and credibly even unilaterally influence elections. This is simply too much power and these companies' rights to regulate their own platforms must yield to the greater need to secure our democracy.
Effectively, I think it's an antitrust problem in that the power of these corporations is so great that it threatens our national sovereignty, as the large trusts of the 20s did (although I don't think the large social media companies have flexed that power as egregiously).
>>I posit that the problem is that an overwhelming volume of our national speech (and indeed international speech) is completely at the mercy of a tiny handful of corporations. Further, these corporations have demonstrated an ability to steer public opinion and credibly even unilaterally influence elections.
How is any of this fundamentally different from how television operates? Anyone with a camera and a broadcast license can do the same things (and already has). On TV, if you don't like what you're seeing, the solution has been to change the channel. On the Internet, if you don't like a website, find another one.
>>This is simply too much power and these companies' rights to regulate their own platforms must yield to the greater need to secure our democracy. Effectively, I think it's an antitrust problem in that the power of these corporations is so great that it threatens our national sovereignty, as the large trusts of the 20s did (although I don't think the large social media companies have flexed that power as egregiously).
Anyone can start a forum or website for near-zero in seed money. In the 1920s, the same couldn't be said for an oil company. Facebook's privacy violations threaten personal liberty, but I fail to see how you reconcile that issue with antitrust law. Making them utilities or breaking them up doesn't solve the underlying issues of what you're trying to accomplish. In other words, it's the wrong tool for the job.
> How is any of this fundamentally different from how television operates? Anyone with a camera and a broadcast license can do the same things (and already has). On TV, if you don't like what you're seeing, the solution has been to change the channel. On the Internet, if you don't like a website, find another one.
My grievance wasn't "I don't like what I'm seeing" for any value of "I", it was that social media companies have too much influence by virtue of the volume of speech subject to their control. All television companies combined don't control as much speech as Twitter alone (1 broadcaster x millions of viewers versus millions of broadcasters x millions of viewers).
> Anyone can start a forum or website for near-zero in seed money. In the 1920s, the same couldn't be said for an oil company.
Obviously this is a falsehood. You could easily start an oil company in the 1920s, but it wasn't going to compete with the giants of the time, just like a website that I might start tomorrow isn't going to compete with Facebook or Twitter. But I'm not very interested in whether the cabal of social media giants amassed their influence legitimately versus the anticompetitive practices of the 19th century trusts; I'm more interested in the fact that the problem is the same: too much concentrated influence threatens democracy. If you oppose the Citizens United ruling, you almost certainly agree with me, at least in principle (you might disagree about whether social media giants have amassed so much influence as to threaten democracy, and that's fine).
> Making them utilities or breaking them up doesn't solve the underlying issues of what you're trying to accomplish
How would you know? Per your first paragraph, you think I'm trying to solve for "I don't like some website". :) I'm solving for "they have too much influence", which is what antitrust legislation is about (you might take some narrower perspective that it's exclusively about preventing certain kinds of business practices, but concentrated influence is the underlying concern).
The Telecommunications Act of 1996 basically allowed media conglomerates and non-media companies to buy up and propagandize media orgs -- for instance, Comcast buying NBC. It took us from > 500 nationwide media orgs to < 6 that control 95% of the media on the web or TV.
That's all on the "free market" and "unregulated capitalism" the right "loves" so much.
Maybe the answer is to repeal that bill, split up all the media, and maybe set a max number of "users" per social media company, so that they need to spin out other companies for the next 10 million users. Each block of 10 million users would have its own group, so that no group has max control... it seems like the equivalent of 500 media orgs with different agendas; at least they'd have some differing opinions.
Now Sinclair Broadcast group basically sends the exact same word-for-word script to thousands of local tv stations, and we just accept it as truth.
Twitter and Facebook are the least of our problems... media is broken in America, period, and that's because of the lack of antitrust enforcement against media conglomerates, as well as the repeal of the fairness doctrine. I'm not sure what a good solution is, but Twitter/FB/etc. are small wheels. If there were no Newsweek, Fox, Breitbart, CNBC, NBC, MSNBC, or CNN to post news stories, there'd be no sharing of news, period, on these platforms -- of course, then there's just no news online...
Ideally it'd be nice if media ads were overhauled to guarantee that media doesn't get blind-sided by special interests... like Humana pulling ads if they're too supportive of medicare for all, it'd be nice also if we ended lobbying in congress and enacted anti-corruption laws at every level of government to ensure money stays out of politics and the the U.S. government self-funded all elections and at an "equal" level. If I ran against Bernie Sanders, we'd get equal $$ and that's that, no donations etc.
At least then, how far someone gets would depend more on merit, organization, and how well they can budget -- things that are admirable in an elected official. Ability to fundraise has no bearing on how you govern, because you don't hold fundraisers for social projects, the military, etc. -- you tax the people or print new cash.
Let’s talk about this “demonstrated ability to steer public opinion and unilaterally influence elections”. I believe you have fallen into an intentional conflation of ideas here: social media as a private platform, with terms of use, that other people promote on, versus the social media company acting with its own political agenda. Those things are not the same.
We’ve seen many disinformation campaigns proliferate on FB, Reddit, 4chan, Twitter, et al., but with the notable exception of 4chan, these campaigns are not promoted by the companies’ executives; rather, they are amplified through algorithms developed to promote time on site. Zuck only cares about money. He doesn’t care if it’s coming from cooking tips, flat-earth memes, or a six-hour exposé video about how satanic pedophile aliens are eating children in the basement of the Alamo. It is the users posting the misinformation who are attempting to manipulate elections, by utilizing a third party’s engagement-promoting algorithms and platform. This distinction is very important, but it gets intentionally muddied. FB isn’t manipulating the election so much as someone trying to manipulate an election is being stymied.
Furthermore, when we actually examine the claims of “censorship”, they end up falling apart pretty quickly. It will be a clear-cut TOS violation, by a person who unfailingly has had a long history of getting passes for TOS violations. Which makes me wonder: why were they getting so many extra chances? (Case in point: Alex Jones and YouTube.)
You also have the same people screaming about how they’re being censored regularly appearing as the most shared pages on FB.[0] When you dig into it even more, there’s nothing there.[1]
So what is going on here? It’s a replay of the 70s and 80s. Newspapers are dead, and so the “liberal media elite” canard (and we know it is a canard, because Bill Kristol and Rich Bond have admitted it[3]) needs a new target. And just like then, if it doesn’t have a particular political bent, it’s “unfair”. And you know what? The whining is working again.[2]
What exactly did Kristol say? The article says he "is on record as saying that the 'liberal media' canard is often used by conservatives as an excuse to cover up for conservative failures", but doesn't include a quote or citation. (Putting "liberal media" in quotes doesn't count as a citation, since he uses the term all the time.)
I have zero doubt that this is a behavior they've used, but it's a little weak to say that he's "on record" without pointing to the record.
I’ll give you three direct quotes and a link to all three.
Bill Kristol: "I admit it. The liberal media were never that powerful, and the whole thing was often used as an excuse by conservatives for conservative failures."
Rich Bond, chair of the Republican Party in 1992, during the 1992 presidential election: "There is some strategy to it [bashing the 'liberal' media]. If you watch any great coach, what they try to do is 'work the refs.' Maybe the ref will cut you a little slack on the next one."
Pat Buchanan during his 1996 presidential campaign: "I've gotten balanced coverage, and broad coverage -- all we could have asked. For heaven sakes, we kid about the 'liberal media,' but every Republican on earth does that."
>I posit that the problem is that an overwhelming volume of our national speech (and indeed international speech) is completely at the mercy of a tiny handful of corporations. Further, these corporations have demonstrated an ability to steer public opinion and credibly even unilaterally influence elections
This is the Democratic party and Mainstream Media line. I vehemently disagree with that. So OP was correct. We can't even agree what the problem is. One half of the population blames Russia, Twitter, and 'misinformation' for their nominee's election loss and now wants blood and openly advocates for censorship of their political opponents.
Ironically, the facts are essentially the exact opposite. It's Thomas who is mad about his candidate's loss and feels it's Facebook's fault for suppressing "conservative speech", and who argues for common-carrier regulation so that it doesn't "happen again". Democrats and the mainstream media have nothing to do with this, other than being things you don't like, invoked to signal which side you're on.
On the other hand, if you think Russia isn't doing their best on social media, and that they're overwhelmingly pushing for Trump... you're willfully ignoring the facts and the conclusions of multiple law-enforcement and intelligence studies. Of course, this is only a temporary thing: Russia pushes for whatever will produce the most chaos, both domestically and in our international alliances and on the world stage. Right now that is Trump and that faction of his party, but they use the Green Party similarly, and hypothetically, if the Republican Party fractures and the Democrats become ascendant on the US political stage, then you will see them push unqualified Green Party wacko candidates to do similar kinds of damage. The healing-crystals lady (Marianne something?) already has some suspicious stuff going on, iirc.
Also, there is of course a point to be made here about the subtle acknowledgement that a large amount of "conservative speech" these days is in fact hate speech in political clothing, targeting immigrants, minorities, LGBT, etc, or violent extremism such as the capitol insurrection, advocating the overthrow of democratic governors (yet another incident last week from the leader of the Michigan GOP), and so on. That is what Facebook is taking down, and that is what Thomas and others choose to identify as "conservative speech".
> This is the Democratic party and Mainstream Media line
It was in 2016, but after 2020-11 it's become a prominent conservative position as well. People disagree with respect to their specific concerns about how the social media giants might use their influence, and that's actively a good thing because it means that our solution space must be nonpartisan--we're looking to reduce the power of social media giants irrespective of the giants' political alignment.
Let’s be honest here, the two aren’t complaining about the same thing. One side is complaining about lax enforcement of TOS violations, while the other is trying to claim that a private actor enforcing rules on their private property is violating their rights. One is pointing out preferential treatment, and the other is complaining that they aren’t getting enough preferential treatment.
I think both sides want more aggressive enforcement of TOS violations, or at least I see a lot of lefties advocating for more aggressive enforcement while righties are advocating for consistent enforcement. In whatever case, I'm not really interested in ToS concerns for the purposes of this thread.
> the other is trying to claim that a private actor enforcing rules on their private property is violating their rights
I sympathize with this argument insofar as "their private property" has become the de facto public square, and it's absurd that one company should have the ability to evict someone from the public square. But I'm not actually that interested in the power dynamics between the company and any given individual, but rather about the implications associated with one or several companies having such control over the public square--they can unilaterally influence public opinion and steer democracy. Too much concentrated influence.
And if you're one of the folks who were complaining that Russia was able to manipulate Twitter's algorithms to decide the 2016 POTUS election, you necessarily agree that Twitter's algorithms have the potential to decide elections and Twitter pretty clearly has direct control over its algorithms, thus you necessarily believe that Twitter has the potential to unilaterally decide POTUS elections.
You "vehemently disagree" with the "Democratic Party line" that "speech is at the mercy of a handful of mega corporations with the ability to steer public opinion"?
That being the case...
I assume you must feel that speech is not threatened by the likes of Facebook and Twitter, and thus you would certainly agree there would be no reason to regulate them. They should be free to exercise their corporate free speech rights to censor content for any reason, or no reason at all. Right?
It is concerning that Google, Facebook, et al can, in a totally opaque manner, change what billions of people will discover on the internet.
I don't know the legal niceties here. Regulating these companies as utilities may be a stretch but perhaps Congress could force them to be more transparent about how topics/pages/etc are ranked and particularly about specific decisions to boost/de-boost certain topics/pages/etc (assuming those decisions are being made).
We're only in this situation because anti-trust has failed, and these giant platforms gobble up all the upstarts & engage in anti-competitive practice.
Trying to regulate a market you've let already grow deeply unhealthy is a bullshit answer to the real problem.
Generally, the platforms have been too permissive about what they've allowed, acting only when absolutely necessary. They have been too generous and allowed too much disinformation & lies.[1] It is not great that they use opaque processes, but that's something competition should address, not regulation.
You can be assured that whatever order & regulation Clarence Thomas wants, it is not to fix the lying & abuse. It is to require it. He is enormously sore his political buddies have gotten their wings clipped a little bit for their outrageous behaviors. That's all this is about, to Clarence Thomas.
> There is a fair argument that some digital platforms are sufficiently akin to common carriers or places of accommodation to be regulated in this manner
Ostensibly Thomas means that these digital platforms may be analogous to "common carriers" that transport goods or people for any person or company and are responsible for any possible loss of the goods during transport; or he means "places of accommodation" as noted in Title II of the Civil Rights Act of 1964, which prohibits discrimination in places of public accommodation.
Neither seem particularly similar to any of the FAANG.
As noted elsewhere in this thread, ISPs seem much closer to common carriers than any of FAANG.
This is a strangely emotive suggestion (not sure why; most people don't care about regulatory frameworks or their utilities). It's probably better to rephrase it without the metaphor. For instance, you could say "I want to pass a law saying Facebook cannot block legal content nor use an algorithm to hide it" (David French refers to this as the "Pornography on Facebook" plan, or something like that).
And utilities regulation in the USA is a great example of huge failure for customers: extremely outdated power lines that keep lighting California on fire, water that's killing people in cities like Flint, blackouts like in a third-world country, etc.
Tech needs changes, but regulating these companies as utilities is the best way to ensure low-quality outcomes for customers.
This seems to further confirm a major shift that is happening in the American Right. They've been moving pretty swiftly to the "left" on economics, with Thomas's statements following the protectionism that we saw during the Trump years.
The American Right of prior decades, by contrast, was often against what the federal government of the time was doing to Microsoft (the Big Tech bogeyman of the time). And does anyone remember how the American Right stood against the Fairness Doctrine?
I would like to see more regulations that apply only to companies over a certain size.
I'm a big proponent of capitalism but competition does seem to break down once companies reach a certain size. Just too easy for the largest players to buy out competitors with life-changing amounts of money, steal customers by offering huge but temporary incentives, or push through regulatory capture.
I think an ideal government would rely more on the private sector but would ensure a level playing field and foster healthy competition between companies.
Take for example Comcast buying up all the local ISPs, Disney buying up every media company, and FB acquiring Instagram to eliminate potential competition. These were all very successful acquisitions and the right business decision.
However I'm not convinced they were in the public interest.
Are they making the customer experience worse? Sure, Comcast sucks, but when I was dealing with a small local ISP, I had a much, much worse experience. Or with the fragmentation of streaming: does having more and more content in one streaming service make my experience worse? Would Instagram be better if they didn't have the power of FB behind them, for things like abuse, spam fighting, or infinitely deep pockets?
It’s a very complex topic; just “big = bad” doesn’t capture it.
I had great experiences with my local small ISP: a faster connection, and no need for expensive installers to come try to upsell me on a modem.
With Instagram and FB, I wonder what FB would have done to compete with Instagram; they would have had to compete on features or privacy, and it would be interesting to see what they came up with. Same if they had had to compete with, rather than just purchase, Oculus.
Wait, I forget: are conservatives pro-regulation or pro-deregulation of private business now? It's hard to track. If we regulate how Twitter operates, do we then also regulate how Hobby Lobby discriminates against their employees' right to choose? Finding consistency on the right is becoming very difficult.
I voted red for many years. I also own guns. I say this to preempt the socialist/leftist replies. I'm just not so stupid that I can't see the hypocrisy.
I hate falling into "slippery slope" arguments, but I'm gonna be "that guy".
If we treat the tech platforms like utilities, then would that force companies like YouTube, Facebook, and Twitter to allow full-on porn on their platform? How about literal neo-Nazi stuff advocating death to millions of people? Short of deleting and reporting something outright illegal (e.g. child porn), would we be forcing these companies to take a fully "hands off" approach to all their content? I just worry that if there's no content moderation, basically every platform devolves into awfulness like Gab, Parler, 8kun, or Bitchute.
I'm not wholly opposed to treating big tech companies like a utility but I feel I gotta ask the annoying questions. I would honestly love to be wrong about this but it does seem like the worst parts of humanity are the most vocal when there's no moderation at all.
> If we treat the tech platforms like utilities, then would that force companies like YouTube, Facebook, and Twitter to allow full-on porn on their platform?
Does USPS allow you to ship anything, legal or not, without question? (No)
Do cable companies require that all content be accessed equally, including adult only content? (No)
Will your electric company kick you off your service if you’re running a datacenter in a giant warehouse next to your residence? (Probably, unless you have a non-residential contract with them)
> Does USPS allow you to ship anything, legal or not, without question? (No)
Sure, let's pull this thread.
Your use of the term "legal or not" is silly, because I specifically put the caveat for the platforms to delete illegal content, so I'll only look at the scope of legal stuff. You are correct that the USPS won't let you ship anything, things like guns and large quantities of cash aren't inherently illegal but aren't allowed to be shipped.
The implication of what you're saying though, is that you want the government to determine what constitutes "free speech" on the internet. If someone uploads a full-length (legally licensed) Brazzers video to YouTube, that might be classified as "porn" and be deleted. Fine. How about if, in the middle of the "action" of the porn, they go on a long detailed breakdown of wealth inequality in America. Is this still "just porn" or is that considered protected political speech that now cannot be deleted?
> Is this still "just porn" or is that considered protected political speech that now cannot be deleted?
I'm pretty sure if you yell "Fire!" followed by a treatise on your political views, your speech is still not considered 'protected'. Similarly, I would imagine that if something has pornographic content in it at all, it would be regulated as porn.
But who gets to decide what "porn" is? If I upload a video talking about my opinions and I'm not wearing a shirt, is that porn? By which community's standard do we say something is "pornographic"? There are certain groups in America that say women showing their ankles is arousing, do we go with their standards?
EDIT, Cont'd:
For that matter, what if the act of sex is the actual message? As in, two people are having sex and they are doing that as a direct message towards some political entity?
Right, that's a bigger problem that we have to deal with in society anyway, so I don't think this introduces anything new. We just take the existing legal definition of pornography and apply it to social media.
I'm arguing from an assumption that social media giants are too powerful--they can steer public opinion and even influence national elections (think "Social Dilemma"). I also think they're just bad for our social fabric--they seem to drive up anxiety and antisocial behavior. This is the context for the rest of my comment.
That said, I think you're right that without moderation, the quality of these platforms will suffer. I don't think it will look like Parler et al but rather like Mastodon which is more noisy than vile. But even still, I think it will drive people from these platforms, weakening them, and I think that's a net improvement over the status quo.
I also think there's another interesting possibility which may or may not be practical for other reasons: require social media companies to implement a common protocol (e.g., ActivityPub, the protocol Mastodon uses) such that these companies can continue to offer a moderated window into the underlying social network, but they don't own the social network. If you want, you can pack up and leave for another social media provider without leaving your friends and conversations. This weakens social media giants by breaking their monopoly and allowing competition from upstarts (including other revenue models besides ads) and it also allows these giants to moderate how they like.
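To make the "common protocol" idea concrete, here is a rough sketch of the kind of message a federated protocol like ActivityPub passes between servers. The field names follow the ActivityStreams vocabulary, but the domains and user names here are invented for illustration:

```python
import json

# A hypothetical "Create Note" activity, roughly the shape ActivityPub
# servers exchange when a user posts. The instance domain and user are
# made up; only the vocabulary ("Create", "Note", "actor") is standard.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example-instance.social/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "content": "Hello from a federated server!",
        "attributedTo": "https://example-instance.social/users/alice",
    },
}

# Any compliant server can receive and render this, which is what lets
# users move between providers without losing their network.
print(json.dumps(activity, indent=2))
```

The point is that the social graph lives in the protocol, not in any one company's database.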
Yeah, I don't know that I made it clear in my original post; I do not like having to depend on our benevolent overlords like Google and Twitter and Facebook to determine what I can handle. I think they have way too much power, I'm not opposed to breaking them up into smaller entities, and I don't think it's a dumb idea to push or require companies to use open protocols.
I like Mastodon, but part of me worries that the reason that Nazis haven't ruined it is largely because a lot of them don't know about it yet. That said, if we used a decentralized protocol like Mastodon, then theoretically individual servers could moderate their posts however they'd like, and if you don't like it you could hop over to another server.
I think we mostly agree. Specifically, we both seem to think it would be interesting and possibly ideal to require social media giants to implement an open protocol, but failing that I'm not sure whether we agree that regulating them like utilities (with the associated quality implications) is better than the status quo?
I think that a lot of awfulness that causes measurable harm [1] can easily spread through social networks and I think that YouTube or Twitter should be allowed to delete or block it if they think it's harmful. I'm fully ok with the people promoting this stuff making their own website, or moving to a website that's more accommodating to their harmful beliefs.
Obviously YouTube and Twitter can be a bit overzealous with who they ban. I think there's been some overcorrection for the fairly "laissez-faire" mentality they had towards neo-Nazis from ~2014-2017, and I've seen plenty of people get banned from these platforms for bad reasons as a result.
But I feel like we're arguing two different points; I think the issue isn't that the companies are allowed to ban whomever they'd like, I think the issue is that these companies shouldn't be so big that a ban from them is so devastating.
As it stands, I'll fully admit that my default response of "you're free to build your own platform" is a bit silly; building something that has the potential to distribute a message as well as Twitter or YouTube would be almost impossible for anyone without a lot of funding behind them to do. ActivityPub-esque systems have a nice potential to change this, and as stated, I'd be on-board with making these companies use open protocols so as to increase competition.
[1] Fake COVID cures, "alternative" medicine like drinking bleach or injecting yourself with ozone, HIV denialism, etc.
Not a slippery slope, it’s right there in the article:
> Depending on the specific contours of such regulation, social media sites could be forced to alter or do away with many of the moderation standards they use to keep harassment, hate speech and nudity off their platforms.
If Thomas had his way, he’d probably find a way to declare porn illegal, or subject to much stricter regulations.
I support the worst parts of humanity having the ability to speak online as much as we allow them to speak in real life. If we would regulate their public speech outdoors, I'm fine with regulating their public speech online.
Would we propose that racists be banned from speaking to each other in person?
No, of course not, and I'm not opposed to Nazis having their own websites like the Daily Stormer, or running their own Mastodon instance or something, or flocking to a "free speech" clone of Twitter like Parler, or uploading videos on Bitchute. I absolutely think that ISPs should be regulated like utilities.
Wait, so you're saying that terms of service shouldn't be allowed? What if I make a website that's meant to be a "one stop shop to watch dog (and only dog) videos", and people can upload videos of puppies playing around. Are you suggesting that I shouldn't be allowed to delete the video if someone uploads a video of a cat?
I don't know about everything, but that definitely makes sense for "general purpose" social media like Facebook, which has 2.7 billion active users. That's 34% of the world population. Probably includes a lot of bots, but still.
I’m saying we should change it so you have one of two choices for your website:
- let users upload videos of cars, puppies, and their kids (video platform; common carrier; immunized from copyright claims, etc)
- accept editorial control, and only post puppy videos, but assume liability for copyright violations (video provider; not common carrier; no immunity)
You’d be allowed to do either of those, but not the case now where you editorialize to just puppy videos but maintain immunity for that content.
As a business owner, you’d have to choose:
Are you a platform for others to host video or a provider of exclusively puppy videos?
Social media is not a single protocol or medium like the examples you give. It's a variable set of product offerings that - at this moment in time - happen to provide a similar set of features. Perhaps if you squint, these common features resemble a protocol, but they are not.
What consistent service is it you think they should be required to provide?
In what way does what you say not apply to cellphones?
I believe I’m better off for that regulation (in telecom) and believe that I would similarly benefit if social media platforms were also regulated as common carriers.
Edit — rate limited, so replying here:
The commonality is obvious:
Passing messages of various formats to users within your network of contacts and maintaining forwarding/access rules for those messages: almost exactly what cellular service does.
I wonder if there's even a reasonable way to delineate what should be regulated vs what should not. For traditional utilities, the line is pretty clear about who is providing me electricity. "Social media" and "cloud provider", however, are extremely vague terms. There are a few standout examples of what might count (FB, Twitter, ISPs), but there are innumerable examples of services that are clearly "social media" or a "cloud provider" but that clearly should not be subject to the burden of regulation. Maybe the distinction is as simple as userbase size, but that too seems like a too-crude distinction.
My solution is split services into “platforms” and “providers”, where only “platforms” enjoy immunity for user uploads but are subject to common carrier regulations in exchange.
It would be up to any company to decide if they’re a “platform” or “provider”.
Let people self-select if they view themselves as platforms or providers — with benefits and costs to each option.
Not everything should be a platform; I think companies like Twitter, FB, and YT would be happy to adopt platform/common carrier status if their immunity would otherwise be revoked.
Are there still edge cases? Yep.
But let’s not let perfect be the enemy of good — this change would improve the regulatory landscape in a way that helps end users.
Even the telephone company could decline to service customers, and often did - or if it did service them, put their use of the service under heightened scrutiny.
By extension it can, by declining to service some customers.
I think internet access is a great case for common carrier, where the service exists to put two parties in direct communication with one another. I don't know that it applies well to a website, which conceptually looks more like a community newspaper, with little editorial functionality.
We have to remember that not everyone agrees on the problem that needs to be solved, much less how to solve it.
On the surface, it may make sense to adopt a Slashdot-style moderation model, where moderation is delegated to user space and only overtly-libelous or illegal content is censored at the admin level. If moderation is derived solely by consensus of user opinion, and if each individual user gets to decide how much moderation if any is applied to the posts they see, that would seem to avoid the original problem of Trump whining because Twitter banned him for expressing his views.
However, if you ask Twitter, they'll say they banned him not because of his opinions, but because he was using their platform to illegally incite violence and undermine the democratic process.
That gets to the real heart of the problem: conservatives who believe that it's OK when they do it, whatever "it" is, and who behave accordingly. No amount of tinkering with the moderation approach will solve the problem if a subset of politicians in power believe that platforms should be forced to carry speech in their favor regardless of concerns about legality. The argument regarding moderation is a red herring.
Probably not - but there's a pretty strong correlation between people who think social media should be federally regulated and people who feel disproportionately moderated (fair or not) on the current web, and who believe they would be happier with zero value/quality/civility-based moderation (i.e. only legality moderation, as you suggested).
HN wouldn't be allowed to remove that content, but that's different from flagging it as inappropriate (as HN does now).
My vision is that HN's style of flagging, where the content is hidden by default but still available, would be acceptable, but the FB or Reddit style of actually removing the comment would not be.
My long-term goal in forcing content to remain available is to give users more control, such as user-side content filtering on places like HN.
I'm not against users filtering what they see themselves; I'm against the platform using its privileged position to do that filtering in a way that enforces its political vision.
It would never need to apply to HN. The site would never come close to meeting the definition of market dominance that Facebook, Google and other big tech platforms do that is drawing the regulated utility comparison.
The Federal Government is also not going to generally worry about applying anti-trust to the only pizza parlor in a small town or worry about forcing it to operate under competition and behavior agreements designed for giant monopolies.
Properly, we don't apply all competition rules and restraints to all companies/entities in the same way, as not all companies are equally powerful or capable of controlling commerce.
> The Federal Government is also not going to generally worry about applying anti-trust to the only pizza parlor in a small town or worry about forcing it to operate under competition and behavior agreements designed for giant monopolies.
Reality suggests otherwise.
The biggest companies have the most money to throw against enforcement and lobbying to protect themselves.
It's why the IRS pretty much stopped auditing the big fish and only goes after the little fish now.
I think you're right. Also, some of the market response is already there (although still very small compared to the big players). A good example is the Brave browser, which could change a lot of things once enough people start using it.
There is often a gulf dividing capitalists and free market advocates. Plenty of capitalists are more than willing to embrace regulation if it protects their interests. The likes of Facebook and Twitter may jump at the chance to cement themselves as monopolies via some kind of nationalizing/utility regulation.
At this stage, both major parties in the US seem to be moving in this direction. The Democrats are no surprise - what is a major surprise is how quickly the Republicans and people like Clarence Thomas are moving to agree. But getting these two sides to agree on the details of the implementation will be no small feat. So while I think some regulation is almost inevitable, I can't predict what it will look like.