This just isn't true. We can argue about whether or not Parler was moderating adequately, but to imply they did not have a policy against violence is false:
6. Threats of Violence, Advocacy of Imminent Lawless Action, Threats to Dox, Bribery or Criminal Solicitation
So-called “fighting words” are not a violation of our Guidelines.
6.1 However, reported parleys, comments, or messages sent using our service will be deemed a violation of these Guidelines if they contain:
6.1.1 a “serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals,” with either the intent or reckless disregard as to whether the communication will “place the victim in fear of bodily harm or death.”
6.1.2 an explicit or implicit encouragement to use violence, or to commit a lawless action, such that: (a) the Parleyer intends his or her speech to result in the use of violence or lawless action, and (b) the imminent use of violence or lawless action is the likely result of the parley, comment, or message
6.1.3 a threat to dox anyone for any reason.
6.1.4 an offer to give or receive money, or other goods or services, in exchange for (a) a favor from any public official, or (b) the commission of any illegal action by anyone
Since the previous comment where you said this is dead, I'll repeat my response here: Amazon presented Parler with a list of violent posts that they had reported to Parler and that Parler had refused to remove. The whole letter Amazon sent to Parler was reported out by Buzzfeed, including screenshots of the violent posts that Parler did not remove.
I don't know whether that's true or false but I also don't know why it matters. Absent a confidentiality clause AWS is free to publish whatever they want. Certainly, people routinely publish their private communications with AWS.
I may be misreading things, but it sounds to me like they published the list of things to remove, Parler removed them after notification, and they're using the publication to try to make it look like Parler was notified before it actually was.
But this is a complaint, so I can't just assume that any of these things are actually true, and that's why I want to reserve judgement until we hear how all the evidence plays out in court.
I think we agree: the material timing information here is "when did AWS alert Parler to violent material" and "when did Parler remove that violent material".
The policy means nothing if it isn't applied, and a lot of what happened on Wednesday (and way worse) was planned in the open on Parler. Parler's moderation also, from what I've heard, principally consisted of a community 'jury of peers': a reported post was shown to ~5 other users who decided whether it was a violation. That is incredibly insufficient and basically guarantees the policy will not be enforced.
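To make concrete what a flow like that amounts to, here's a minimal, purely hypothetical sketch; the 5-voter panel, random selection, and majority threshold are assumptions based only on the description above, not Parler's actual implementation:

```python
import random
from dataclasses import dataclass, field

@dataclass
class ReportedPost:
    post_id: str
    votes: list = field(default_factory=list)  # True = juror called it a violation

def assign_jury(all_user_ids, reporter_id, jury_size=5):
    """Pick ~5 other users at random to review a reported post (assumed flow)."""
    pool = [u for u in all_user_ids if u != reporter_id]
    return random.sample(pool, min(jury_size, len(pool)))

def jury_verdict(post: ReportedPost) -> bool:
    """Remove the post only if a majority of the jurors who voted flag it."""
    if not post.votes:
        return False  # nobody voted, so nothing happens and the post stays up
    return sum(post.votes) > len(post.votes) / 2

# Example: a reported post where 2 of 5 randomly chosen users call it a violation
post = ReportedPost("parley-123", votes=[True, True, False, False, False])
print(jury_verdict(post))  # False -> the post stays up
```

The point of the sketch is just that enforcement hinges entirely on whether a handful of randomly chosen users bother to vote, and vote a particular way; a reported post that never gathers a majority is simply never acted on.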
There's been reporting that Amazon sent them some specific... parleys (IDK what the Tweet equivalent is called over there) that they said violated their contract or something, and they weren't taken down. It's not about the speed either; it's that their moderation system was completely half-assed. Keep in mind the kind of talk and planning that led to Wednesday has been happening over there since the election, and similar "hang the traitors" talk has been going on much longer, basically since the site started.
Parler knowingly let all of these active discussions and organisation proceed. In practice, the policy does not exist.
Edit: I love how I’m being downvoted when quite frankly there are enough screenshots out there and on /r/ParlerWatch of their shit show of moderation. You’re deluded if you think there’s a moderation policy other than ignorance by design.
Tweets from Ayatollah Khomeini talking about how Israel is a cancer[1] that needs to be violently eradicated[2] apparently don't run afoul of the "Glorification of Violence" policy. OTOH Trump's 2 tweets, which were tame in comparison, were bannable.
Banning Ayatollah Khomeini from Twitter would probably cause an international incident. He might declare a fatwa[0] against Twitter, the US, or both. If none of these events happen, there's still a risk of Islamic terrorists attacking Twitter's offices. I'd probably leave his account in place too.
What is hypocritical about AWS's decision to terminate Parler?
AWS doesn't make any claims to providing universal service, and like most (if not all) service providers, it can terminate service for a variety of reasons.
I have no inside information but it was probably a combination of some of the following factors.
1. Compliance with the AWS Acceptable Use Policy (AUP). AWS says that Parler didn't comply with the AUP. All the facts won't come out unless this lawsuit goes to trial, and that seems unlikely. Parler's CEO doesn't seem to think moderation is important, so it's easy to believe that they didn't put a lot of effort into it. It's also possible that they can't afford to comply with the AUP, but that isn't Amazon's problem.
2. Cancel culture. It was only a matter of time before AWS started getting bad PR for having Parler as a client. Petitions for dropping them were already circulating internally and externally. It's easy to say that you shouldn't drop clients in response to pressure but there is a point at which doing so is a rational decision.
3. Liability. Parler was allegedly used for planning illegal activities. If another significant attack occurs then a class-action lawsuit will pursue both Parler and AWS for contributing to the attack - this happened with the September 11th attack. The costs of defending or settling this lawsuit could easily run into the millions.
If the complaint in the lawsuit is that the policy on websites with unmoderated violent content isn't applied equally, is it really "whataboutism" to point out that the policy isn't applied equally?
Hypothetically, how would you be able to prove that point without providing examples?
Edit: also fuck everyone related to Parler. Read in detail: https://www.buzzfeednews.com/article/johnpaczkowski/amazon-p...