Liability is unlimited, and the law makes no provision for being a single person or a small group of volunteers. You'll be held to the same standards as a behemoth with full-time lawyers (the stated target of the law but the least likely to be affected by it)
> the stated target of the law but the least likely to be affected by it
The least likely to be negatively affected. This will absolutely be good for them in that it just adds another item to the list of things that prevents new entrants from competing with them.
> The entire law is weaponised unintended consequences.
That would assume no malice from the government? Isn't the default assumption that every government wants to exert control over its population at this stage, even in "democracies"? There's nothing unintended here.
> 1.17 A U2U service is exempt if the only way users can communicate on it is by posting comments or reviews on the service provider’s own content (as distinct from another user’s content).
A blog is only exempt if users communicate with the blogpost author, on the topic of the blogpost. If they comment on each other's comments, or go off-topic, then the blog is not exempt.
That's why that exemption is basically useless. Anyone can write "hey commenter number 3 i agree commenter number 1's behaviour is shocking" and your exemption is out the window.
Yeah I see what you mean, that does seem oddly useless. And thanks for finding the correct section.
I'd like to say we could trust the implementation and enforcement of this law to make sense and follow the spirit of existing blog comment sections, rather than the letter of a law that could be twisted against almost anyone accepting comments (for most people, GDPR compliance enforcement has been a light touch, with warnings rather than immediate fines), but that's not really how laws should work.
There has been new information since that blog post which has reaffirmed the "this is much ado about nothing" takes because Ofcom have said that they do not want to be a burden on smaller sites.
"We’ve heard concerns from some smaller services that the new rules will be too burdensome for them. Some of them believe they don’t have the resources to dedicate to assessing risk on their platforms, and to making sure they have measures in place to help them comply with the rules. As a result, some smaller services feel they might need to shut down completely.
So, we wanted to reassure those smaller services that this is unlikely to be the case."
Nothing more reassuring than a vague “we’re unlikely to go after you [if you stay on our good side.]”
It’s clear the UK wants big monopolistic tech platforms to fully dominate their local market so they only have a few throats to choke when trying to control the narrative…just like “the good old days” of centralized media.
Don't stand in the way of authoritarians if you value your freedom (or the ability to have a bank account).
The risk just isn't worth it. You write a blog post that rubs someone power-adjacent the wrong way and suddenly you're getting the classic "...nice little blog you have there...would be a shame to find something that could be interpreted as violating 1 of our 17 problem areas..."
Changing the code of practice is a years-long statutory consultation process; they're not going to be able to change the rules to go after you on a whim.
If you can't find something to accuse someone of from among the SEVENTEEN "areas of focus" they've already designated in the law, then I would argue you lack the creativity and thirst for power needed to climb the ranks as a politician.
> So, we wanted to reassure those smaller services that this is unlikely to be the case
This is the flimsiest paper thin reassurance. They've built a gun with which they can destroy the lives of individuals hosting user generated content, but they've said they're unlikely to use it.
You can try the digital toolkit and see for yourself if this is a realistic pathway for a small site (such as a blog with a comment function). Personally, I find it puzzling that Ofcom thinks what they provide is helpful to small sites. Furthermore, they make it pretty clear that they see no reason for a purely size-based exemption (“we also know that harm can exist on the smallest as well as the largest services”). They do not explore ways to reach their goals without ongoing collaboration from small site owners, either.
Unless Ofcom actively say "we will NOT enforce the Online Safety Act against small blogs", the chilling effect is still there. Ofcom need to own this. Either they enforce the bad law, or loudly reject their masters' bidding. None of this "oh, I don't want to, but I've had to prosecute this crippled blind orphan support forum because one of them insulted Islam, but my hands are tied..."
The Canadian government did the same thing when they accidentally outlawed certain shotguns by restricting bore diameter without specifying it was for rifles.
A minister tweeted that it didn’t apply to shotguns, as if that’s legally binding as opposed to you know, the law as written.
The Democrats wrote a bill to hire 60k new armed IRS agents and promised they wouldn't be used to go after anyone with an income less than $250k. Senator Mike Crapo tried to add an amendment to put that in the bill, but they blocked it. We have a serious problem with politicians lying about the text of bills.
While I certainly would prefer that the IRS first and foremost go after tax evasion perpetrated by the wealthy (if for no other reason than there's likely more bang for the buck there), tax law is tax law. If someone making less than $250k/yr is evading paying taxes, the IRS should go after them just the same as if it were someone making $5M/yr.
Usually people complain that the IRS doesn't go after those making >$250k. I've never heard anyone argue that they don't go after those making <$250k enough. This is why the Democrats promised it would only be used to go after those making >$250k.
The problem is the dishonesty, saying the intent is one thing but being unwilling to codify the stated intent.
In order to go after everyone (or whatever arbitrary number we choose), it needs to be economically feasible. It is simple math and should be explained in simple math terms: it costs on average X to "go after someone"; if that amount exceeds the potential benefit (based, of course, on earnings), then it makes no sense to do it. Except we make this a political issue (as with everything else). Any sane person running the IRS would do the math and figure out the number above which it makes sense to go after someone.
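The "simple math" being described here is just an expected-value threshold; a minimal sketch (all figures are made up for illustration, not real IRS numbers):

```python
# Toy expected-value check: audit only when the expected recovery exceeds
# the cost of performing the audit. All numbers are hypothetical.
def worth_auditing(expected_recovery: float, audit_cost: float) -> bool:
    return expected_recovery > audit_cost

print(worth_auditing(expected_recovery=1_800, audit_cost=2_500))   # False: skip
print(worth_auditing(expected_recovery=40_000, audit_cost=2_500))  # True: audit
```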
Getting one person's taxes is not why you enforce, any more than why the police enforce other kinds of laws that don't have anything to do with bill collecting.
Whatever someone whose job and education is to develop such policies says. It's not my profession and it doesn't have to be.
The point is simply that even merely picking 1% or 0.1% of people completely at random to audit keeps 99% of normal people in line, which is far more valuable to society (not just in immediate dollars) than the cost of those few actual audits, regardless of what those audits "earn" in collecting a few, or zero, or indeed negative dollars that might have gone uncollected from a random individual. There is no reason an audit should not show that there was an error and the government owes the taxpayer, let alone collecting nothing or collecting less than the cost of the audit.
The police's job is not to recover your stolen lawnmower; it's to maintain order in general. They expend many thousands of dollars in resources to track down a lawnmower thief not to recover your $400 possession, but to inhibit the activity of theft in general.
Tax audits are, or should be IMO, like that.
The actual details of what should be written in the IRS manual are this: Something.
It's a meaningless question since we're not at that level. I'm only talking about the fallacy of treating tax audits as nothing more than a direct and immediate source of income instead of a means to maintain order and a far greater but indirect source of income.
> The police's job is not to recover your stolen lawnmower, it's to maintain order in general.
But here's the thing: it's often the case that the theft rate in an area is down to a handful of prolific thieves... who act with impunity because they reckon that any one act of theft won't be followed up.
I'd hope that in most jurisdictions, police keep track of who the prolific thieves/shoplifters/burglars/muggers are, and are also willing to look into individual thefts, etc., because even when it's the thief's first crime, there can often be an organised crime link - the newbie thief's drug dealer has asked them to do a "favour" to clear a debt, or such.
So it can be really useful to track down your lawnmower. Sometimes. And the police don't know if it's worth it or not until they do the work. I can see the parallels in this analogy to tax audits.
"Unlikely," I suppose if you don't have any significant assets to be seized and don't care about ending up in prison, you may be willing to take the chance.
Nothing reassures one as much as a government enforcement entity essentially saying "we have full legal right to squash you like a bug, but for now we won't, because we just don't want to. For now."
What standards would you want individuals or small groups to be held to? In a context where it is illegal for a company to allow hate speech or CSAM on their website, should individuals be allowed to? Or do you just mean the punishment should be less?
The obvious solution is to have law enforcement enforce the law rather than private parties. If someone posts something bad to your site, the police try to find who posted it and arrest them, and the only obligation on the website is to remove the content in response to a valid court order.
I don't have a strong view on this law – I haven't read enough into it. So I'm interested to know why you believe what you've just written. If a country is trying to, for example, make it harder for CSAM to be distributed, why shouldn't the person operating the site where it's being hosted have some responsibility to make sure it can't be hosted there?
For one thing, because that person is not obliged to follow due process and will likely ban everything that even might even vaguely require them to involve a lawyer. See for example YouTube’s copyright strikes, which are much harsher on the uploader than any existing copyright law.
Your argument is that it's better to have the illegal stuff (say, CSAM) online than for a site owner to, for practical reasons, ban a lot of legal stuff too? Why?
Some sorts of goods should be prioritized over some sorts of bads. There would be no terrorism if we locked every human in a box and kept them there, yet you do not support this position, why? I jest, but I think public discourse is an unalloyed good and I would rather we not compromise informal small discourse for the sake of anti-terrorism, anti-CSAM, etc. These things won’t be fully rooted out, they’ll just go to ground. Discourse will be harmed though.
Let's consider two ways of dealing with this problem:
1) Law enforcement enforces the law. People posting CSAM are investigated by the police, who have warrants and resources and so on, so each time they post something is another chance to get caught. When they get caught they go to jail and can't harm any more children.
2) Private parties try to enforce the law. The people posting CSAM get banned, but the site has no ability to incarcerate them, so they just make a new account and do it again. Since they can keep trying and the penalty is only having to create a new account, which they don't really care about, it becomes a cat and mouse game, except that even if the cat catches the mouse, the mouse just reappears under a different name with new knowledge of how to avoid getting caught next time. Since being detected has minimal risk, they get to try lots of strategies until they learn how to evade the cat, instead of getting eaten (i.e. going to prison) the first time they get caught. So they get better at evading detection, which makes it harder for law enforcement to catch them, too.
Meanwhile the site is under increasing pressure to "do something" because the problem has been made worse rather than better, so they turn up the false positives and cause more collateral damage to innocent people. But that doesn't change the dynamic; it only causes the criminals to evolve their tactics, which they can try an unlimited number of times until they learn how to evade detection again. And as soon as they do, the site, despite its best efforts, is hosting the material again.
The combined cost of the heroic efforts to try and the liability from inevitably failing destroys smaller sites and causes market consolidation. The megacorps then become a choke point for other censorship, some by various governments, some by the corporations themselves. That is an evil in itself, but if you like to take it from the other side: that evil causes ordinary people to chafe. So they start to develop and use anti-censorship technology. As that technology becomes more widespread with greater public support, the perpetrators of the crimes you're trying to prevent find it easier to avoid detection.
You want the police to arrest the pedos. You don't want a dystopian megacorp police state.
That is not the argument. The argument is that, with an appropriate court order, a site operator must take down the illegal material (if it hasn’t already been moderated out). However, the site owner should not be liable for that content appearing on their site, since it was not put there by them and since there is value in uncensored/unmoderated online communities. The person who posted the content should be liable, not the site owner. In neither case is the content just freely sitting there harming the public and unable to be removed because nobody is liable for punishment.
I think an interesting alternate angle here would be to require unmoderated community admins to keep record of real identity info for participants, so if something bad shows up the person who posted it is trivially identifiable and can easily be reprimanded. This has other problems, of course, but is interesting to consider.
Individuals and small groups shouldn't be held directly liable for comments on their blog unless it's proven they're responsible for inculcating that environment.
"Safe harbour" - if someone threatens legal action, the host can pass on liability to the poster of the comment. They can (temporarily) hide/remove the comment until a court decides on its legality.
This is an honest question: why does a blog need to shut down? If they moderate every comment before it is published on the website, what's the problem? I ask because I've got a UK-based blog too. It has a comments feature. Wouldn't enabling moderation for all comments be enough?
No, you still need to do things like write an impact assessment etc., and you're still on the hook for "illegal" comments, where you aren't a judge yet have to arbitrarily decide what might be illegal with no legal expertise whatsoever.
If I'm moderating all comments before they're published on the website, what's the problem? I mean, I've got a simple tech blog. I'm not going to publish random drive-by comments. Only comments that relate to my blog are ever going to be published. Am I making sense?
Does anyone in your blog comments ever discuss circumvention of DRM?
That's a criminal offence in the UK (two year prison sentence in some circumstances). Do you have a good feeling for what might count as incitement in those circumstances?
Doesn't this act effectively create a new form of DDoS? A bad actor can flood a platform with enough hate content that the moderation team simply cannot keep up. Even if posts default to not show, the backlog could be enough to harm a service.
And of course, it will turn into yet another game of cat and mouse, as bad actors find new creative ways to bypass automatic censors.
This list should be ordered by number of affected users rather than alphabetically, IMO. The 275K-monthly-user platform is almost hidden next to the 49- and 300-user examples.
Note that the definition of "Services provided by persons providing education or childcare." is defined in law and narrow. I'm working with a charity providing online lessons for use in schools, where school children can post stuff others in their class can see. As far as I can discern they don’t fall into this exemption.
The trouble here is that the law is so crazy that third parties allowing users in the relevant jurisdiction to access the site could result in the site still being liable, so then they would have the same reason to block your proxy service if a non-trivial number of people were using it.
To do any good you don't want to cause grief for the victims of the crazy law, you want to cause grief to its perpetrators.
Which would in turn cause the whole thing to be a farce, because then the solution would be for every site to geoblock the UK and then every person in the UK to use a proxy.
The problem here is that's what's going to happen instead of what ought to happen, which is repealing the law. And then some administration some years from now is going to stop ignoring the absurdity of it and decide to start enforcing the existing law against any subset of the ~everyone arguably now violating it that they deem to be wrongthinkers.
Being unlawful is a vital tool for people to keep tyranny in check; I would hope that most people are incredibly strong supporters of lawlessness when the laws are wrong. To give an extreme example, I imagine you would have supported the hiding of Jewish people during Nazi Germany's reign, which means you support unlawful activity as long as it's against laws that are against the people.
If GP is not a UK citizen and does not live in the UK, how would that be unlawful? They're not beholden to or subject to UK law. The UK's belief that they can enforce this law on non-UK entities is ridiculous.
International law limits state jurisdiction to territorial boundaries (Art. 2(1) UN Charter). Hacker News is a US web site and Y Combinator LLC is a US company. The OSA, which is a UK law, cannot mandate physical enforcement (e.g., server seizures) on foreign soil. If they really didn't like HN, the UK government could try to suppress HN access for their citizens by local means. If HN had a branch in the UK, the UK government could take action against that branch. As far as I know that's not the case.
Yes, but I don't really understand how the UK can expect to enforce this law against non-UK entities that don't have any employees or physical presence in the UK.
HN/YC could just tell them to go pound sand, no? (Assuming YC doesn't have any operations in the UK; I have no idea.)
I'm sure they can find a community elsewhere. Discord comes to mind... "Oh but it's illegal", trust me on this: Discord only cares if somebody actually reports the server and the violations are severe enough.
But why should they _have_ to find a community elsewhere?
Is it right that a country should snuff out all communities, large and small, and drive them to hosting in another country, or "under the wing" of a behemoth with a fully-funded legal department?
It's never the stated purpose of the law, but we might do well to be concerned with what it actually does rather than what the proponents claim it would do.
Recourse doesn't matter for a sole proprietorship. If they have to engage with a lawyer whatsoever, the site is dead or blocked because they don't have the resources for that.
I’d encourage you to read the actual text of the law and not just others’ interpretation. The sole proprietorship likely falls into one of the exception clauses or is likely using a platform.
I feel like you're not understanding; people aren't going to read the law, because the law is non-trivial and they don't feel comfortable doing that themselves instead of hiring a lawyer. But they can't afford to hire a lawyer, so instead they're going to block.
What recourse? A small, 50-member community doesn't have the resources to ensure they're in compliance, and Ofcom's statement about how smaller players are "unlikely" to be affected is not particularly reassuring.
The "stated purpose" is irrelevant. Even if they are being honest about their stated purpose (questionable), the only thing that matters is how it ends up playing out in reality.
Read the actual text of the law. It creates opportunities to appeal and other options. Another option is for these communities to band together, which seems to be happening.
Right or wrong, judging by people's reasoning, I think many have misread the legislation or read poor coverage of it.
Much of this boils down to doing a risk assessment and deciding on mitigations.
Unfortunately we live in a world where if you allow users to upload and share images, with zero checks, you are disturbingly likely to end up hosting CSAM.
Ofcom have guides, risk assessment tools and more; if you think any of this is relevant to you, that's a good place to start.
The new rules cover any kind of illegal content that can appear online, but the Act includes a list of specific offences that you should consider. These are:
terrorism
child sexual exploitation and abuse (CSEA) offences, including
grooming
image-based child sexual abuse material (CSAM)
CSAM URLs
hate
harassment, stalking, threats and abuse
controlling or coercive behaviour
intimate image abuse
extreme pornography
sexual exploitation of adults
human trafficking
unlawful immigration
fraud and financial offences
proceeds of crime
drugs and psychoactive substances
firearms, knives and other weapons
encouraging or assisting suicide
foreign interference
animal cruelty
> Something is a hate incident if the victim or anyone else think it was motivated by hostility or prejudice based on: disability, race, religion, gender identity or sexual orientation.
This probably worries platforms that need to moderate content. Sure, perhaps 80% of the cases are clear cut, but it’s the 20% that get missed and turn into criminal liability that would be the most concerning. Not to mention a post from one year ago can become criminal if someone suddenly decides it was motivated by one of these factors.
Further, prejudices in terms of language do change often. As bad actors get censored based on certain language, they will evolve to use other words/phrases to mean the same thing. The government is far more likely to be aware of these (and be able to prosecute them) than some random forum owner.
It's important to understand that the act we're talking about does not make owners simply liable for stuff that happens on their sites, nor does it require them to stop everything. It's about what the risks are of these things happening, and what you do about that.
In fact, if you have had a place that people can report abuse and it's just not really happening much then you can say you're low risk for that. That's in some of the examples.
> Not to mention a post from one year ago can become criminal if someone suddenly decides it was motivated by one of these factors.
Just want to add that I couldn't find any references to gender identity in the linked Wikipedia article as well as the article on hate incidents in the UK.
which is an umbrella term for everything that the government does not like right now, and does not mind jailing you for. In other words, it's their way to kill the freedom of expression.
From that list I don't see HN being affected, although I read somewhere that a report button on user generated content was required to comply for smaller sites.
I think it's hard to make the case, for anything other than a pretty tiny group or organisation, that you can get away without having some reporting and moderation process.
I don't think you need a report button but a known way of reporting things by your users is likely going to be required if you have a load of user generated stuff that's not moderated by default.
I might be falling for what I've read second-hand but isn't one of the issues that it doesn't matter where the forum is based, if you've got significant UK users it can apply to your forum hosted wherever. You've got to block UK users.
A forum that merely has good moderation is not automatically compliant with the act. It requires not just doing things, but paperwork that shows that you are doing things. The effort to do this well enough to be sure you will be in compliance is far beyond what is reasonable to ask of hobbyists.
On my single-user Fedi server, the only person who can directly upload and share images is me. But because my profile is public, it's entirely possible that someone I'm following posts something objectionable (either intentionally or via exploitation) and it would be visible via my server (albeit fetched from the remote site.) Does that come under "moderation"? Ofcom haven't been clear. And if someone can post pornography, your site needs age verification. Does my single-user Fedi instance now need age verification because a random child might look at my profile and see a remotely-hosted pornographic image that someone (not on my instance) has posted? Ofcom, again, have not been clear.
It's a crapshoot with high stakes and only one side knows the rules.
> Then you don't have a user to user service you're running, right?
"The Act’s duties apply to search services and services that allow users to post content online or to interact with each other."[0]
My instance does allow users (me) to post content online and, technically, depending on how you define "user", it does allow me to interact with other "users". Problem is that the act and Ofcom haven't clearly defined what "other users of that service" means - a bare reading would interpret it as "users who have accounts/whatever on the same system", yes, and that's what I'm going with but it's a risk if they then say "actually, it means anyone who can interact with your content from other systems"[2] (although I believe they do have a carve out for news sites, etc., re: "people can only interact with content posted by the service" which may also cover a small single-user Fedi instance. But who knows? I certainly can't afford a lawyer or solicitor to give me guidance for each of my servers that could fall under OSA - that's into double digits right now.)
[2] "To be considered a user of a user-to-user service for a month, a person doesn’t need to post anything. Just viewing content on a user-to-user service is enough to count as using that service." from https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
You're right. Plus, the overreactions have been walked back or solved in some cases, e.g.: LFGSS is going to continue on as a community-run effort which will comply with the risk assessment requirements. Most of the shutdowns are of long-dead forums that have been in need of an excuse to shutter. The number of active users impacted by these shutdowns probably doesn't break 100.
The hosting platform is responsible for compliance. For Discord or Slack it's easy, but for Matrix, it might be more fuzzy. Certainly the homeserver that is hosting a room would be responsible, but would other homeservers that have users who are members of the room also be responsible?
The Chaos Engine forums - a site for game developers to discuss, moan, and celebrate fellow and former colleagues... Now moved to Discord due to this act. It really is a strange time we are living through.
The State of Utopia has published this report on the source of funding of Ofcom, the U.K. statutory regulator responsible for enforcing the Online Safety Act:
I am part of a small specialist online technical community. We just moved it over to a Hetzner box in Germany and someone there is paying for it instead of it being hosted in the UK.
If you live in the UK and can still be linked as an operator/organizer of the site (or if it's not you, other UK residents), can't they still come after you directly? I don't know about you, but I don't think running an online community would be worth huge fines to me.
Apparently the law is dreadfully written. I was reading the lobste.rs thread and wow, it’s like they took a programming course in goto and if statements and applied it to the law…
I had the complete opposite impression from that thread. It seemed like people were politically motivated to interpret the law in a certain way, so they could act like they were being coerced.
These closures are acts of protest, essentially.
I agree with @teymour's description of the law. It is totally normal legislation.
Not only is this law terrible, there are several other laws like this that have existed for years.
People saying criticism is politically motivated (ignoring the fact that this law was drafted by the Tories and passed by Labour...so I am not exactly clear what the imagined motivation might be) ignore the fact that the UK has had this trend in law for a long time and the outcome has generally been negative (or, at best, a massive waste of resources).
Legislation has a context: if we lived in a country where police behaved sensibly, I could reasonably see how someone could believe this was sensible... that isn't reality though. Police have a maximalist interpretation of their powers (for example, non-crime hate incidents... there is no legislation governing their use, they are used regularly to "question the thinking" of people who write critical things about politicians, usually local, or the police... no appointed authority gave them this power, their usage has been questioned by ministers... and they still register hundreds of thousands of them a year).
Btw, if you want to know how the sausage is made: security services/police want these laws, some event happens, and then there is a coordinated campaign with the media (the favour is usually swapped for leaks later) to build up "public support" (not actual support, just the appearance of support), meetings with ministers are arranged "look at the headlines"...this Act wasn't some organic act of legislative genius, it was the outcome of a targeted media campaign from an incident that, in factual terms, is unrelated with what the Act eventually became (if this sounds implausible, remember that May gave Nissan £30m on the back of SMMT organising about a week's worth of negative headlines, remember that Johnson brought in about 4m migrants off the back of about two days of briefing against him by a six-month old lobbying group from hotels and poultry slaughterhouses...this is actually how the govt works...no-one reads papers apart from politicians).
Giving Ofcom this power, if you are familiar with their operations, is an act of literal insanity. Their budget has exploded (I believe it's near a quarter of a billion now). If you think tech companies are actually going to enforce our laws for us, you are wrong. But suggesting that Ofcom, with their new legions of civil servants, is supposed to be the watchdog of online content... it makes no sense, and it cannot be described as "totally normal" in any country other than China.
My unpopular opinion is that this has been a long time coming. Smaller sites that do not have strong technical knowledge of security or IT maintenance are targets for botnets and scammers to host on.
It's the same as arguing that Health and Safety regulation harms small businesses because they have to ensure the safety of their employees and customers. Or that it's not conducive to small businesses to have employment law.
Companies have had decades to switch to safe and secure online businesses, but the self-regulation never materialised.
Site owners are not going to be arrested willy-nilly. All that is needed is to show that you have the administration in place to deal with complaints from the public and enquiries from OFCOM. If you host illegal content, you have to have someone around to deal with it. If you host hate speech, then you have to deal with the consequences. Nobody has been prosecuted for saying the government is rubbish, but they will be for advocating physical harm against people in our society.
This will create digital management agencies that act as proxies to OFCOM, it may even create a cottage industry of remote working digital administrators.
These changes should be embraced as an opportunity, and fought when needed, but this isn't anything but law enforcement.
The conspiracy bunnies are hopping mad, citing some sort of tyrannical destruction of liberty, but it's not.
> This will create digital management agencies that act as proxies to OFCOM, it may even create a cottage industry of remote working digital administrators.
So bullshit jobs that do nothing productive but are there for "compliance". I think we have enough of that, thanks.
What concept allows the UK to (attempt to) enforce this against non citizens whose business or life has no ties to their country?
Plenty of small countries have odd censorship laws but have escaped similar legal hand wringing.
Feel free to put a stop to it by buying liability insurance for all of these service providers, which you may have to persuade the underwriter should be free. ;-)
> It's like local US news websites blocking European users over GDPR concerns.
I don't know if you said this sarcastically, but I have a friend in Switzerland who reads U.S. news websites via Web Archive or Archive IS exactly because of that.
Accessing some of these news sites returns CloudFlare's "not available" in your region message or similar.
It's not just the EU; I'm in a poorer region outside the EU and seeing "not available in your region" is quickly becoming the norm. Site administrators try to cut down on bot traffic (scraping, vulnerability scanners, denial of service, etc) and block whole regions they're not interested in.
Hell, we do that ourselves, but only for our own infrastructure that isn't expected to be used outside the country. Whitelisting your own country and blocking everything else cuts out >99% of scrapers and script kiddies.
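For what it's worth, a minimal sketch of that kind of country allow-listing at the application layer, assuming the MaxMind GeoLite2 country database and the geoip2 Python package (the database path and the "DE" allow-list are placeholders); in practice most people do this at the firewall or CDN level instead:

```python
import geoip2.database
import geoip2.errors

ALLOWED_COUNTRIES = {"DE"}  # only serve requests from your own country
reader = geoip2.database.Reader("/var/lib/GeoIP/GeoLite2-Country.mmdb")

def is_allowed(client_ip: str) -> bool:
    try:
        country = reader.country(client_ip).country.iso_code
    except geoip2.errors.AddressNotFoundError:
        return False  # unknown origin: treat it like everything else and block
    return country in ALLOWED_COUNTRIES
```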
No sarcasm. I totally understand why a local news website in the US would just block, since traffic from outside the country is irrelevant to them and they have few resources. I don't judge them for blocking.
The fact is that it's very unlikely they would ever face any issues from leaving it unblocked.
http://www.antipope.org/charlie/blog-static/2024/12/storm-cl...
The entire law is weaponised unintended consequences.