Right or wrong, judging by people's reasoning, I think many have either misread the legislation or only read poor coverage of it.
Much of it boils down to doing a risk assessment and deciding on mitigations.
Unfortunately we live in a world where if you allow users to upload and share images, with zero checks, you are disturbingly likely to end up hosting CSAM.
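To make "checks" concrete, here's a minimal sketch of the shape of a pre-publication screen. The set name, function name and exact-hash approach are all my own illustration; a real deployment would use a vetted perceptual-hash matching service rather than plain SHA-256 comparison:

```python
import hashlib

# Placeholder: in reality this would be a vetted hash list / matching service,
# not a plain SHA-256 set. Real CSAM matching uses perceptual hashing via
# industry services, which this sketch does not attempt to reproduce.
KNOWN_BAD_HASHES: set[str] = set()

def screen_upload(image_bytes: bytes) -> bool:
    """Return True if the upload may proceed, False if it should be blocked
    and escalated for human review / reporting."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        return False  # block and escalate, never just silently drop
    return True
```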
Ofcom have guides, risk assessment tools and more; if you think any of this is relevant to you, that's a good place to start: https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
The new rules cover any kind of illegal content that can appear online, but the Act includes a list of specific offences that you should consider. These are:
- terrorism
- child sexual exploitation and abuse (CSEA) offences, including:
  - grooming
  - image-based child sexual abuse material (CSAM)
  - CSAM URLs
- hate
- harassment, stalking, threats and abuse
- controlling or coercive behaviour
- intimate image abuse
- extreme pornography
- sexual exploitation of adults
- human trafficking
- unlawful immigration
- fraud and financial offences
- proceeds of crime
- drugs and psychoactive substances
- firearms, knives and other weapons
- encouraging or assisting suicide
- foreign interference
- animal cruelty
> Something is a hate incident if the victim or anyone else think it was motivated by hostility or prejudice based on: disability, race, religion, gender identity or sexual orientation.
This probably worries platforms that need to moderate content. Sure, perhaps 80% of the cases are clear cut, but it’s the 20% that get missed and turn into criminal liability that would be the most concerning. Not to mention a post from one year ago can become criminal if someone suddenly decides it was motivated by one of these factors.
Further, the language of prejudice changes often. As bad actors get censored based on certain language, they will evolve to use other words and phrases to mean the same thing. The government is far more likely to be aware of these (and be able to prosecute them) than some random forum owner.
It's important to understand that the act we're talking about does not simply make owners liable for everything that happens on their sites, nor does it require them to prevent everything. It's about what the risks are of these things happening, and what you do about that.
In fact, if you have a way for people to report abuse and it just isn't really happening much, then you can record that you're low risk for that. That's covered in some of Ofcom's worked examples.
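For what it's worth, "saying you're low risk" really just means writing it down with your reasoning. A rough sketch of the kind of record I mean, purely illustrative and not an Ofcom template (the structure and field names are my own):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    # Illustrative structure only -- not an Ofcom template.
    offence_area: str                 # e.g. "image-based CSAM", "harassment"
    risk_level: str                   # "low" / "medium" / "high"
    evidence: str                     # why you judged it that way
    mitigations: list[str] = field(default_factory=list)

assessment = [
    RiskEntry(
        offence_area="image-based CSAM",
        risk_level="low",
        evidence="No image uploads enabled; zero user reports in the last 12 months.",
        mitigations=["report link in footer", "single moderator reviews reports weekly"],
    ),
]

print(f"Risk assessment dated {date.today()}: {len(assessment)} entries")
```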
> Not to mention a post from one year ago can become criminal if someone suddenly decides it was motivated by one of these factors.
Just want to add that I couldn't find any reference to gender identity in the linked Wikipedia article, nor in the article on hate incidents in the UK.
which is an umbrella term for everything the government does not like right now, and does not mind jailing you for. In other words, it's their way of killing freedom of expression.
From that list I don't see HN being affected, although I read somewhere that a report button on user-generated content was required for smaller sites to comply.
I think it's hard to make the case, for anything other than a pretty tiny group or organisation, that you can get away without having some reporting and moderation process.
I don't think you need a report button, but a known way for your users to report things is likely going to be required if you have a load of user-generated content that isn't moderated by default.
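For a small forum, that "known way of reporting things" can be as basic as a form posting to an endpoint that logs the report for a moderator to review. A minimal sketch, assuming Flask; the endpoint path, field names and in-memory list are my own placeholders:

```python
from datetime import datetime, timezone
from flask import Flask, request

app = Flask(__name__)
REPORTS: list[dict] = []  # in practice: a database table plus an alert to the moderator

@app.route("/report", methods=["POST"])
def report_content():
    # Record what was reported, why, and when, so a moderator can review it
    # and you can evidence that the process exists and is acted on.
    REPORTS.append({
        "content_url": request.form.get("content_url", ""),
        "reason": request.form.get("reason", ""),
        "received_at": datetime.now(timezone.utc).isoformat(),
    })
    return "Thanks - your report has been logged for review.", 200
```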
I might be falling for what I've read second-hand, but isn't one of the issues that it doesn't matter where the forum is based? If you've got significant numbers of UK users, it can apply to your forum wherever it's hosted, and you've got to block UK users.
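If a site did go down the "block UK users" route, the usual (imperfect) approach is IP geolocation. A rough sketch using the geoip2 package; the database file path is an assumption, and VPN users will slip through regardless:

```python
import geoip2.database  # MaxMind's geoip2 package; needs a GeoLite2 database file
import geoip2.errors

# Path is an assumption -- point this at your own downloaded database.
reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

def is_uk_visitor(ip_address: str) -> bool:
    """Best-effort check of whether a request appears to come from the UK."""
    try:
        return reader.country(ip_address).country.iso_code == "GB"
    except geoip2.errors.AddressNotFoundError:
        return False  # unknown location: decide your own policy here
```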
A forum that merely has good moderation is not automatically compliant with the act. It requires not just doing things, but paperwork that shows that you are doing things. The effort to do this well enough to be sure you will be in compliance is far beyond what is reasonable to ask of hobbyists.
On my single-user Fedi server, the only person who can directly upload and share images is me. But because my profile is public, it's entirely possible that someone I'm following posts something objectionable (either intentionally or via exploitation) and it would be visible via my server (albeit fetched from the remote site.) Does that come under "moderation"? Ofcom haven't been clear. And if someone can post pornography, your site needs age verification. Does my single-user Fedi instance now need age verification because a random child might look at my profile and see a remotely-hosted pornographic image that someone (not on my instance) has posted? Ofcom, again, have not been clear.
It's a crapshoot with high stakes and only one side knows the rules.
> Then you don't have a user to user service you're running, right?
"The Act’s duties apply to search services and services that allow users to post content online or to interact with each other."[0]
My instance does allow users (me) to post content online and, technically, depending on how you define "user", it does allow me to interact with other "users". The problem is that the Act and Ofcom haven't clearly defined what "other users of that service" means. A bare reading would interpret it as "users who have accounts/whatever on the same system", and that's what I'm going with, but it's a risk if they then say "actually, it means anyone who can interact with your content from other systems"[2]. (I believe there is a carve-out for services where "people can only interact with content posted by the service", e.g. news sites, which may also cover a small single-user Fedi instance. But who knows? I certainly can't afford a lawyer or solicitor to give me guidance for each of my servers that could fall under the OSA - that's into double digits right now.)
[2] "To be considered a user of a user-to-user service for a month, a person doesn’t need to post anything. Just viewing content on a user-to-user service is enough to count as using that service." from https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
You're right. Plus, the overreactions have been walked back or resolved in some cases, e.g. LFGSS is going to continue as a community-run effort that will comply with the risk assessment requirements. Most of the shutdowns are of long-dead forums that were already in need of an excuse to shutter. The number of active users impacted by these shutdowns probably doesn't break 100.