Is it? Those who want to DDoS will always find a way, and meanwhile users with slightly odd hardware or software get locked out. Admittedly the latter is a minority, but one of the key tenets of the Internet, and a large part of what made it so successful, was interoperability. This is, in some ways, even worse than (though partly a downstream effect of) the effective browser monopoly.
Calling it "security" when it's really about "availability" is another misdirection, because "security" is the word that more easily sways the sheeple.
I don't really like to go to extremes, but fingerprinting clients and granting or denying access on that basis should really be regarded as the moral equivalent of racial profiling.
Profiling happens all the time for entirely legitimate reasons in the real world. Racial profiling is immoral not because it is profiling; it is immoral because it is based on immutable characteristics which have no intrinsic bearing on the purpose of the profile.
To tie this back to the topic at hand, you're complaining that a service has decided your traffic resembles known patterns from bad actors, and is asking you to go through an extra step to access the content.
Are there better options? Maybe, but it's utterly asinine to compare what Cloudflare is doing to racial profiling.
This is like saying no lock will stop a thief. Cost and difficulty matter: requiring a full browser raises the bar enough that some attackers will give up and others won’t be able to send as much traffic. That’s not perfect, but speaking from experience, a surprising fraction of attackers give up after a naive attack fails.
It’s accurate. That’s a marketing page for a DDoS prevention service, so it’s not an unbiased source, and it’s especially important to remember the distinction between traffic hitting something like an edge node and traffic actually reaching the target and causing harm. I see attacks fairly regularly (politics), but in most cases that means I see 15M block events for “GET /” in Cloudflare’s dashboard and no actual impact on the service, because the requests are dropped quickly at locations around the world or, if they faked real browsers, they got a bunch of cache hits.
In other cases, people try more sophisticated attacks (e.g. posting random terms to a search page to avoid caching), and that’s more of a problem, but it’s probably around 1% of the total traffic: it’s moved out of script kiddie territory into something requiring real skill, and people don’t generally invest that without a way to make money from it. One challenge with a DDoS in that regard is that it’s not subtle, so your ability to wage an attack goes away relatively quickly without constant work replacing systems that are taken offline by a remote ISP.
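
To make the "out of script kiddie territory" point concrete: the usual first defense against cache-busting floods like that is per-client rate limiting on the uncacheable paths. Here's a minimal token-bucket sketch in Python; the names and thresholds are illustrative, and in practice this runs at the edge (CDN/WAF), not in the application:

    # Minimal sketch: token-bucket rate limiting keyed by client IP,
    # applied to uncacheable endpoints such as a search page.
    # RATE and BURST are illustrative values, not recommendations.
    import time

    RATE = 2.0    # tokens refilled per second
    BURST = 10.0  # bucket capacity (max burst size)

    _buckets: dict[str, tuple[float, float]] = {}  # ip -> (tokens, last seen)

    def allow(ip: str, now: float | None = None) -> bool:
        """Return True if this request fits within the rate limit."""
        if now is None:
            now = time.monotonic()
        tokens, last = _buckets.get(ip, (BURST, now))
        tokens = min(BURST, tokens + (now - last) * RATE)  # refill since last request
        if tokens < 1.0:
            _buckets[ip] = (tokens, now)  # over budget: drop the request
            return False
        _buckets[ip] = (tokens - 1.0, now)  # spend one token
        return True

    if __name__ == "__main__":
        # A burst of 15 search requests from one IP at the same instant:
        # the first 10 pass (the burst allowance), the remaining 5 are dropped.
        results = [allow("203.0.113.7", now=100.0) for _ in range(15)]
        print(results.count(True), "allowed,", results.count(False), "blocked")

The limitation, of course, is that a distributed attack spreads across many IPs, which is exactly why services fall back on the browser checks being debated above.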
If we weren't pretty good at stopping DDoS attacks, every major hosting provider would be offline daily. Yet, websites being inaccessible for me is fairly uncommon.