Telia had problems hitting many service providers. Not a Cloudflare problem per se. Plenty of non-Cloudflare stuff affected like Reddit, AWS, Fastly, ...
Obligatory comment that I (and others) make every single time: can we please, for <insert deity name>'s sake, stop centralizing everything? We are literally throwing away all the benefits of a mostly-decentralized internet for the sake of convenience.
You have to realize that DDoS mitigators are in a position to not stop attacks. They get paid more money when attacks happen, so any company whose sole purpose is mitigation has a major conflict of interest. A small site can easily be hosted on AWS, which has its own protection that is transparent. Any other cloud provider should offer it transparently anyway.
I absolutely hate people who claim Cloudflare is their only solution for mitigation/protection, because it simply isn't true, and Cloudflare does some rather shady stuff.
I feel like saying DDoS mitigators are in a position to not stop attacks is akin to saying car insurance companies are in a position to not stop car accidents. I think the value prop is the quality of the service WHEN the attacks happen, and when they aren't happening it is effectively an insurance-like business. However if I get DDoS'd and my mitigator does nothing, one would think they would eventually be overtaken by a more competent competitor.
Your analogy is accurate, but...
If you don't have a mitigator, they have an incentive to force you onto one; if you are already on one, their incentive is throttling, or otherwise 'attacking' (loosely defined) your source.
With car insurance, the insurance company has an incentive to mitigate their risk (they don't want to shell out more than they need to), charging more if you are higher risk. They don't want to take more risk than they have to. Key point: they evaluate risk on a case-by-case basis.
DDoS mitigators, however, have already invested in the risk by buying the hardware to handle the bandwidth. They don't care whether you are attacked or not. Nothing then stops them from playing dirty.
This kind of stuff frequently happened with Minecraft servers (what feels like) ages ago. Mitigation services would go out and attack servers and competitors to get customers to switch to them.
A good DDoS mitigation service can take the brunt of the attack and so you stay online. So, it's not exactly like car insurance companies unless insurance companies actually were able to put a steel wall in front of your car to prevent accidents.
> I feel like saying DDoS mitigators are in a position to not stop attacks is akin to saying car insurance companies are in a position to not stop car accidents.
Only if there's no overcharge when an attack happens. If there is, you are in the conflict of interest situation the GP was talking about.
Idea: let's proxy half the internet through a private, proprietary service! We can get people to give us valid SSL certificates for their sites, too, and let's fuck up Tor while we're at it. We can totally handle it, right? Oh, and we need to pay for it somehow, so let's go the venture capital approach and just pretend we won't eventually hit a growth cap and ruin our company Twitter-style when we get there.
I love a good pitchfork and torch session as much as the next rabble-rouser, but let's remember Cloudflare got popular because they _solved_ a hard problem: how to deal with a DDoS as a small or medium-sized website. Cloudflare is essentially a for-profit insurance pool for bandwidth. No individual site has enough bandwidth to handle a DDoS, nor can afford it, but pooled together, many sites can afford a service that can handle individual DDoS attacks. Even if you want to solve the problem without a profit motive, you still end up with a solution that is going to look very similar to Cloudflare.
Exactly. You don't solve a DDoS problem by having less capacity than your attacker and most individual companies can never afford the amount of bandwidth that is at Cloudflare's disposal.
Centralization was absolutely the best answer to that problem and will be for a long time. Almost nobody but fortune 500 companies would be able to survive a DDoS otherwise.
For CDN, yes. For DDoS mitigation, no. Before they removed the figure from their website, they claimed about 1/5th the DDoS mitigation capacity of Cloudflare.
That doesn't matter at all. They've still got tons of capacity.
> You don't solve a DDoS problem by having less capacity than your attacker and most individual companies can never afford the amount of bandwidth that is at Cloudflare's disposal.
You can have less capacity than Akamai (many excellent providers have less) and still serve this purpose.
"Worked around" is more appropriate, and the workaround introduced huge problems of its own.
The correct solution is to punish ISPs that permit this behavior to continue unchecked. We need offense, not defense. Any ISP that doesn't detect and kill DDoS participants needs to be severely throttled by other ISPs. Organizations like the FCC should be tackling this and levying fines against US-based ISPs for non-compliance and lobbying for foreign policies that punish foreign ISPs.
It's really hard to know what constitutes DDoS traffic at times. Suppose a Netflix show got really popular; do you cut it off? Let's make an exception for Netflix. What if a new competitor, blahflix, got popular quickly? Does its traffic get blocked?
Oh wait, now blahflix needs to pay $$$ to get special privileges. Shit gets hairy real quick.
Suppose the DDoS comes from IoT devices. One of these is an important medical device that got hacked. Do you auto-shut it down and block its traffic? What about a secure, life-critical device behind the same IP through NAT that also gets blocked?
ISPs should remain dumb pipes. You really don't want to give comcast more power.
>It's really hard to know what constitutes DDoS traffic at times. Suppose a Netflix show got really popular; do you cut it off? Let's make an exception for Netflix. What if a new competitor, blahflix, got popular quickly? Does its traffic get blocked?
Well, presumably companies have arrangements with their ISPs for expected usage and such. There can be a grace period as well, when you hit up the user and say "hey, you're using a lot of bw, is all well?" You also combine this with abuse reports from the victims if a DDoS is in fact underway. I don't think it's bad for an ISP to establish trust with a customer, either, this already happens with things like DMCA requests.
>Suppose the DDoS comes from IoT devices. One of these is an important medical device that got hacked. Do you auto-shut it down and block its traffic? What about a secure, life-critical device behind the same IP through NAT that also gets blocked?
Life-critical devices aren't exposed to the internet. IoT users should get throttled and receive a communication from their ISP telling them they have a malicious device on their network, with advice on how to fix the problem.
"One of this is an important medical device that got hacked"
If someone puts "an important medical device" on a network directly accessible from the internet, or on the same network as other IoT crap devices, they should be banned from ever working with computers.
This is a noble ideal that will never actually fly in practice. I can protect my site against DDoS by correcting architecture issues with one small set of companies: the hosting providers that my site sits behind, and the computers and architecture that make my solution work.
You're proposing that I protect my site by rewriting the rules for internet across the entire planet and punishing every single visitor (thousands upon thousands!!) who doesn't play by some new arbitrary rules that we then have to get everyone to agree on.
No, the ISPs should not be made to correct this kind of behavior, because it will be an eternal game of cat and mouse, and we've proven that the attackers can get around said blocks quite easily. Heck, often the "attackers" are grandma and grandpa types that clicked on a bad link and didn't know any better. Instead, we're taking the right approach here: identify bad incoming traffic at the destination, and drop it before it hits the backing servers. That's a solution we can actually reasonably apply.
I don't agree with a lot of what Cloudflare is doing, and I really wish we had more than one service like it that was as popular as they are, but they are doing good work. They're solving a huge need within the industry. I believe there should be more competition in the space, but I refuse to believe that the overall approach is inherently bad when it obviously works.
The solution can't be to hurt innocent traffic because a malicious user has some of the bandwidth. I agree some ISPs are complicit in the situation, but tit-for-tat approaches ultimately will hobble ISPs and create an irate and a distrustful internet. Even if you do create a magical technical solution that solves all the challenges without hurting bystanders, then you face an even bigger challenge: the status quo.
There is no way to get from our current situation to the world you propose - there will never be a quorum from ISPs (or governments) on this sort of standard. It's a tragedy of the commons, and no single participant has enough leverage or interest in a new status quo.
As someone who has had their company targeted by persistent DDoS / ransoms and was unable to afford protection from anyone but Cloudflare, your statement completely ignores the enormous value that they provide and only focuses on the negative.
Centralization comes with risks, but I think the risk is absolutely acceptable in this case until we come up with a better, decentralized approach to attacks that normal people can afford.
I have a medium-sized website; for $20 a month they take 80% of the requests and bandwidth, so I don't have to pay for a dedicated server and someone to run it.
I love them very much.
If they fuck up too often, I can stop using them in 5 minutes. This is just perfect.
They both have revenue, just from different sources. The comparison isn't that bad. All VC-backed approaches (and public companies) eventually hit a growth cap, when their investors are going to expect unsustainable growth.
Companies with unclear or ad-based business models, like Twitter, are more likely to end up doing sketchy things to stay in business. Of course you can find exceptions either way but I think it generally applies.
That being said, I agree that centralization sucks, and I'm thinking about symbolically moving my tiny blog off Cloudflare for this reason. The ridiculous thing is that the origin is on GitHub Pages, so I'll have to move off there as well to be coherent.
I like to think of Cloudflare like insurance. Any single website may need it rarely if ever, but if it happens to you, you have little to no recourse that doesn't involve large sums of money.
Instead, you pay Cloudflare a regular, small amount of money to reduce the risk of having to pay a large sum of money in case you're targeted. This sounds almost exactly like insurance to me.
Some of my experience and solutions to these issues:
1) UptimeRobot [0] - I use it to monitor various client websites. The free plan checks every 5 minutes, which should be enough. Notifications can be sent to email, Slack, SMS, and many others. If you think there may be a problem only from some locations, make a quick check with [1]. If you suspect DNS issues, try [2] or [3].
2) Again, use UptimeRobot for monitoring devices publicly accessible from your network. Moreover, if you are in control of your office network, pfSense [4] notifications when a network gateway goes down work well (though that only helps if you have 2 or more ISPs). Or use a dedicated monitoring device/service like Zabbix.
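A self-hosted version of the same idea is a small script run from cron. Here is a minimal sketch in Python (stdlib only); the site URLs and the `notify()` hook are placeholders you would replace with your own sites and alert channel:

```python
# Minimal uptime check: fetch each URL, flag timeouts and non-2xx answers.
# SITES and notify() are illustrative placeholders, not real endpoints.
from urllib.request import urlopen
from urllib.error import URLError

SITES = ["https://example.com/", "https://example.org/"]
TIMEOUT = 10  # seconds per check

def check(url, timeout=TIMEOUT):
    """Return (url, ok, detail) for a single HTTP check."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
            return url, ok, f"HTTP {resp.status}"
    except URLError as exc:
        return url, False, str(exc.reason)

def notify(url, detail):
    # Placeholder alert hook: wire this to email/Slack/SMS as needed.
    print(f"ALERT: {url} is down ({detail})")

if __name__ == "__main__":
    for url, ok, detail in map(check, SITES):
        if not ok:
            notify(url, detail)
```

Run it every few minutes from cron and you get a rough, free approximation of a hosted checker; the obvious limitation is that it checks from only one location.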
3) Using a Twitter-to-Slack notification, subscribe to status updates both from the services you use and from the major services responsible for the Internet backbone. For example, GitLab goes down from time to time (even though they are improving); seeing a message in a dedicated Slack channel that the outage is already being worked on saves all team members from unnecessary debugging [5] :)
Not affiliated with any of these services. Still, I met the UptimeRobot guys some time ago - they are a small startup based in Malta, are very cool, and have a very stable service :)
Pro-active monitoring rather than reactive diagnosis.
Zabbix is an example piece of software; probably overkill for most, but I haven't used anything else in the last 5 or so years, so I don't have any better suggestions.
External monitoring from AWS or a colo.
Simple ICMP checks and TCP connects, plus possibly up to app-layer checks, allowed from these failsafes. Obfuscate as needed.
Uhh, monitoring those things? You can monitor your internet provider a number of ways; something like smokeping allows for an exceedingly simplistic test: ping stuff on the internet.
Logs will tell you if a client server or an individual service has died. There are literally hundreds of solutions for these.
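The "logs will tell you" part can be automated with almost nothing. A hypothetical sketch that flags a service as unhealthy when its recent log lines contain a burst of errors; the log path, marker string, and threshold are all made up for illustration:

```python
# Flag a service as unhealthy if its recent log lines show an error burst.
# The marker string and threshold are illustrative, not a real convention.
def error_burst(lines, marker="ERROR", threshold=5):
    """True if `marker` appears on at least `threshold` of the lines."""
    hits = sum(1 for line in lines if marker in line)
    return hits >= threshold

if __name__ == "__main__":
    with open("/var/log/myservice.log") as f:  # hypothetical path
        recent = f.readlines()[-200:]          # look at the tail only
    if error_burst(recent):
        print("myservice looks unhealthy -- check it")
```

It's crude compared to a real log pipeline, but as a cron job it catches the common case of a service dying loudly before anyone opens a dashboard.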
Check out the dip in requests to Reddit: http://www.redditstatus.com/