To everyone jumping to conclusions, remember that the words "server" and "configuration" can mean a whole host of things. It doesn't necessarily mean they mistyped their nginx config.
Exactly. Doing an upgrade to an internal email service is a configuration change. Scaling down a cluster is a configuration change. Mitigating a DDoS attack by implementing a firewall rule is a configuration change.
“Configuration” in this context is the high level system configuration, and can mean pretty much anything that falls under that.
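To make that concrete, here is a purely hypothetical sketch (every key and value is invented, nothing here reflects Facebook's actual systems) of what a high-level "configuration change" can look like:

```python
# Hypothetical sketch of a high-level "configuration change". None of these
# keys reflect Facebook's real systems; they just illustrate how broad the
# term is (service versions, cluster sizes, firewall policy, and so on).
old_config = {
    "mail_service_version": "2.41.0",
    "cache_cluster_replicas": 1200,
    "edge_firewall_blocklist": [],
}
new_config = {
    "mail_service_version": "2.42.0",               # upgrade an internal email service
    "cache_cluster_replicas": 900,                  # scale down a cluster
    "edge_firewall_blocklist": ["203.0.113.0/24"],  # mitigate a DDoS with a firewall rule
}

# The "change" is just this diff, but each entry can fan out to thousands of hosts.
diff = {k: (old_config[k], new_config[k])
        for k in old_config if old_config[k] != new_config[k]}
print(diff)
```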
One of the bigger impacts of this outage was on everyone who uses Facebook Login as a convenient OAuth option. A good thing for developers to remember if someone asks them to forgo a native login option.
...but, but, didn't they run the DevOps folks through coding challenges, sorting algos, and whiteboard coding before hiring them? I heard that's the number 1 way to ensure uptime at FAANG.
(Configuration changes, that's the source of my sarcasm)
WAT. A server configuration change? What kind of server configuration can affect presumably thousands of machines replicated across the globe? I'm trying to understand this.
Most failures of this type end up being a cascading resource exhaustion problem propagated by an un- or mis-analyzed feedback or dependency path. It is frankly amazing it doesn't happen more often.
I'm excluding the other common type of long outage, the head-desking "failover didn't work, backups are horked, it'll take tens of hours to restore/cold start" kind.
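As a toy illustration of that first kind of failure (all numbers invented): naive client retries can turn a modest capacity dip into total overload, and the feedback loop never lets the system recover.

```python
# Toy model of a retry-driven cascade: a modest capacity dip plus naive client
# retries pushes offered load past what's left, and the loop never recovers.
# Every number here is invented for illustration.
CAPACITY = 1000          # requests/sec the service can normally handle
BASE_LOAD = 800          # organic requests/sec
RETRIES_PER_FAILURE = 2  # each failed request is retried twice on the next tick

def simulate(ticks=8, capacity_dip=0.3):
    capacity = CAPACITY * (1 - capacity_dip)  # say a bad config removes 30% of capacity
    failed = 0
    for t in range(ticks):
        offered = BASE_LOAD + failed * RETRIES_PER_FAILURE
        served = min(offered, capacity)
        failed = offered - served
        print(f"t={t}: offered={offered:.0f} served={served:.0f} failed={failed:.0f}")

simulate()
```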
An organization Facebook's size isn't gonna be applying configuration changes to one server at a time over SSH, either. A server configuration can easily affect thousands of machines across the globe if it's deployed to them all.
Why did it take so long for Facebook to release the cause of the outage? If they are applying configuration changes at a large scale, shouldn't it be fairly easy for them to figure out what the cause was?
That's silly. Error rates show as elevated on https://developers.facebook.com/status/dashboard/ until 11pm Pacific yesterday. The @facebook Twitter account sent out a statement basically within an hour of the start of the next business day.
Possibly because it doesn't matter to us really. The postmortem will be interesting to read if they publish it, but otherwise - it stopped working. Time to explain it to the peanut gallery is better spent dealing with the actual issue.
The side-effects of such a thing might not be as easily reversible.
I've had to sit around waiting a couple hours for a Percona database cluster to re-sync after a major networking whoops, and it only had a few hundred gigabytes of data.
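For a sense of scale, a back-of-envelope with invented numbers (not the actual incident above) shows why even a few hundred gigabytes can take hours to get back in sync:

```python
# Back-of-envelope: why resyncing a few hundred GB of cluster state takes hours.
# These numbers are illustrative, not from the incident described above.
data_gb = 300            # state to transfer
effective_mbps = 400     # usable throughput after overhead and contention
verify_overhead = 1.5    # checksumming, applying writes, catching up replication

transfer_seconds = data_gb * 8 * 1000 / effective_mbps
total_hours = transfer_seconds * verify_overhead / 3600
print(f"~{total_hours:.1f} hours")   # roughly 2.5 hours with these assumptions
```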
This is fucking bananas. For nearly a decade, Facebook has been at the forefront of innovating how code is deployed at global scale. They presumably have gradual rollouts, automated rollbacks, anomaly detection, not to mention (I assume) loads of organizational safeguards in place to ensure this sort of thing never happens.
Something else happened. This was not a configuration issue. Edit: If it was, I'd expect a post-mortem post-haste.
Google also has all that and now and then their network explodes anyway when they do configuration changes. :)
Certain configurations at a big enough scale are dangerous, simply because you can hit a terrible corner case when you roll out the change to 50% of capacity and lose all of it so fast that your magic automatic rollback is pointless, because your infrastructure is already burning.
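A hedged sketch of that race (everything here is hypothetical, not anyone's real rollout system): automatic rollback only helps if there is still healthy capacity to roll back onto by the time the health check fires.

```python
# Sketch of a staged rollout with automatic rollback, and why it can lose the
# race: if a corner case takes out the stage's capacity within one health-check
# interval, there may be nothing left to roll back onto. All logic hypothetical.
STAGES = [0.01, 0.10, 0.50, 1.00]   # fraction of the fleet touched per stage

def rollout(apply_stage, healthy_fraction, rollback, threshold=0.95):
    for fraction in STAGES:
        apply_stage(fraction)
        # ...wait one health-check interval; the corner case can burn through
        # the stage's capacity entirely during this window...
        if healthy_fraction() < threshold:
            rollback()          # correct, but possibly pointless by now
            return False
    return True

# Toy run: the change looks fine at 1% and 10%, then wipes out the 50% stage.
state = {"applied": 0.0}
ok = rollout(
    apply_stage=lambda f: state.update(applied=f),
    healthy_fraction=lambda: 1.0 if state["applied"] < 0.5 else 0.4,
    rollback=lambda: print("rolling back, but half the capacity is already gone"),
)
print("rollout succeeded:", ok)
```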
> This is fucking bananas. For nearly a decade, Facebook has been at the forefront of innovating how code is deployed at global scale. They presumably have gradual rollouts, automated rollbacks, anomaly detection, not to mention (I assume) loads of organizational safeguards in place to ensure this sort of thing never happens.
> Something else happened. This was not a configuration issue. Edit: If it was, I'd expect a post-mortem post-haste.
I've worked on automation projects at a large scale, and Facebook uses an unusual and clever method to deploy their software: BitTorrent.
I can only speculate about why FB went down yesterday. But if you understand that it's being deployed via BT, you can see that there's the potential to have a lengthy rollback window.
I.e., this isn't like uninstalling a single RPM; this could have impacted a significant fraction of their fleet of systems across multiple datacenters, and if so, the amount of data they'd need to move to roll back could have been tremendous.
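Purely illustrative arithmetic (every number is a guess, not Facebook's real fleet or artifact sizes) for why a fleet-wide rollback can still take hours even when the bits are distributed efficiently:

```python
# Invented numbers: why a fleet-wide rollback has a long tail even when the
# bits are distributed efficiently. Hosts still have to fetch the old artifact,
# drain, swap, restart, and re-warm caches, and you can only touch a slice of
# the fleet at a time without hurting serving capacity further.
hosts = 100_000
artifact_gb = 2
print(f"data to move: ~{hosts * artifact_gb / 1000:.0f} TB")

per_host_minutes = 10     # fetch + drain + restart + warm-up, per host
batch_fraction = 0.05     # only 5% of the fleet at once to protect capacity
batches = 1 / batch_fraction
print(f"rollback wall-clock: ~{batches * per_host_minutes / 60:.1f} hours")
```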
I totally agree with all this, and I'm completely open to a valid technical explanation here.
My initial comment is/was admittedly a bit reactive, aimed more at the general tone of their explanation than at the likelihood of a legitimate technical cause. This wasn't one service -- every product was down for nearly 24 hours, and their explanation is basically, "uh, yea, it was a...um...configuration issue." The terseness of that explanation, in my opinion, is insulting to the millions of people and businesses that rely on Facebook to get information and operate their businesses.
It would take a computer the size of facebook to perfectly plan how a change will actually affect facebook.
Nobody actually spends the money to do that. They all wing it at some level or another. They're just winging it at a scale vastly more massive than the hundred or thousand computers most people manage.
Source: I worked at Amazon back when managing 30,000 servers was a lot, and I can extrapolate.
How did you determine that Facebook leads this space? I recently read an article about how Facebook distributes RPMs internally and it struck me as the kind of thing an insane person might have invented fifteen years ago. I mean, NFS in front of glusterfs? Also, RPMs???? Talk about bananas.
It has a bunch of well-built and well-supported tooling, including dependency management, dependency resolution, and versioning /s
Nothing. I quite like using them to deploy applications. If you package them right and build your deployment system correctly, they're not the worst way to do things.
I'm mostly guessing based on what I've read over the years. They've published a hefty corpus of work regarding their deployment infrastructure and greater code review/quality approach.
If not a configuration issue, care to speculate what actually happened? There could've been malicious intent, whether on the part of an internal or external actor --and it really could be either-- given the amount of criticism Facebook has drawn in the past few years. But perhaps Facebook's PR statement would've addressed that, had it been the case.
We likely won't even have an internal post-mortem for at least a couple weeks. There's no way you can possibly expect a full breakdown of what went wrong at this scale less than 24 hours after it gets resolved.