I'm pretty sure they had around six months, give or take a day or so, of "oh shit" at Intel. Of course, Intel may have actually simply broken the glass on a dusty old plan of action: "In the event of ..."
I have had the misfortune of having to pull them out twice in my career - in both cases they offered little in the way of guidance for the particular situation that came up.
The set of unknown unknowns that typically get missed makes most of them useless in all but the narrowest of cases, because many companies write them and then forget them. Especially if they are as large as Intel.
Very true, although I'm glad to say I have not had to break out one of my own for real yet. My first experience of a full-on DR test was pretty humbling - NetWare servers backed up by the Unix troops via Legato. It turned out that the backups were good but restored at a pathetically slow speed (no reflection on the Unix systems, but I suspect the Novell TSAs were a bit shag at the time). We updated our "time to restore" estimates and moved on, after folding in one or two other lessons learned.
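If anyone wants to revisit their own "time to restore" number, here is a minimal back-of-the-envelope sketch - the data size and throughput figures are made up for illustration, the throughput should come from an actual test restore, not the vendor's brochure:

    # Rough "time to restore" estimate - illustrative figures only.
    data_gb = 500                 # total data to restore (assumed)
    measured_restore_mbps = 80    # throughput observed in a real test restore, megabits/s (assumed)

    gb_per_hour = measured_restore_mbps / 8 / 1024 * 3600
    hours = data_gb / gb_per_hour
    print(f"Estimated restore time: {hours:.1f} hours")

At those made-up numbers that's about 14 hours, which is exactly the sort of figure you want to discover in a test rather than in an outage.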
Do test your plans (this is not aimed at you personally, zer00eyz - you probably know better than most).
There are a lot of unknowns, but the basic model of a real DR plan is pretty sound these days, if you can afford it or wing it in some way. An example:
Another site, a suitable distance away. On that site there is enough infra to run the basics - wifi, a few ethernet ports, telephony etc. There should also be enough hypervisor capacity and storage for that. Some backups are delivered there as well as on site. Hypervisor replicas are created from the backups (or directly) depending on RPO requirements and the bandwidth available. The only thing that should be able to routinely access the backup files is the backup system (certainly not "Domain Admins" or other such nonsense). Ensure that what is written is verified.
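A minimal sketch of the "depending on RPO requirements and bandwidth available" part - can the second site actually keep up? All of the figures below (RPO, delta size, link speed) are hypothetical placeholders, not anything from a real setup:

    # Can each backup/replica delta reach the DR site within the RPO?
    # All figures are hypothetical placeholders.
    rpo_hours = 4                  # how much data loss the business will tolerate
    delta_gb_per_cycle = 120       # average changed data per replication cycle (assumed)
    link_mbps = 200                # usable bandwidth to the DR site (assumed)

    transfer_hours = (delta_gb_per_cycle * 1024 * 8) / link_mbps / 3600
    if transfer_hours <= rpo_hours:
        print(f"OK: delta ships in {transfer_hours:.1f}h, within the {rpo_hours}h RPO")
    else:
        print(f"Problem: delta needs {transfer_hours:.1f}h, RPO is {rpo_hours}h - "
              "replicate more often, ship less, or buy more bandwidth")

It's crude, but running that sum once a quarter catches the slow creep where the data outgrows the link long before a failover does.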
The company in question had a rather large on-site server room (raised floor, fire suppression) and a massive generator to deal with any power issues, as well as redundant connectivity. This room was literally the backup in case their "real" data center went offline.
The problem was that the room was "convenient", so there were plenty of things that lived ONLY there (mistake one) -
When the substation for the office went and the generator started, everything looked fine. The problem was that no one had ever run the generator for that long... after a few hours it simply crapped out (overheated, mistake two).
A quick trip to Home Depot got them generators and extension cords that let them get the few critical boxes back up - however, one box decided to not only fault, but to take its data with it.
This is when I got a rather frantic call "did I still have the code from the project I did?" - they offered to cut me a check for $2000 if I would go home right then and simply LOOK for it.
Lucky for them I had it - and the continuity portion of the DR plan got revisited.
In hindsight, after I said I had the code, I probably could have asked them to put another zero on the end of the check and they would have done it, just to be a functioning business come 6am.
I didn't even have to show some leg to get you to recant the dit.
Thank you - I'm happy to listen to (nearly) everything.
"I probably could have asked them to put another zero" - ahem that's not the IT Consultant's Way exactly. We have far more polite ways of extracting loot. We are not lawyers and should have morals.
"I have a file with 900 pages of analysis and contingency plans for war with Mars, including fourteen different scenarios about what to do if they develop an unexpected new technology. My file for what to do if an advanced alien species comes calling is three pages long, and it begins with 'Step 1: Find God'."