The thing that really sucks for the TV show is that it wasn't even their own employee. It was an employee of their data hosting company. You know... the guys you're supposed to outsource to so that this stuff doesn't happen.
Actually, big corporations outsource to transfer risk and not because it's cheap. To compliance/legal departments, paying exorbitant fees to qualified vendors is saving money in their risk analyses.
Exactly. If you don't have expertise in house, then it can be cheaper to hire that task out to an outside company rather than hire someone to manage it in house. This sure sucks for the TV show, but at least they have someone else to hold accountable. If this had happened and they handled their own IT, there wouldn't be anyone but themselves to hold accountable. And you can bet that means a lawsuit with damages.
How so? They were negligent. Beyond that, you're also assuming the client didn't have lawyers--there's no way this contract gets signed off on without the vendor absorbing the risk.
If all the legal work was done properly, the vendor would end up in court with their insurer.
Yeah, well you see how well it worked out in this instance. They transferred risk to a party incapable of mitigating that risk.
I think maybe the TV producer's lawyers think that suing the internet host into oblivion will get their TV show back?
If this was indeed a risk transfer outsourcing agreement, everyone who was involved in vetting and approving the hosting contract should be fired and possibly sued for negligence.
Unless "back up our data" is part of the hosting contract, the host is off the hook.
There are up to two fools here:
- If the hosting company's contract doesn't disclaim responsibility for data backups, then they deserve to be sued for all they've got.
- The TV producer should keep their own backup. Trusting a third party blindly with your most precious data is simply stupid, and unworthy of anyone involved with IT in this day and age. I would back up data even if the host did have a contractual obligation to back up my data. Shit happens; it's not a matter of "if", but of "when".
Redundancy - by definition - cannot be supplied by a single company.
Yes, your vendor may have their own redundant systems in place so that their internal problems don't become your problems. And that's nice. They should. But you always have a Plan B. And you always keep a local copy.
What I'm saying is that whatever is in the contract is what matters. Whether it's called data hosting, web hosting, or data storage matters little.
EDIT: Case in point: we probably all agree that Dropbox could be called "data storage", but they specifically say that backups (and backup costs) are the client's responsibility (http://www.dropbox.com/dmca#terms), not theirs.
Surely the data company has redundancy or backups or data recovery software available? Surely they considered the risk of a staff member deleting copies / a hacker / a fire.
Surely the producers have at least working copies somewhere (I can't see people live-editing on a cloud server). Surely they considered the risk of the company going bust / data loss / their own staff deleting the copies.
I'd assume this was an April Fools' joke, it's that bad (except for the fact that it quite obviously isn't, given the reputation damage that will effectively destroy this company).
>"According to a lawsuit that was filed last week in Hawaii District Court, a man named Michael Scott Jewson was terminated from CyberLynk. From his parents' residence, he allegedly accessed CyberLynk's data and intentionally wiped it out. Jewson is alleged to have been charged in February with a federal computer crime violation and admitted his guilt in a plea agreement.
>The data breach allegedly knocked out 6,480 WER1 electronic files, or 300 gigabytes of data, comprising two years of work from hundreds of contributors globally, including animation artwork and live action video production."
These sorts of reports often overdo it with the use of "allegedly" as an attempt to mitigate any libel claim.
"Jewson is alleged to have been charged"
Surely it's a matter of record whether he was charged. If the reporter can't even confirm this then they don't have a story.
"The data breach allegedly knocked out 6,480 WER1 electronic files"
Again, they don't need that "allegedly"; it's presumably a reported fact. The allegation they have to be wary of is the one that names the person responsible.
While I agree with you on your first point (that the reporter shouldn't have to use the word "allegedly" if he/she can confirm the man was charged), personally, I think using "allegedly" in the sentence about the files being knocked out is warranted. "The data breach" is referring to the data breach the man *allegedly* carried out.
TV news reporters != attorneys. So when in doubt, attribution is your best friend.
The data breach he allegedly carried out is reported as a known quantity. The only allegation that needs mitigating is that it was this person that carried it out. If one is unsure of the details then it would be "reportedly" or an "unconfirmed source stated" or similar, no?
The problem is that the reporter doesn't know that the details of the breach mentioned in this article are facts. The figures, that there were 6,480 files totaling 300 GB of data, likely came from the prosecution. So the reporter needs to get himself off the hook and either attribute it ("the prosecution says") or use words like "allegedly," "reportedly," etc.
No, I think what trafficlight meant was: how can there be only a single copy of this material? What about the media they actually shot the episodes on? Surely there must be some kind of copy on the cutting room computers? It looks as if they're mainly out to collect damages from the company.
>The data breach allegedly knocked out 6,480 WER1 electronic files, or 300 gigabytes of data, comprising two years of work from hundreds of contributors globally, including animation artwork and live action video production.
Sounds like they hosted everything in the "cloud". Consider what happens when you edit a Google Doc: there is no offline backup, just whatever Google provides. So sure, they are not at zero, but plenty of stuff can be and was lost.
That's what they claim, sure. But does it really work like that? It seems to me that there would have to be local copies of those videos, if for no other reason than synching production quality videos to the cloud in realtime is probably unrealistic. Maybe some HNer working in the TV industry can elaborate?
From what little video editing I've been involved in, we had source material in 2 or 3 different places, plus multiple edits of said material, plus rough edited timelines, plus the final product.
And keeping everything in the cloud would be insane. In our shop every video editing workstation had its own RAID array just for realtime editing. In a large shop you'd have a SAN of some kind.
Good production practice isn't that different from good practice anywhere; 3-2-1 is baseline. That is to say you keep - at minimum - three copies of everything, two local and one remote.
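To make that concrete, here's a minimal 3-2-1 sketch in Python. Every path, disk, and host in it is a made-up placeholder, and it assumes rsync over ssh is available for the off-site leg:

    #!/usr/bin/env python3
    # Minimal 3-2-1 sketch: three copies, two local, one remote.
    # Every path, disk, and host below is a hypothetical placeholder.
    import shutil
    import subprocess
    from datetime import datetime, timezone

    SOURCE = "/mnt/projects/show"                # working copy (copy #1)
    LOCAL_MIRROR = "/mnt/backup_disk/show"       # second local copy on another disk (copy #2)
    REMOTE = "backup@offsite.example.com:show/"  # off-site copy (copy #3)

    def backup():
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        # Copy #2: snapshot the working tree onto the second local disk.
        shutil.copytree(SOURCE, "%s-%s" % (LOCAL_MIRROR, stamp))
        # Copy #3: mirror the same tree off-site (rsync over ssh assumed).
        subprocess.run(["rsync", "-a", "--delete", SOURCE + "/", REMOTE], check=True)

    if __name__ == "__main__":
        backup()

Run something like that from cron and actually test a restore now and then; a backup nobody has ever restored from is just a hope.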
(Warning: anecdotal evidence)
Having worked in China on a number of startups, alongside people who have had to fire employees, I can say it's often common practice to just lock fired employees out of their systems. I've had this happen to people I know and had to do it once when letting a programmer go (which is nearly impossible to do after a probationary period due to the labor laws in the country: it's very, very difficult to fire somebody, and the company has to provide a reason, pay social benefits, or face the employee in court--forget it if you're a foreign company).
I have heard stories of people being let go, only to find that all of the work that employee did for the last month or two was deleted, source missing, machines just wiped clean.
I read that title and kind of laughed because, well, it is a seemingly commonplace response in some parts of the world.
In my case I gave a month's notice that I was resigning and was locked out immediately. Imagine how much it sucks to come into work every day with nothing to do except go to lunch. When I wasn't offering the other developers advice on the code I had worked on or APIs I was familiar with, I was just shooting the breeze.
This was the hole in the policy. It was designed to keep disgruntled employees from damaging the system but the ones leaving on good terms were treated exactly the same.
This is more/less common in certain industries, and perhaps partially based on regulatory issues too, although I'm guessing there.
I have worked on financial software for the majority of my career, at one point at a Wall St. investment banking firm. The very explicit protocol there was that if you were going to leave, the two-week notice was customary, but you would probably be locked out of everything instantly and would be around for hands-off consulting at the whim of your manager for the duration. It was all handled very professionally and in a friendly manner, at least for me, and they let me keep working for the two weeks since anything I touched had a LOT of layers to go through before it ever saw production data.
Same industry for me, but in the trading area. It was standard for pretty much all firms that we were escorted off the premises immediately. I was touched that they let me stay for half-an-hour to say goodbye to people ;). Like you, it was done professionally and friendlily.
This is one thing which can happen when you put all your data into "the cloud" and hope for the best. One risk which any business has to take into account is the possibility of a rogue employee or ex-employee running amok. Putting all your eggs (or data) into one basket, especially when that basket is administrated by people who don't particularly care either way what happens to your business, may not be the wisest strategy.
I assume this is them: http://www.cyberlynk.net/ with the late-'90s "new" graphic next to the off-site data backup offering. I hope the show producers sue someone: either CyberLynk, if they claimed they did redundant backups, or their IT guy/company for setting them up with such a bum deal.
2D animation doesn't take up much space. Beyond a few painted high-res backgrounds, most files are small, and a lot of things are procedurally generated and take up almost no space at all. I worked on a 2D animated movie 10 years ago, and the data needed to render the final movie weighed in at less than 100 GB (and that was 2-3 years of work by 30-40 people). Also, I understood from the story that they didn't lose everything, just enough to make it impossible to recreate the episodes without basically starting from scratch.
That employee may be going to jail, but it's not his fault that this happened. If your fired employees have access to your servers, you fucked up the firing horribly. Similarly, if one data center going offline can take out your entire season of work, perhaps you need to learn what redundancy is.
This is wrong, and it's a class of error I see frequently, especially in political commentary, from both the left and the right.
The underlying incorrect assumption is that responsibility is mutually exclusive.
In many cases, responsibilities are divided cleanly between people, but this is an organisational convenience, not a moral law. There are cases where it is inescapable that more than one person has the same responsibility. Consider an army sergeant: he may assign a task to a soldier, but still has the responsibility for seeing that it is done. This is true for all leadership positions.
Much political discourse revolves around who has responsibility, and frequently one sees it argued that A is not responsible because B is, often in cases where this is incorrect. Eg, employer versus employee, individual versus government, etc etc.
The error is made in both directions, depending on the political beliefs of the arguer: an individual is not responsible because the organisation is, or the organisation is not responsible because the individual is.
A particularly dumb one, in my opinion, is the argument that government is not responsible for something because individuals are. In my view, all the valid responsibilities of government are derived from responsibilities of the individual: the responsibility to defend the nation is derived from the individual's responsibility to protect him/herself and family, etc.
This is called the Fundamental Attribution Error. As the name suggests, it is a foundational part of human social psychology and it has been studied for years.
Personally, I'm convinced it is one of the cognitive flaws we need to keep us from going teeth-gnashingly insane after ten minutes' exposure to reality.
Obviously. But the cognitive mistake is the same. This is not me getting on my soapbox; it is a point of view held by psychologists, and it's there in the literature.
Fine, I'll rephrase. If the hosting company took the simplest possible precautions, then this would not have happened. You can't trust people that don't work for you.
Responsibility is one of those concepts that we humans made up and spend a lot of time uselessly arguing over. Other examples are whether an action was "justified", whether someone "deserves" something, who gets the credit for stuff, etc.
Examples I'm less sure about: Who "owns" something, whether something is "clean" or "dirty", whether two things are the "same", whether something is "normal" or not. I collect these; if anyone has more to share I'd love to hear them.
I really don't get this philosophy. I've seen this a lot recently in various discussions on HN, and I'm honestly dumbfounded. How do you justify this line of thought, that negative action against someone, done deliberately, is not the perpetrator's fault?
I mean this completely academically, so please don't take it as an insult (I'm genuinely curious) - but was this perspective something you were raised with or did this develop out of experience / general personal exploration of the world?
As a producer, I can safely say that your entire job is based around one rule: the show must go on. That means you have backups. And your backups have backups. And your backups' backups have backups.
Is it your fault when bad things happen? If you're good at what you do, probably not - especially when you consider the AMAZING number of things that can go wrong on any given day. However, it IS your fault if the bad things that inevitably happen cause the show to fail because you had no plan B.
Maybe it's a disgruntled employee with a set of passwords he really shouldn't have. Or maybe it's an outbreak of anthrax (yeah, I was on a job where that was suddenly an issue). Perhaps you're shooting on a beach and a dead body randomly washes up, suddenly turning your carefully chosen location into a crime scene crawling with cops and coroners (another true story / very rough day.) God forbid it's something really awful, like a bike messenger carrying an important batch of media getting hit by some asshole who was answering his phone instead of paying attention to the road, landing the kid in hospital, and turning your package into collateral damage (horrific moments like this, by the way, make the job seem suddenly unimportant).
Regardless, the show must go on. And if you have a good producer, it will. But losing TWO YEARS' worth of work because one data center (effectively) went offline? That's seriously embarrassing. Also, very bad for credibility. And yes, reason enough to lose your stripes.
I think any professional can agree that if you're the guy at the top making decisions, everything is your responsibility. Taking the fall for something that goes wrong comes with the territory.
That's why every CEO is responsible for everything that goes on in their company, whether or not they knew every little thing that was going on. It's their responsibility to know, and put processes in place to ensure that they know, and if something still slips through the cracks, own up and take action to fix (or take the fall if it's a big enough issue).
What I think people are missing is that this doesn't absolve the perpetrator of moral, legal, and professional responsibility for a clearly malicious act. So two parties are at fault and should experience the pain in different ways. Fact remains, they should both experience pain.
I can certainly understand the perspective that ultimately the responsibility for the loss will have to be borne by the folks who didn't do their homework or didn't take enough care in their backup strategy.
However, that wasn't the assertion that I was intending to respond to :) I've seen this "it's not the perpetrator's fault" argument used several times in the last few weeks; the most recent I remember was the kid who hacked the PHPFog site. The same argument was used there, and indeed was spouted off by the kid himself - it wasn't the hacker's fault the site got hacked.
This line of reasoning seems dangerous to me, as it obscures a criminal or unethical activity by the ultimate result of that activity. Wrong action is wrong action, regardless of who didn't cover their bases.
Should the producer take more precautions, and will they ultimately be burned? Most assuredly, but let's not forget the reason they were burned in the first place - somebody maliciously acted against them.
Totally agreed. The incompetence of the producer is a problem for the producer's backers. It has no bearing on the obvious wrong done by the ex-employee.
A co-worker of mine once wiped out the entire CVS repository at the company we worked for, because a symbolic link had been placed in his home directory and he thought it was ok to 'rm -Rf' its contents. We were able to build a new repo based off of production, and everyone who had changes not in production had to re-introduce those changes, but it was not the developer's fault for wiping out CVS. It was because the sysadmin had put the symbolic links to the primary repository there. If you enable developers to do something bad, it isn't always their fault when they do it.
Having intent to destroy something on purpose is a different matter though, and I'm not sure that there is a good argument for it.
Did the sysadmin also forget to create a backup for the repo? I don't think it's a stretch to assume that the developers were somewhat negligent in working on a single repo that can disappear at any time.
There was no backup of the CVS repository. The production server saved us since we were able to treat it as the definitive backup. This was before Git and the concept that everyone had their own copy of the entire repository.
There is a sort of half intent by the developer. Yes, the developer had intended to delete the repository that was in his home directory, but he had no idea that it was actually a link to the master repository.
The developer had hard links or symbolic links? It must have been a hard link to do that type of damage. I find that really dangerous, and that's why I never use them. Unless you are using an obscure version of UNIX, rm -rf should delete the link and not the contents of the folder it points to.
A hardlink would be safer in this case. If there are two hardlinks to the same file and you rm one of the hardlinks, the other hardlink still works and the file itself isn't deleted.
A symlinked directory, though, on the receiving end of a recursive rm command, just might result in rm traversing into the target directory and deleting everything inside it.
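If anyone wants to see the difference without risking a real repo, here's a throwaway Python sketch (it only touches temp directories it creates itself); treat it as an analogy for the rm behaviour being discussed, not a claim about what that particular rm invocation did:

    import os
    import shutil
    import tempfile

    base = tempfile.mkdtemp()
    repo = os.path.join(base, "repo")   # stands in for the master repository
    home = os.path.join(base, "home")   # stands in for the developer's home dir
    os.makedirs(repo)
    os.makedirs(home)
    with open(os.path.join(repo, "history.txt"), "w") as f:
        f.write("precious history\n")
    link = os.path.join(home, "repo_link")
    os.symlink(repo, link)

    # Deleting the link itself is harmless: os.unlink(link) would only remove
    # the link, and shutil.rmtree even refuses to run on a symlink.
    try:
        shutil.rmtree(link)
    except OSError as err:
        print("rmtree refused:", err)

    # But a recursive delete that walks *through* the link is operating on the
    # repository's real files - the dangerous case described above.
    for dirpath, dirnames, filenames in os.walk(link, topdown=False):
        for name in filenames:
            os.unlink(os.path.join(dirpath, name))
    print(os.listdir(repo))   # [] - the repo's contents are gone

    # A hard link, by contrast, is just a second name for the same file:
    # removing one name leaves the data reachable through the other.
    original = os.path.join(base, "a.txt")
    with open(original, "w") as f:
        f.write("still here\n")
    alias = os.path.join(base, "b.txt")
    os.link(original, alias)
    os.unlink(alias)
    with open(original) as f:
        print(f.read())   # "still here"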
I wouldn't say that it's not the employee's fault, but if you are suggesting that it's other peoples' fault as well, I'd definitely agree. In an incident like this a lot of people have to have been very negligent.
Somewhat different situation, but after having quit a Silicon Valley startup after its acquisition, I still have access to a lot of its network. I'm sure it's a common occurrence. Shutting people completely out of a network is really hard when there are so many different accounts to close (Unix access on all boxes, mysql, cms, email, bug-tracking, apple dev accts, irc, support forums, etc. etc.), especially doing so without deleting data (which you might need in the future, to check on something that the employee worked on, or an old email containing documentation, etc…)
Rather, I think the issue is the lack of a good backup strategy. Off-site, automated backups should have been in place. I've seen someone `rm -rf /var/mysql` and, after a heart attack, recover it. Your system should be rm-proof!
You may want to find a way of notifying them without alerting them that it's you. That way, if something goes awry down the road and one of your accounts is involved (say an old username that gets brute-forced), you don't get something pinned on you.
1. Don't use per-machine unix accounts. Use something like NIS or LDAP. Now you can easily disable someone's account on all boxes (see the sketch below). The bonus is that their home directories still stick around (though you can't use aliases like ~username to access them).
2. Use an email system where you can disable a user without deleting all of their emails.
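For point 1, here's a minimal sketch of what "disable them everywhere at once" can look like, using the ldap3 library. The server address, admin credentials, user DN, and choice of attribute are all hypothetical and depend on how your directory is actually set up:

    # Hypothetical example with the ldap3 library; swap in whatever DNs and
    # lockout attribute your directory really uses.
    from ldap3 import Server, Connection, MODIFY_REPLACE

    server = Server("ldaps://ldap.example.com")
    conn = Connection(server,
                      user="cn=admin,dc=example,dc=com",
                      password="admin-password",
                      auto_bind=True)

    # One change in the directory; every box that authenticates against it
    # picks it up, and the user's home directory and mail are left intact.
    conn.modify("uid=former.employee,ou=people,dc=example,dc=com",
                {"loginShell": [(MODIFY_REPLACE, ["/usr/sbin/nologin"])]})
    conn.unbind()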
What I'm saying is that relying on crazy people to not go crazy is not a good data loss prevention strategy.
(In the end, putting someone in a situation where they'll do something bad is worse than being the person that does something bad. A good example is the Colgan Air crash from a few years ago. Anybody knows that when your plane is stalling you're supposed to push the control column forward and apply full engine power. The plane does the pushing the control column forward part automatically! But for some reason, the trained airline captain did the exact opposite, stalling the plane and killing everyone on board. Why? Because he got paid so little that he couldn't afford a hotel room to sleep in before his work that day, and had to commute across the country and then work a full shift. This is the airline's fault for impairing someone to the extent that he killed 50+ people by doing the exact opposite of what any trained pilot would do. $300 for a hotel room and everyone on that flight would still be alive today.
Similarly, when you fire someone that works with critical data, you need to make sure the guy's access is revoked. It's a precaution that needs to be taken, as this case shows. When necessary precautions are omitted, bad things happen. Planes crash, and important data goes away forever.)
What I'm saying is that relying on crazy people to not go crazy is not a good data loss prevention strategy.
Except companies that believe this are impossible to work for: there is so much process and policy to prevent anyone doing any harm that no one can get any work done either. At the end of the day, you have to sort out all the trust issues before you give someone the root password; then you just have to trust them to get on with it.
"What I'm saying is that relying on crazy people to not go crazy is not a good data loss prevention strategy."
Upvoted for exactitude. Also, it's worth remembering that virtually no one hires already disgruntled employees. That's not to excuse anything that the truly disgruntled do to retaliate. But it does suggest that an abusive employer represents a risk to everyone relying on their 'teams'.
EDIT: I don't think the pilot was retaliating, by the way. What he was subject to is a different kind of abuse.
Separately, when two parties are in a fight, it's always worth remembering who started it. If blame needs to be shared, and it's unfair to share it evenly, the unprovoked aggressor deserves a special measure of opprobrium. Typically, this is the party with more power, not less, for the simple reason that it's a lot harder to abuse a lack of trust and authority.
I understand your point but the difference in the Colgan case is that the pilot made a tragic mistake, whereas the disgruntled employee deliberately wiped the show. Yes, his employer is careless for not revoking his permissions, but the question remains - how is this not his fault?
Because when the company has to man up and explain the reason for the fuck-up to the stakeholders, they should not be pointing fingers at a person who was not even on their payroll.
It was someone's job to keep this from happening, or they do not have people covering such cases... either way, someone other than the fired employee (a person, or the company as a whole) messed up for the business/customers, and they are liable/answerable.
Because people fail... all the time. Good, Smart, Sincere people fail. (not in this particular case, though)
This is just how things are. We all have heard numerous stories of rm -r / or drop database or something similar. Hell, the whole widely accepted idea of "bugs" in the software industry is based on the fact that people WILL do something wrong. Not because they are bad people or want to do bad things, but because to err is human.
So your process should not be designed around the idea of people always doing the right thing.
You judge without knowing enough details to do so. For all we know, the contract with the data center may have included regular backups, including off-site backups. In such a case, as a company, you've done everything you could and should have, and the data center is entirely to blame if the employee managed to destroy the backups as well.
This is surprising; it would be like putting my web app's code/database on one live server without a local copy and possibly multiple backups elsewhere. How can a TV production company justify this? I mean, it would cost just about nothing relative to production costs to put 300 GB on S3.
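For scale, "put it on S3" is roughly this much code with boto3 (the bucket name and local path are made up, and AWS credentials are assumed to already be configured in the environment):

    import os
    import boto3

    BUCKET = "show-offsite-backup"   # hypothetical bucket
    ROOT = "/mnt/projects/show"      # hypothetical local tree to protect

    s3 = boto3.client("s3")
    for dirpath, _dirnames, filenames in os.walk(ROOT):
        for name in filenames:
            local_path = os.path.join(dirpath, name)
            key = os.path.relpath(local_path, ROOT)
            s3.upload_file(local_path, BUCKET, key)

Retries and big video files are the fiddly parts, but boto3's upload_file already handles multipart uploads for large objects, so none of this is exotic.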
You're assuming that the production company can afford to have someone around who knows how to put data on S3. My guess is that they outsourced this to the company that had the fired employee. Still, they should have some kind of original files on some other medium somewhere... but who knows, maybe their production workflow revolved around this data host. In which case they are really borked.
We have tons of decades-old video alive and well today. They may not have expertise with S3 but if they don't have the expertise to ensure robust backups, they're doing it wrong.
Yeah, they may not, but you have to weigh up the cost of getting a contractor in to set up an almost fail-safe system against losing probably millions of dollars of production.
I would assume it's more about people assuming something like this would never happen, and so never justifying the outlay of protecting against it.