> I think you're wrong to ignore the consequences of the actions as an input to the punishment.
This equivocates on "consequences" of actions, though. It's obvious that the consequences of hitting F7 before the incident were understood by all responsible to be low enough that any intern could be expected to make the right decision. After the incident, the consequences of hitting F7 were sharply increased such that no future intern would ever be allowed to make that decision. But then you can't make an argument that assumes "consequences" were the same at both points in time.
We make this fallacy all the time, probably because we're designed by evolution to reassess the morality of an action based on its consequences. It works as a social heuristic for shaming or rewarding people, but it makes no rational sense that the morality of an action should retroactively change based on future consequences. You can see similar behavior in our rewarding athletes for profound genetic advantages, or punishing criminals for profound genetic deficits. The consequences somehow redeem or condemn, and they should do neither.
No, the consequences were the same before and after the incident: a total system reboot. The varying factor was temporal: it was a low-risk action when the office was empty and a high-risk one when the office was full.
The negligence on the intern's part was to make decisions and act without regard for risk, as if he were in the low-risk window, despite the evidence he was actually in the high-risk one (all the already-active PCs).
It makes perfect sense that the punishment should reflect the inappropriate regard given to known consequences. That's what negligence is.
> No, the consequences were the same before and after
I'm talking about the perceived consequences, not the actual consequences. The fallacy here is to perceive low consequences at one point in time, perceive high consequences at a later time and then try to change history such that low consequences were never really perceived.
> The negligence on the intern's part was to make decisions and act without regard for risk, as if he were in the low-risk window, despite the evidence he was actually in the high-risk one (all the already-active PCs).
He was in a perceived low-risk window. The consequence of an accidental reboot had already been figured in and was perceived to be low. Else why would the F7 key be next to F6? It is certainly unfair to expect someone to perceive high risk when everyone else perceives low risk.
> It makes perfect sense that the punishment should reflect the inappropriate regard given to known consequences. That's what negligence is.
The perceived consequences were low-risk; therefore the known consequences were low-risk.
... because he was negligent. "Oh, all the computers are already on? That only happens when Washington's waiting on something. Oh well, I'll carry on like this was any other low-risk morning."
> Else why would the F7 key be next to F6?
Same reason why "rm -rf " is one keystroke away from disaster. Perceived risk has nothing to do with it.
> "Oh, all the computers are already on? That only happens when Washington's waiting on something. Oh well, I'll carry on like this was any other low-risk morning"
Because those situations were also low-risk mornings. He had only seen that pattern when people left late the night before. He had no reason to expect that people would be working early in the morning, because that situation had never occurred. Further, a secretary playing a computer game in the morning suggests business as usual, with no one working.
> He was in a perceived low-risk window. ... because he was negligent
No, someone else set up the computers and software with F6 and F7 command functions side by side and then evaluated the entire network as low-risk for interns under all situations. It is perfectly reasonable for an intern to take the same low-risk perspective as his superiors.
> Same reason why "rm -rf " is one keystroke away from disaster. Perceived risk has nothing to do with it.
Perceived risk has everything to do with it. It is inconceivable today that an intern would have unrestricted access to a company's file system and be literally a few keystrokes from disaster. The key reason is that perceived risk is now much closer to actual risk. In 1983, no one had a clue about the kinds of things that could go wrong. Understanding real risk is a painstaking process requiring time, trial, and error.