> If someone else is more qualified or better able to make a more informed decision than you are, then let them!

That is called delegation of responsibility. If you are not qualified to make a certain professional decision or if such a decision exceeds your delegated responsibility then you need to bring somebody else in, possibly a supervisor. That consideration is itself a decision and it should not be delayed.

https://en.wikipedia.org/wiki/Delegation

> Furthermore, how do you know that a wrong decision early is better than putting a decision off until a future time?

Exactly. You have no idea until you can determine the result. If you fail early and have the awareness to recognize that failure, you are in a far stronger position for all forthcoming decisions. If you wait to make that decision, all the time between now and then remains an unknown, and you also aren't completing any resulting actions. So not only does risk increase, but your productivity proportionally decreases.

It makes sense to wait on a decision only when that decision is both critical and costly, and even then the only advantage of waiting is to perform further investigation. There is no advantage in hesitation only for the sake of hesitation, because then you are allowing external factors to influence the conditions away from what you currently know.

> The hard part about making decisions often isn't just the decision itself, but also in understanding the risks and trade-offs of making a potentially wrong decision now

This isn't something that software developers are well equipped for. Other industries have specific processes and criteria for evaluating risk in a uniform and standard way as well as the processes to apply the proper controls. These are basic trade practices that are often affirmed through education and licensing with bodies of enforcement. Software, on the other hand, does not have a commonly recognized ethics enforcement/evaluation body to define these trade practices. In practice most of this is outsourced to security professionals, for example people holding one of the various ISC2 certifications: https://www.isc2.org/ethics/



> That is called delegation of responsibility. If you are not qualified to make a certain professional decision or if such a decision exceeds your delegated responsibility then you need to bring somebody else in, possibly a supervisor.

Delegation more often happens toward subordinates than supervisors, but I don't see how that's relevant. My response here was to your statement that, "If you are afraid to associate a decision with your reputation then somebody will make the decision for you." I was pointing out that someone else making the decision for you isn't necessarily a bad thing as you seemed to be implying.

> That consideration is itself a decision and it should not be delayed.

You seem to be arguing here just for the sake of arguing, even if it means arguing against your original point. By the same logic you're using here, the decision to procrastinate can itself also be a decision, which is actually the entire point of my post.

> Exactly. You have no idea until you can determine the result. If you fail early and have the awareness to recognize that failure, you are in a far stronger position for all forthcoming decisions. If you wait to make that decision, all the time between now and then remains an unknown, and you also aren't completing any resulting actions. So not only does risk increase, but your productivity proportionally decreases.

Your reasoning seems to rely on the premise that you can only learn by making a decision, which isn't true at all. You can also learn by research, which you have more time to do by delaying a decision. Even without research, there are many things you can learn just by allowing more time to pass and more events to unfold without any deliberate action on your part.

An example of this could be something like submitting an application that's ready today for an RFQ from a large potential customer that isn't due for another month. Then, 2 weeks from now you find out from an acquaintance who used to work for that customer that they always valued some other piece of information being included in the RFQ submissions they review. I can't tell you how many times I've run into similar situations, and procrastinating improved the quality of the final result.

Similarly, most of the time, you'll only ever have more information available to make decisions the longer you wait, not less.

> It makes sense to wait on a decision only when that decision is both critical and costly, and even then the only advantage of waiting is to perform further investigation. There is no advantage in hesitation only for the sake of hesitation, because then you are allowing external factors to influence the conditions away from what you currently know.

You're arbitrarily asserting here that external factors can only influence the conditions away from what you currently know. External factors can also influence conditions toward what you currently know. Furthermore, the longer you wait, the more you'll know, which means the more informed your decisions will be.

> This isn't something that software developers are well equipped for. Other industries have specific processes and criteria for evaluating risk in a uniform and standard way as well as the processes to apply the proper controls. These are basic trade practices that are often affirmed through education and licensing with bodies of enforcement. Software, on the other hand, does not have a commonly recognized ethics enforcement/evaluation body to define these trade practices. In practice most of this is outsourced to security professionals, for example people holding one of the various ISC2 certifications: https://www.isc2.org/ethics/

I have no idea what point you're trying to make here or how it's at all relevant.

At the end of the day, I've had enough good decisions come as a result of waiting to make them, when I could, to know that what you're prescribing isn't a universal truth, which was the point I was trying to make clear.

I've also had plenty of decisions turn out to be the wrong decisions, which I would have known if I'd waited another day or two to make them, further reinforcing that quick decisions aren't always better than delayed decisions. Again, the hard part is figuring out when it's better to make a quick decision and when it isn't.


> Delegation more often happens toward subordinates than supervisors

When a subordinate lacks sufficient delegated responsibility they need to call in a supervisor. Delegation flows downward from an authority. When the delegated responsibility is exceeded the authority needs to be involved.

> Similarly, most of the time, you'll only ever have more information available to make decisions the longer you wait, not less.

In my experience that is a commonly believed blanket assumption that rarely occurs in practice. Risks tend to increase over time in the absence of controls enforcement.

> Your reasoning seems to rely on the premise that you can only learn by making a decision, which isn't true at all.

There is a lot of psychology on this. Yes, you can learn some things from books, but the ability to apply what you learn almost entirely takes practice and repetition. There is a world of difference between reading a right answer from a reference and formulating that right answer spontaneously from experience. I am not discounting the value of education, but you would never hire a medical doctor who has never touched a scalpel or a trial lawyer who has never performed at trial.

https://en.wikipedia.org/wiki/Psychology_of_learning

> Furthermore, the longer you wait, the more you'll know, which means the more informed your decisions will be.

This is a logical fallacy called affirming the consequent. You cannot know if a decision will be more well informed by waiting unless you have waited and reflected upon that delayed decision. This is a reckless line of thinking, since a given risk profile will change during that period of waiting regardless of whether the assertion holds true.

https://en.wikipedia.org/wiki/Affirming_the_consequent

> I have no idea what point you're trying to make here

Assuming risks are an unchanging constant, that the only thing that changes is the information you receive, and that there is never a cost associated with a delay, then yes, it would make sense to procrastinate on many decisions, as much as possible, in the hope you will become supremely well informed, but this is extremely unrealistic.

> isn't a universal truth

It's a matter of making better decisions and reducing risk over time. It's not about universal truths. That's what religion is for.


> Delegation flows downward from an authority.

Yes, exactly. You originally described delegation as flowing upward toward authority (so the opposite) when you said, "That is called delegation of responsibility. If you are not qualified to make a certain professional decision or if such a decision exceeds your delegated responsibility then you need to bring somebody else in, possibly a supervisor."

> In my experience that is a commonly believed blanket assumption that rarely occurs in practice. Risks tend to increase over time in the absence of controls enforcement.

Agree to disagree then. I've found that this occurs frequently in practice. I've already given examples.

> I am not discounting the value of education, but you would never hire a medical doctor who has never touched a scalpel or a trial lawyer who has never performed at trial.

In typical hospital settings, patients are assigned to doctors, and those doctors can have little experience; it's called residency. Secondly, are you saying you'd rather work with a doctor with no education than one with no experience? There is a reason that doctors and attorneys with little experience are allowed to practice, but doctors and attorneys with little education are not.

This does not even include the fact that doctors and attorneys are _constantly_ reading and researching from studies (for diseases and treatments) and case law (for precedents), respectively, for which they have little to no experience.

> This is a logical fallacy called affirming the consequent.

No, it's not. What I said has nothing to do with affirming the consequent. I don't even know how to respond to this non sequitur.

> You cannot know if a decision will be more well informed by waiting unless you have waited and reflected upon that delayed decision.

Yes I can, because I'm constantly learning. At any given point in time, I have more information than I did at any prior point in time. So, every decision is always more informed the longer I wait. How much more informed, or how relevant the additional information is, varies.

> This is a reckless line of thinking, since a given risk profile will change during that period of waiting regardless of whether the assertion holds true.

Reckless means acting with a lack of caution or consideration for consequences. My original response concluded with, "The hard part about making decisions often isn't just the decision itself, but also in understanding the risks and trade-offs of making a potentially wrong decision now, versus making a better informed decision at a later time, and deciding which is the better approach for any given decision that needs to be made."

I would argue your advice is more reckless, since it's advocating less critical thought on a per-decision basis by prescribing a one-size-fits-all methodology for decision making in your original statement that, "It is better to make a wrong decision early than to put that decision off to a future time."

> Assuming risks are an unchanging constant, that the only thing that changes is the information you receive, and that there is never a cost associated with a delay, then yes, it would make sense to procrastinate on many decisions, as much as possible, in the hope you will become supremely well informed, but this is extremely unrealistic.

Whether risks are constant or variable doesn't change whether or not a decision should be made quickly or delayed, so I'm not sure what you're saying here. Constant high risks could mean a quick decision is better, while constant low risks could mean a delayed decision is better. Likewise increasing risk could mean a quick decision is better, while decreasing risk could mean a delayed decision is better. Either scenario could go either way, which is exactly why my original point was that you should evaluate on a per-decision basis which is actually better, instead of following a dogma that one is always better than the other.

I also already gave examples where delayed decisions made sense, and they did not involve any of the assumptions you claim to be requirements for a delayed decision to make sense. There doesn't need to be no cost, there can be little cost. You don't need to be supremely well informed, you can just be better informed.

Yes, your reductio ad absurdum argument (speaking of logical fallacies) would be extremely unrealistic as you pointed out. Thankfully, none of those things are necessary for a delayed decision to potentially be better than a quick decision.

> It's a matter of making better decisions and reducing risk over time. It's not about universal truths. That's what religion is for.

The one-size-fits-all prescription to decision making in this original statement is the universal truth I was objecting to:

> It is better to make a wrong decision early than to put that decision off to a future time.

I was pointing out that this isn't always true (i.e. not a universal truth).



