Hacker News

Sure, those are important too. Now that you have some application written that solves a valuable problem, how do you assess its quality/value objectively? The key word is objectivity. You have to measure something and compare those measures against something else.

This is why non-developers believe developers are generally hyper-autistic. For most developers everything must be about clear, easy, simple, safe. These are all super subjective, self-serving opinions that don't do anything for the product or the labor that builds that product. Product owners will scream about this; developers will pretend to hear it, immediately discard it, and then repeat the same insanity where they find comfort.

Step back, take a deep breath, understand that it's not about you or what you want, and finally discard the self-serving circular insanity and only focus on measuring your time to complete a task, time for the application to do things, and frequency of user engagement.



I get your point, you are being pragmatic, but the reason I sometimes behave in exactly the way you just criticized is those very same product owners.

Example: I have a talk with the PO where we agree on certain features and certain things that do NOT have to work. Often, when I make decisions during development that rest on the assumption that those things don't have to work, I later get told to incorporate them anyway. So I have lost a lot of trust in POs, or anyone who is not a developer, telling me how a piece of software is supposed to function.

Example 2: I am currently dealing, in my department, with a case where a Product Manager talked to a Product Owner and they contractually agreed with a customer to deliver one of our internal development tools, which is absolutely not ready for production and was never meant for any customer.

Yes I will be "hyper-autistic" about my code because sometimes I do not have a choice.


Goodness is fundamentally a subjective attribute. Trying to measure it objectively is attempting alchemy. You cannot make the subjective out of the objective - you will always have smuggled subjectivity in somehow.

Suppose I say that good code should have the lowest number of curly braces, or the lowest number of subroutines, or the flattest or deepest object hierarchy. A measure being objective doesn't make it good. So why is test coverage good?

In fact, every single one of the metrics proposed in the parent post is something I have seen gamed to the point of maladaptiveness.

* execution performance time - So frequently over-focused on that "Premature optimization is the root of all evil" became a famous saying. I have seen developers spend hours or days saving (sometimes imagined!) run time, when all the time they saved over the life of the product wouldn't add up to the time spent. Extreme pursuit of performance leads to code that is hard to work on - who cares about those milliseconds when critical new features take months to write, or are even impossible?

* build time - It is very easy to diminish build time by destroying critical architecture. I have done it myself. :)

* regression frequency - Insisting on only making very safe changes is how you wind up spending six weeks lobbying change control boards for one line of code.

* defect quantity - In environments that actually track this, people merge issues into a single "fix what's wrong" ticket, degrading the utility of the ticket system. Defect granularity is not actually obvious!

* code size - Obfuscated X contest entries are often very compact, and people who obsess on saving lines and characters can wind up leaning towards that style.

* dependency quantity - Leads to attempts to homebrew encryption.

* test automation coverage - Automated testing ossifies design and architecture, which can paralyze, e.g., an experimental prototype. Full coverage is also costly - time and energy spent maintaining pointless tests can come at the expense of mitigating more realistic risks. I realize I depart from prevailing wisdom in this, but there are times and places when automated testing is simply inappropriate.

* test automation execution duration - Sometimes the right way to write a test is slow.

I'm not disagreeing that these are generally good things to strive for. They are! I'm saying that if you think these things define goodness, each one can lead you to a cursed place. (I hasten to add that there are times when a metric really does define goodness - sometimes you need speed or reliability or whatever, at any cost. Recognizing that circumstance and its limits - "any cost" does not generally mean any cost - is subjective.) Goodness is subjective, and while objective measures can help you assess it, such measures cannot define it - when and how you use which measures, and when you think they've gone off the rails, is itself a judgement call.

I once inherited a system that was both essential for business operations and a thorn in everyone's side. The guy I inherited it from (and the guy he inherited it from) had taken over a year to learn how to use it. I set about reorganizing, rewriting, documenting, abstracting - all those soft changes in pursuit of clarity and obviousness. They aren't objective, but they do pay off: when I handed the system off to the next guy (and three more after him!), he was off and building on it in a day. That was how I knew I had succeeded! When doing that sort of thing, you do always wonder if what you're writing is clearer for everybody or just you. But surely even the most hardheaded bean counter can see the value of training developers in a day rather than a year. That's good. :)

Goodness is contextual and subjective. I can agree that your goals are generally right, and I can point to circumstances where they're overemphasized or even outright wrong. Sometimes, when the sky is falling, a nasty little bash script that meets none of the usual criteria for "quality" is the best possible thing.

There are people who use subjectivity as a haven for vanity, and build mountains of pointless code in pursuit of some idea of goodness that serves no practical purpose or is even harmful. It is important that we retain our own ability to criticize on subjective grounds, precisely to counter that sort of activity - because you will find it in the land of the objective advocates as well, building mountains of metrics that don't serve any practical purpose either. To recognize a bad abstraction and a bad metric is the same skill, and requires the same confidence in your own good judgement.

Objectivity is no refuge from the necessity of good taste.


> A measure being objective doesn't make it good

That completely misses the point. It’s not about what’s good. It’s not even about what’s better. It’s only about how much better, the distance between theirs and ours. Better is objective only when like aspects of competing items are compared within accepted bounds of precision using evidence.

The interesting thing about measuring stuff isn’t that people are otherwise entirely wrong in their assumptions more than 80% of the time, but that they are typically wrong by one or more orders of magnitude.


>> A measure being objective doesn't make it good

>That completely misses the point. It’s not about what’s good

The loss of context here makes me wonder if I am talking to an AI. The comment you originally replied to was,

   We know what a good batting looks like but we still can’t say what good code is in any reasonably objective way.
You replied with, "Sure we do" and proposed a list of metrics.

Are we tracking? The original comment claimed that we do not know how to objectively measure goodness in code. You are (apparently) claiming to know how to do it. I am claiming it is impossible even in principle.

In this context, I find your response ("It’s not about what’s good", and the claim that "better" is easier to measure) bizarre and nonsensical. Like an AI, you seem to have lost track of what we are talking about. We are talking about whether we can objectively measure code being good. It is exactly "about what's good".

Of course it is possible to measure things about code, but equating those measures to goodness relies on artificially constrained circumstances -- like a code golf contest or a PO declaring test coverage a metric to maximize. Athletes find themselves in such constrained circumstances all the time because theirs is a pursuit dominated by competition and games! It is most obvious in sports like sprinting or powerlifting, which are analogous to something like code golf, but even sports like basketball, in which "goodness" is harder to define, are heavily artificially constrained such that goodness in a player is about maximizing an objective measure - team score. This might be analogous to a programmer who sees his mission in terms of tickets closed per week. By contrast, programmers in general are usually working in a context in which the quality of what they produce is measured by a lot of complex impacts - on users, on business, on other programmers.

Some of what code needs to accomplish - conceptualizing a problem well, communicating clearly - is inherently subjective, having to do with how it is received by another mind. Programmers (myself included) are generally focusing on these sorts of characteristics when talking about code in isolation, partly because we feel the impacts to ourselves most keenly. But I would contend that there is something deeper and less obvious here - that maximizing this subjective goodness profoundly improves the situation in more objective areas. Well architected code, well communicated code, clear code is resistant to defects in a way that mere test coverage can't accomplish. This is not obvious, but it is deep wisdom arising from experience, and is part of what drives programmers to emphasize the ineffable in their understanding of goodness.

In fact, I was originally going to draw a parallel between being a good basketball player, maximizing team score, and a programmer maximizing business revenue. But I stopped myself, because this is an element of the deep wisdom of good code: focusing on ineffable, subjective excellence is profoundly positive for revenue. It's not something you can measure directly so much as it defines the circumstances the business finds itself in. This observation isn't unique to me - here's a pg essay making a similar point: https://paulgraham.com/avg.html Programmers talking about code being good in this sense may only be building sandcastles in the air - but they also may not be. You must possess the wisdom yourself to tell the difference. But there is certainly more to it than selfish vanity.

But even leaving that aside, because code must meet a broad array of conflicting demands, optimizing among those depends both on the circumstance and the values held by the people in it. Hence, goodness in code will always have a subjective element, and (in the athletic sphere) is most like saying you are "healthy and fit" or "your best self". You can certainly bring measurements to bear, and we are certainly talking about something real, but there is an inescapable subjective dependence on the value judgement of the judge.

This actually touches on a broader philosophical debate: Are value judgements mere meaningless personal preferences, or are they (often imperfect) attempts to articulate something real? In code, and in life, I believe the subjective is pursuing the real, and moreover that anyone who thinks it's worth arguing about intuitively agrees with this assessment. By contrast, the view that only the objective is real, popular as it has been for the last couple of centuries, and attractive as its promises are, has been increasingly producing absurd results.


You are really wanting this to only be about code quality, whatever that means, and not product quality, which is something that can be measured from outside the organization by people with no understanding of your skills. I can repeat all day that it’s not about what you or other developers want, but I somehow suspect you will circle back to code quality and goodness because those are important to you. That is why I claimed the prior comment misses the point. The complete inability of developers to accept, on any level, that it’s not about what they want is, I believe, why non-developers stereotype developers as autistic. They aren’t wrong.


There are several things in this comment I find odd.

The comment about developers being autistic is particularly funny because I personally am autistic, and my wife (who you are responding to) is very much not. It was a major source of tension for us for many years.

Likewise, the emphasis on people "outside the organization" and the suggestion that she is somehow deficient on that front is laughable -- she's extremely highly regarded by customers/clients, and has been for decades, specifically for her ability to understand and solve their problems by getting them a great product whether or not they have any idea what her actual coding ability is.

And then the comment about "really wanting this to only be about code quality" is strange, since the subthread going back to kasey_junk's comment is about how "we still can’t say what good code is". Your initial response is entirely about measurable attributes of code (like code size and build time); Dove introduces "softer qualities" including "solving a valuable problem" (which is more about "quality product" than any of your metrics), and then you go back to a different set of metrics. She responded with an extended comment that specifically noted the importance of "complex impacts - on users, on business, on other programmers". Her comments have consistently been more about how good code impacts the functional product, while yours have been about measuring things about the code, but then your complaint is that she's too focused on the code and missing the point. Then you make comments about her own state of mind: what she is "wanting" to do and what you "suspect" she will do and her "complete inability" to accept certain things.

This very much feels like you just have a point you want to spike about measuring things (FWIW she does make it a point to measure things, and to train her subordinates on making sure they're measuring things), and her comments (and my other one) are more excuses for you to repeat your point than actual ideas you're trying to interact with. Like you're not actually interested in engaging with the core idea that "Some of what code needs to accomplish - conceptualizing a problem well, communicating clearly - is inherently subjective, having to do with how it is received by another mind." It's just another opportunity for you to say that your set of objective measures is the only thing that matters, which, as I noted elsewhere, is itself a subjective position about which things to value.


> "the view that only the objective is real ... has been increasingly producing absurd results."

As you rightly noted a couple of comments back, what this view does is it smuggles in subjective assumptions. That is, someone operating under this view is going to objectively measure something (like kloc or number of tickets closed or execution time on a test data set) but the selection of what to measure, and the selection of how to value each individual measure of an objective quantity in order to determine overall "goodness", is subjective. The step where they assign meaning to a measurement is a subjective step.
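That subjective step is easy to make concrete. Here is a minimal sketch (all metric names, numbers, and weights are invented for illustration) showing two "objective" scorecards built from identical measurements that reach opposite conclusions purely because of the weights chosen:

```python
# Toy "code goodness" score: the measurements are objective, but the choice
# of which metrics to include and how to weight them is a value judgement
# smuggled in. Every name and number here is hypothetical.

measurements = {
    "kloc": 42.0,             # thousand lines of code (smaller assumed better)
    "tickets_closed": 17,     # per sprint (more assumed better)
    "p95_latency_ms": 180.0,  # on a test data set (lower assumed better)
}

# Two equally "objective" scorecards that simply value things differently.
weights_a = {"kloc": -1.0, "tickets_closed": 5.0, "p95_latency_ms": -0.1}
weights_b = {"kloc": -0.1, "tickets_closed": 1.0, "p95_latency_ms": -1.0}

def score(weights):
    """Weighted sum of the same objective measurements."""
    return sum(weights[k] * measurements[k] for k in measurements)

print(score(weights_a))  # positive: this scorecard says the code is "good"
print(score(weights_b))  # negative: this one says the same code is "bad"
```

Both scorecards consume identical measurements; the disagreement lives entirely in the weights, which is exactly where the subjectivity hides.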

It's interesting to watch the development of "objective" measures in basketball, and the dialogue over time around how to determine if a player is the best, most valuable, etc. Decades ago, the only stats we had were "counting stats" -- points, rebounds, and assists. Steals and blocks came a bit later. There is a correlation between putting up big counting stats and winning games, but it's not as strong as you might naively suppose. Once more sophisticated metrics were developed, something that "subjective" observers had always noticed ("losing player with good stats" was often said about specific players) started to be quantified: some players put up big stats because they're doing inefficient things that produce individual stats at the expense of the team, like taking a high volume of shots even if they're lower-percentage shots than a teammate could get on that play, or not contesting an opponent's shot but trying to chase the rebound instead (leading to more opponent scoring but also more personal rebounds over a large number of shots). In the modern era, advanced stats like PER, VORP, WS, and BPM are basically more sophisticated models built on top of counting stats that scale and weight them according to regression models. These stats are better, but they still don't capture everything; they only capture things that can be inferred from counting stats. They don't capture things like: Steph Curry has such strong "shooting gravity" that his teammates often have extra space to shoot, because multiple defenders are trying to make it hard for him to get a good look; or Rudy Gobert being on the court changes an opposing team's play selection, because he's such a good shot blocker that plays that would lead to a bucket against a different player lead to him getting a block, so teams avoid those plays.
Someone insistent on "objective measures" won't even consider these as things to potentially care about unless they have a way to measure them. (Now that we have sophisticated player-movement tracking, we can actually measure things like how close the nearest defender is on shots by a Curry teammate when he's on court vs. off court, or what percentage of opponents' shots are taken in a specific part of the court when Gobert is defending them vs. when he isn't, so those measurements are coming online over time.) And, of course, understanding that it means something that Curry's teammates have extra space to shoot, or that Gobert's opponents might not be running their strongest offensive plays because of his shot blocking, puts us in the realm of meaning rather than mere measurement. Knowing to make the value judgment that a player's impact matters even when we have no good numerical measurement of it -- only a sophisticated observer who values those things and watches for them -- puts us in that realm too.
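The "advanced stats are weighted counting stats" point can be made concrete with the oldest such formula, the NBA's simple Efficiency (EFF) linear-weights metric, which implicitly values every stat at plus or minus one. The stat lines below are invented:

```python
# EFF: a linear-weights model over counting stats. The weights (+1/-1 for
# everything) are themselves a value judgement about what matters.
# Both player stat lines are hypothetical.

def eff(pts, reb, ast, stl, blk, fga, fgm, fta, ftm, tov):
    """Classic NBA Efficiency: sum of good stats minus misses and turnovers."""
    return (pts + reb + ast + stl + blk
            - (fga - fgm)   # missed field goals count against you
            - (fta - ftm)   # missed free throws too
            - tov)

# A high-volume scorer vs. an efficient role player.
volume_scorer = eff(pts=30, reb=4, ast=3, stl=1, blk=0,
                    fga=28, fgm=11, fta=8, ftm=6, tov=4)
role_player   = eff(pts=14, reb=7, ast=5, stl=2, blk=1,
                    fga=9,  fgm=6, fta=3, ftm=2, tov=1)

print(volume_scorer, role_player)  # the 30-point scorer grades out lower
```

The model flags the inefficient volume scorer, but notice what it can't see: no term in the formula captures shooting gravity, defensive play-selection effects, or anything else that doesn't appear in the box score.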

That seems to be the same issue underlying this discussion. Knowing that conceptualizing a problem well matters -- and that it will profoundly impact the end result in objective areas even though it's not directly measured -- is wisdom.


Ohhhhhh shit, that has to be the longest paragraph in all of HN. Have you seen the movie Moneyball?

Just start with the premise that bias isn’t helpful, and even less helpful when implicit. Let that determine what to measure, and you will not be harmed. If such decisions twist you into knots, then you are not the person qualified to make such decisions.


> "bias isn’t helpful ... Let that determine what to measure"

Determining what counts as "bias" is itself a subjective activity. People who grow up in different cultures have different baselines for what factors matter the most and what factors they consider to be overvalued, undervalued, inappropriately accounted for, and so on. Not just different countries, but different subcultures within the same country (like, my cousins from the farm see a lot of things in society as biased toward big cities, which I never considered because I've lived in big cities for essentially my entire life.)

"start with the premise that bias isn't helpful" is, by the way, also not a measurable goal, which IMO supports what I'm saying. Knowing how to conceptualize a problem well (of which "eliminate bias" is a small subset) isn't something you can objectively measure, but it's something that will impact your objective measures down the line.



