If you implement one single function, then it's easy to refactor later on.

If over 10 years, dozens of people add feature after feature without thinking carefully about the structure of the code, you get huge messes that are very hard to clean up. Then suddenly an old, unsupported dependency needs to be replaced (e.g. because of a vulnerability) and it takes weeks to remove it because there is zero separation of concerns anywhere in your codebase.

You say that overabstracted code is hard to change. I agree, but so is underabstracted code - or simply code with bad abstractions. I don't care particularly about SOLID, since I'm not too fond of OOP anymore, but encapsulation (which is a concept independent of paradigms) is definitely crucial when you want to make sure a code base is maintainable in the future.
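To make the dependency point concrete, here's a minimal sketch of the kind of seam I mean; every name in it (legacyParseCsv, CsvReader, LegacyCsvReader) is invented for illustration, not taken from any real codebase:

    // Sketch only: hypothetical names; the point is the seam, not the library.

    // Stand-in declaration for a vulnerable legacy dependency we may need to replace.
    declare function legacyParseCsv(path: string): Promise<{ rows: string[][] }>;

    // The seam: the rest of the codebase only ever sees this interface.
    interface CsvReader {
      readRows(path: string): Promise<string[][]>;
    }

    // The single adapter that knows about the legacy library. Replacing the
    // dependency means rewriting this class, not hunting call sites for weeks.
    class LegacyCsvReader implements CsvReader {
      async readRows(path: string): Promise<string[][]> {
        const { rows } = await legacyParseCsv(path);
        return rows;
      }
    }

    // Business code stays oblivious to which parser sits underneath.
    async function countRecords(reader: CsvReader, path: string): Promise<number> {
      return (await reader.readRows(path)).length;
    }

When the dependency has to go, the adapter is the only file that changes.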



> You say that overabstracted code is hard to change. I agree, but so is underabstracted code - or simply code with bad abstractions.

I am not against abstraction, but the attitude that the person's colleague should have *definitely* abstracted away this minuscule number of lines immediately, even in the first implementation, is not just abstraction; it's premature abstraction, and the kind of dogmatic thinking that leads to the problems I outlined. The blog post scenario is not unique; the correct solution is almost always to leave the code less abstracted and only abstract it when there are actual, proven use cases.


Exactly: "premature abstraction and the kind of dogmatic thinking".

This thread is filled with nit-picking the code, when the biggest problem was actually team communication and overall discussion about future direction. Many young programmers will spin their wheels on these kinds of things, but then sales/marketing might be like 'dude, this has a 1-2 year life span at most'. Or, this is being replaced by product-x next year.


> dude, this has a 1-2 year life span at most

If I had a nickel for every piece of "will be turned off in 6 months, tops" software that my workplace still maintains a decade later, I'd probably double my salary.


> If I had a nickel for every piece of "will be turned off in 6 months, tops" software that my workplace still maintains a decade later, I'd probably double my salary.

Definitely this. I've seen it too, and I'd wager the vast majority of IT professionals have as well.

At one company I spent a decade at, I was brought in specifically to help decommission a set of legacy databases. One database was still on life support because a particular trading desk wouldn't move off of it, and since they made a shit-ton of money for the firm, they got a free pass, while IT was constantly dinged every year for the legacy database still being alive. When mandates compete between a profit center and a cost center, the cost center always loses. Which is fine, but then don't blame the cost center for missing its goals because the profit center is dragging its heels. (Which was _not_ done at this firm.)


Why didn't the IT folks step in and provide the labor for the conversion? This helps them accomplish their goals. Trading desks love free labor. This shows a novel way in which cost centers can attribute their costs to lines of business. Finally, this demonstrates leadership and initiative by the IT department and heads--especially if they have to fight an uphill battle about IP, and they succeed in convincing the desk.


There are no absolutes in software development, but if a formula is repeated 10 times, you probably have a good name for it, and at that point it probably should be in a function.


> There are no absolutes in software development, but if a formula is repeated 10 times, you probably have a good name for it, and at that point it probably should be in a function.

I don't think your take makes sense. The example in the blog post states that multiple methods have 10 lines of math, but even the author says they were similar, not the same. The use of the weasel word "similar" already tells you that it wasn't identical math that was being deduplicated. In fact, the supposedly brilliant refactoring the blogger did was to change the whole interface without any good reason, with bad object-oriented inheritance chains and mixins to boot. What a mess.

Still, the blogger tries to claim this is clean?

There's a good reason why the blogger was pulled into a meeting and gently forced to revert that mess back to its old state, and why the blogger decided to depict a hero's journey in a blog post where any sort of counterpoint is left out.


We don't really know what the actual implementation looks like, so I find it hard to make a definite judgement. I think the refactoring introduced by the author probably wasn't good, because it looks like a very brittle abstraction. But I find it hard to believe that among those 10 lines of maths each, there aren't any functions one can extract that form some natural abstraction (e.g. "translate vector" or whatever), so that every handle function ends up being maybe 1 or 2 lines instead of 10.
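Something along these lines is what I have in mind; since we don't know the real code, all of these names (Vec2, translate, handleTopLeft, and so on) are made up:

    // Hypothetical sketch: small, natural abstractions extracted, nothing else changed.

    type Vec2 = { x: number; y: number };

    // Obviously-named helpers with no shared state and no inheritance.
    const translate = (p: Vec2, d: Vec2): Vec2 => ({ x: p.x + d.x, y: p.y + d.y });
    const scale = (v: Vec2, f: number): Vec2 => ({ x: v.x * f, y: v.y * f });

    // Each handle keeps its own function (no base class, no mixin), but the
    // repeated arithmetic collapses to a line or two of named operations.
    function handleTopLeft(origin: Vec2, delta: Vec2): Vec2 {
      return translate(origin, scale(delta, -1));
    }

    function handleBottomRight(origin: Vec2, delta: Vec2): Vec2 {
      return translate(origin, delta);
    }

The overall structure stays flat; only the arithmetic gets names.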


> But I find it hard to believe that among those 10 lines of maths each, there aren't any functions one can extract that form some natural abstraction (e.g. "translate vector" or whatever), so that every handle function ends up being maybe 1 or 2 lines instead of 10.

That's beside the point, isn't it? Just because you can in theory extract functions doesn't mean you should, or that your codebase will be in better shape if you do. There's the issue of coupling, there's the issue of introducing dependencies, and there's the issue of whether the apparent duplication holds the same semantics and behavior across the project. I'm talking about things like: does it make sense to change the behavior on all 10 code paths if you only need to change it in two or three? A surface-level similarity between code blocks doesn't mean they share the same behavior, does it?
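As a contrived sketch (all names invented), two functions can share the same shape today and still need to change for completely different reasons tomorrow:

    // Shipping cost: rounds up because the carrier bills whole currency units.
    function shippingCost(weightKg: number, ratePerKg: number): number {
      return Math.ceil(weightKg * ratePerKg);
    }

    // Loyalty points: the same shape of formula, but its rules change for
    // product reasons (promotions, per-order caps) unrelated to shipping.
    function loyaltyPoints(amountSpent: number, pointsPerUnit: number): number {
      return Math.ceil(amountSpent * pointsPerUnit);
    }

    // A shared multiplyAndRoundUp(a, b) helper would force every future change
    // to one rule through code that also serves the other.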


If you're translating vectors 10 times, or calculating cosines, or whatever, then yes, you should abstract this into separate functions. It's not like "translateVector" or "calculateCosine" are brittle abstractions or anything.


> If you're translating vectors 10 times (...)

You don't know what was being done. All you know is that the manager took a peek at the PR and told the developer to revert the commit.


The code should be flexible. The first iteration wasn't. Simple: this is the code juniors in high school write. It always leads to a mess later on. Now, if you are a beginner like you seem to be, then it's good code. Sure.


I just want to point out that the following line could come across as condescending, even if it wasn't intended that way.

> Now, if you are a beginner like you seem to be, then it's good code.

You don't know anything about the person you are replying to except what they posted. Assuming that someone with a different view (which I happen to agree with, from my own experience) automatically has less experience, instead of acknowledging that others may have good reasons for views one hasn't considered or run into, doesn't reflect well.


The first iteration is plenty flexible where it needs to be. I've been programming since 2001, and honestly, the reason I hold this view and you don't is very likely the exact opposite of what you think: it's hard to find experienced, knowledgeable programmers who think the best time to abstract something is its first iteration.



