While the author may never go back and fix his code later, that doesn’t mean everybody else works the same way.
If you never have to come back to “fix it”, was it actually wrong to begin with?
Personally, in my own career, I’ve found this “come correct” mindset is used to justify unnecessarily flexible solutions to allow for easier changes in the future … changes that 90% of the time, never actually materialize.
I held this mindset too when I was younger. Then I got tired of seeing how all my clever abstractions never actually got used the way I intended, and decided to get smarter about it.
> If you never have to come back to “fix it”, was it actually wrong to begin with?
A dead simple and effective heuristic is to improve code a little bit each time you touch it. The code you touch a lot gets love and ends up nicely designed. The code you rarely touch won't be as nice, but that's fine, because you rarely touch it.
If you're going to make a big functionality change or addition that will take a while, that's a great time to significantly refactor the existing code. The change will provide additional design context for the refactoring, which you wouldn't have had earlier.
Of course this strategy requires you to have good tests. If you do a refactoring followed immediately by a functionality change, and you don't have good tests to verify that the refactoring is solid, you'll have trouble attributing bugs to the refactoring or the changes. If you don't have good tests and basically test in production, then you'll want to refactor ahead of time so you can deploy the refactored code and shake out the bugs before you start building on it.
> Of course this strategy requires you to have good tests.
Not necessarily, if you also follow another similar principle. Leave the code a little neater than you found it, but also leave the code a little more well tested than you found it!
Could you distinguish between "good tests" and "a little more well tested"?
It seems to me that good tests enable code to be well tested. Here, "good tests" means verifying behaviour with various expected and unexpected inputs and outputs.
> If you're going to make a big functionality change or addition that will take a while, that's a great time to significantly refactor the existing code
and so here comes the trade-off question: what if such a change could be made faster/with fewer people or resources, but at the cost of not doing the refactoring?
It's more that you're spending a lot of time reacquainting yourself with that code, so the price of tweaking it is low. There'll be some return, even quickly, from the changes; but mostly, it's the cheapest time to refactor - if you're ever gonna refactor, this is when to do it.
You have to ask what motivates the refactoring. It should speed up work in that part of the codebase, either by making the change faster or by reducing the time needed for follow-on bug fixes (which in my opinion is the same thing.)
What's nice about making a refactoring right before a major change is that it's much less speculative. Your estimation of the value of a refactoring is only as good as your prediction of what changes will need to be made in the future. If you already know exactly the change you are about to make, you can justify the value of refactoring with much more confidence.
By contrast, if you're worried that the change in front of you will be easy, but future changes will be hard, then maybe it will be best to leave the refactoring until just before the future changes. After all, they might not come, and if they do, they might not be what you expect.
There are two reasons to do a refactoring now to account for non-immediate future work. I think only one of them is actually about the future, and the other one is really about the present.
The first reason, which really is about the future, is if you do know what future changes you will have to make. For example, if you know that such-and-such future functionality will be required to support promises being made to current or prospective customers. Or if your company actually has a product roadmap that it sticks to. Then you know the refactoring will pay off eventually, and you can make a firm case for doing it now. You should discount the value of the refactoring to account for any uncertainty about the future work.
The second reason is that bug fixes after the release will be easier if you do the refactoring first. I think this is the same as saying that the current change can be completed faster with the refactoring. Releasing something with a bunch of bugs doesn't make it "done." It's done when the engineers who are doing it can move on to other things. If you're stuck fixing bugs from the initial release, you aren't really done, so it wasn't done faster. Product and sales will often tell you that there are crucial strategic reasons that releasing something now with a bunch of bugs is better than releasing it a few weeks later with fewer bugs, but they're almost always lying^H^H^H^H^H suffering from tunnel vision on their own goals. Your engineering manager should escalate, and nine times out of ten the business does not actually want you to push out a piece of shit a few weeks earlier. They will tell you to descope or delay the initial release. If you count bug fixing time as part of the time spent making the upcoming change, then the refactoring becomes justified.
Yup. I regularly come across my own notes "This isn't the best way to do this because X, Y, Z. Need to address later."
Years later I'm in that code and realize that X, Y, Z never ever happened (even if it seemed highly likely) and that block of code was working just fine and folks found it easy to work with... I was dead wrong about being wrong.
I don't think there is anything wrong with this. You documented the assumptions made when you wrote it, so years later you know them and can say with confidence that you made the right choice. Much less guesswork. Worst thing was that you were a little rude to yourself.
Reminds me of a piece of code I wrote quickly as a POC; it made its way into production unchanged.
It somehow ended up being the most stable feature, perhaps because it wasn't trying to do too much.
Just what it had to do and nothing more.
Flexibility is never unnecessary. Extreme flexibility feels amazing and will lead to serendipitous jumps in your productivity where you implement cool new useful features you never thought of before just by combining things you already wrote in new ways.
The problem is that what people think is flexible design is actually either dead weight or a brittle inner-platform. Adding fields because you might need them later, adding an interface without a need on the consumer side, moving something to a configuration file instead of a constructor parameter, imagining up needs that no consumer will ever actually have, etc. etc. All of these try to achieve flexibility by "adding more" - more layers, more configuration, more fields, more abstractions, whatever. Indeed it's better to ignore flexibility than to try to be flexible in these misguided ways and fail miserably.
But there's a third option: extremely simple, terse, clear code that follows proper design principles (not "patterns") from top to bottom. Code that never asks for more than it needs to do its job. Code that makes the fewest assumptions possible. This is flexibility via removal -- removal of assumptions, of preconditions, of responsibilities. When you identify a unique need your code has, you concretely express that need as simply as you can (e.g. a small interface), but you don't implement it. Your code just does its one simple job using its simple needs, and you don't worry about whether or not a concrete implementation of your needs actually exists. If you never get around to implementing one, then you never needed that code you just wrote to begin with, and you delete it.
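A minimal sketch of that idea (all names are hypothetical, Python just for illustration): instead of building layers for imagined needs, the code names its one real need as a tiny interface and nothing more.

```python
from typing import Protocol

# Hypothetical sketch of "flexibility via removal": the code states the one
# need it has (somewhere to record an event) as the smallest possible
# interface, and does not implement it.
class EventSink(Protocol):
    def record(self, name: str) -> None: ...

def process_order(order_id: str, sink: EventSink) -> None:
    # Does its one simple job; assumes nothing about where events go,
    # how they're stored, or whether any concrete sink exists yet.
    sink.record(f"order_processed:{order_id}")
```

Any object with a matching `record` method satisfies the need; if no concrete implementation ever materializes, `process_order` was never needed either, and you delete it.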
It's perfectly possible to write code that is about equally "correct" as your requirements are, and that is clean and extremely flexible, the first time, without ever having to go back and "fix it later". It just looks nothing like what everyone seems to think "flexibility" looks like.
You basically just explained good software engineering. Code that does one, well-defined, necessary thing and does it well.
If you want to "do a thing", a DoThing() function is the optimal way to represent it. You can't get any more abstract than that, and you don't need to.
Somehow a lot of (badly explained) ideas of "design patterns" and "abstraction" have rotted people's brains into thinking you're supposed to add a whole bunch of extra layers everywhere.
100%. The easiest code to change is code you haven't written at all. It is orders of magnitude easier to add features to small, simple projects than big, complex ones.
> Personally, in my own career, I’ve found this “come correct” mindset is used to justify unnecessarily flexible solutions to allow for easier changes in the future … changes that 90% of the time, never actually materialize.
Worse, after 10 years, flexibility is required in an unforeseen dimension and a rewrite is needed anyways.
In other words, when the basic assumptions of that fancy abstraction just don't work with the future requirements, you're hosed. Worse, now you might need to refactor a lot of code built on that abstraction.
That's why I prefer composition wherever feasible. Easier to repurpose when the crystal ball is not working correctly.
After 30 years programming, I still agree with this take. The crystal ball is inconsistent. The one certainty is that you'll understand the problem better with time and experience. And your code is the easiest to change when it's small and simple. You often have to implement the code wrong in order to figure out how to implement it right.
The OP mentioned they didn't assemble their bed frame for 6 months after moving in. That's a perfect metaphor - writing software really is like furnishing and caring for a home. Code is both content (what your program does) and environment (where you do it). If you don't take the time to care for your home or your software, it'll become a mess in short order.
I like to think about gardening metaphors with good software. Sometimes we just need to tend the garden - remove some weeds, and clean things up a bit. You can tell when that's needed by gazing out at the garden and asking yourself what it needs.
Personally I'm not very good at maintaining my apartment sometimes - I bought some shelves that I didn't install for well over a year. So whenever I have that urge to tidy up and do some spring cleaning, I jump on that instinct. Last week I removed a few hundred lines of code from my project (because the crystal ball was wrong). It was a joy.
Not all code is the same. Who's using the code? You, your department internally, or your clients?
How many times does the code run? Daily, monthly, once and maybe never repeated again?
How resource intensive is it? Does it take a second or a month to execute?
Do you know from the start where you're going to end up? If it's a research problem, probably no. Then you don't need to prematurely optimise the code.
> Personally, in my own career, I’ve found this “come correct” mindset is used to justify unnecessarily flexible solutions to allow for easier changes in the future … changes that 90% of the time, never actually materialize.
If anything, I've learnt that code shouldn't be just correct, easy to read and reasonably easy to change... but also easy to throw away. Write code that contains the simplest solution that you can successfully get away with, without it becoming a problem down the line. And if need be, it should be coupled loosely enough to be replaced with something that fits the contract and passes the tests (provided that you have those).
An example of what not to do: an intricate hierarchy of "service" classes, which help you process some domain object and any other domain object type that you might want to handle in the future. It sounds good, but might have abstract classes, bunches of interfaces, as well as some methods with default implementations and so on. To understand how it works, you might need to jump around many files, even your IDE sometimes getting confused in the process.
A better example: a single "service" class that helps you process some domain object, with mostly pure functions that are testable and self-contained. You should be able to figure out what it does without jumping around a dozen different files, and to replace just this one file. Need to process another entity type but aren't sure whether it belongs in the same context, or whether this logic could evolve separately? Just make another class as a copy. Realize you've hit the rule of three? Extract the common interface through your IDE later, as it becomes relevant.
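As a rough sketch of that shape (hypothetical invoice domain, Python just for illustration): one small file, mostly pure functions, nothing to jump around for.

```python
# Hypothetical sketch of the "one small service, mostly pure functions" shape.
# Everything needed to understand invoice totals lives in this one file.
from dataclasses import dataclass

@dataclass(frozen=True)
class LineItem:
    description: str
    quantity: int
    unit_price_cents: int

def line_total(item: LineItem) -> int:
    # Pure: output depends only on the input, trivial to test.
    return item.quantity * item.unit_price_cents

def invoice_total(items: list) -> int:
    return sum(line_total(i) for i in items)
```

If a second entity type shows up (say, purchase orders), copy the file first; extract a shared interface only after the third occurrence.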
Of course, it varies from language to language and even between different codebases written in the same language.
> If you never have to come back to “fix it”, was it actually wrong to begin with?
1. Ah, we often "have to", but also aren't allowed to because it would rock the boat too much and take too much time.
2. Unfortunately, too many coders share your attitude, and then other people have to work with what you guys left us - facing the consequences of making it do something which is different than the original settings, and being stuck with some jerry-rigged and inflexible set of assumptions. At least sketch out in comments what the better solution was supposed to be and where you cut the corners!
I've had the same experience in my career. I really do wonder how much of YAGNI is only learned through experience. Earlier in my career, I wanted everything to be more perfect, and I wanted to make sure I fully designed for all eventualities in my code. Nowadays, I know I can program myself out of a bad situation if it arises, so I don't try to cover all my bases, but I do make sure I'm aware of what could go wrong.
If I make a note to "fix it later", it just means: this could be bad, but might not be, so I might fix it, or I might not. Whenever I make such a decision, it should be obvious how bad it could really be if it blows up, what the ramifications are, and how easy it would be to fix or detect. With experience, you can play a bit more fast and loose with what you decide to do or not do, but it requires a lot of other skills, such as designing defensively, so that if something does go wrong, it's easily detected and doesn't cause irreversible damage to data.
I'm the antithesis of this. My devs thought that when I said "we'll get to x later" it meant it'd be nice one day. Then they kept getting lost when they'd come back and find it implemented the way I said it would be. They've learned to pay attention.
Agreed that you shouldn't design for changes that aren't extremely likely to happen. Ends up overcomplicating things with abstractions on top of abstractions.
And then when a change does come it often isn't even expected and doesn't fit into the abstractions. But because everything is so convoluted, you either need to make a large code change to fit the new feature or you need to put a hack on top that completely invalidates the abstractions.
There's a saying in Hebrew that goes something like: "There's nothing more permanent than the temporary". Every single large code base that I've ever worked on had an endless number of TODO and FIXME comments.
But I'm with you. If you think what you're doing is ok, don't leave a FIXME comment or plan on it getting fixed down the road. If it's not ok, just don't do it please ;)
It’s always entertaining to watch juniors start out gung ho early in their career until their first major project gets sunset. It’s like they put so much of themselves into it that they can’t bear to do it again. For me, it’s like taking a shit, I don’t mind losing that part of me.
Here's a simple solution: change your comments (and your mindset) from vague "will fix it later" to a more specific "fix once <trigger condition> occurs". Ideally trigger condition is tied to automated alerts & metrics, but that's not strictly necessary.
You'll realize that most of the time trigger condition will not occur in the foreseeable future, or you don't even know what trigger condition is. Latter usually means you're planning to optimize because you want to, not because there's any business value in it. Stop thinking that it needs fixing because there's room for fixing. Businesses are all about tradeoffs, not perfectionism.
E.g. instead of "optimize this query in the future" write "optimize this query when we have customers with 1000+ orders", or "optimize this query when response time goes above 500ms", or even "optimize this query when N+ customers complain".
One other thing which I find very useful is to write tests which will start failing once the <trigger condition> changes. For example if today the database version I'm testing against disallows some query shape I can:
- write a test that verifies the failure
- add a todo comment in code linking to the test to fix the code when test fails
Then eventually the test will start failing, you can fix the code and the test and profit.
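A rough sketch of such a canary test (the function names and version cut-off are all hypothetical): the test passes for as long as the limitation exists, and starts failing the moment a dependency upgrade lifts it, pointing you straight at the TODO.

```python
# Hypothetical: the pinned database version we currently test against.
CURRENT_DB_VERSION = (9, 2)

def supports_lateral_joins(db_version) -> bool:
    # Stand-in for probing the real database; here we assume the
    # capability arrives in version 9.3.
    return db_version >= (9, 3)

def test_lateral_join_still_unsupported():
    # TODO: when this test fails, remove the workaround in query_builder.py
    assert not supports_lateral_joins(CURRENT_DB_VERSION)
```

The inverted assertion is the whole trick: the test documents the limitation, and its failure is the notification that the workaround can go.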
Good point. But that only works if the version number is the thing you want to test. It's also possible that newer versions still include the bug, or that an even newer version introduces a regression that brings it back.
I'm a believer in user-facing testing - i.e. test things the way a user would operate them. So in the context of building a federated query engine, our users won't care what version of Postgres they're connecting to - they will care, however, that the query they wrote doesn't work, whatever the Postgres version.
I’ve tried this, fails in the discovery aspect imho. By the time the trigger condition occurs no one’s checking that comment that’s buried god knows where. You’re at most going to see it when something’s already broken and you’re investigating the causes.
Using Jira cards with “depends on” relations works better if you already have one for the trigger condition, but even then, if it’s months or years away there’s a big chance the trigger condition gets handled in a new duplicate card because no one remembers the old one.
What I usually do whenever I add TODOs and FIXMEs is create a corresponding issue with a short context description of why this came to be this way, what are the potential limits to the current code, and how I think it could be addressed, with permalink (or permalinks) to the line(s) in the repo, and add a specific label (or JIRA epic).
This way it's easy to search for and find all code entries corresponding to a specific case, to jump back in someday (myself or someone else) without too much reverse engineering, and to monitor debt growth by searching on the label (or just looking at the epic).
"By the time the trigger condition occurs no one’s checking that comment that’s buried god knows where. "
You can make the trigger condition actual code and have it send you an email for example, or log to somewhere where you do check (depending on the project).
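For instance (a sketch with assumed names and an assumed threshold, Python just for illustration): put the trigger next to the data it's about, and have it log or alert instead of sitting silently in a comment.

```python
import logging

log = logging.getLogger("todo_triggers")

ORDER_COUNT_TRIGGER = 1000  # assumed threshold from the original TODO

def load_orders(customer_id: str, orders: list) -> list:
    # The "fix it later" condition, checked where the data actually flows.
    if len(orders) > ORDER_COUNT_TRIGGER:
        # Could just as easily send an email or page, depending on the project.
        log.warning(
            "TODO trigger hit: customer %s has %d orders; "
            "time to optimize the orders query",
            customer_id, len(orders),
        )
    return orders
```

The alert fires exactly when the TODO's precondition becomes true, instead of relying on someone re-reading the comment at the right moment.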
That’s better, but many trigger conditions are abstract and not implementable as alerts.
Consider for example “this code will only work under the assumption that all of our clients are US companies”.
Detecting in code that the company is aiming at international expansion soon is probably not possible, because if your code was generic enough to account for different countries you wouldn’t have this problem in the first place.
If you can tie these to automated alerts that's great. But there's no need to automate everything, "Do things that don't scale".
I make these comments searchable (they always start with "# TODO: "). And then once a month (or when you're bored) you can go through these and reevaluate them.
Yes, it's a non-scaleable manual process. And I absolutely don't care because it takes less than 30 minutes each month. No amount of effort invested in automation will ever pay off, so I'm pretty happy to do it manually.
I would love for these TODO comments to automatically create issues in GitHub so that we could discuss them too, but unfortunately all of the GitHub Actions I've tried fail miserably when lines are moved or comments are edited.
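The sweep itself can be as small as a grep; here's a rough Python equivalent (the marker and the `*.py` glob are just my conventions) that collects the entries with file and line number, ready to paste into an issue:

```python
from pathlib import Path

def collect_todos(root: str, marker: str = "# TODO:") -> list:
    # Walk the tree and gather every marked line as "path:lineno: text".
    found = []
    for path in sorted(Path(root).rglob("*.py")):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if marker in line:
                found.append(f"{path}:{lineno}: {line.strip()}")
    return found
```

`grep -rn "# TODO:" .` does the same job; the only real requirement is that the marker stays consistent enough to find.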
"Detecting in code that the company is aiming at international expansion soon is probably not possible"
It somewhat is, if you also keep some general flags for this, like a "US_Only" variable you have to keep track of - but yes, there are of course limitations to that approach. If you do it wrong, it just gives false security that blows up at the wrong time.
That's extra effort too, might as well fix it on the spot.
Also you might not have the data to trigger it in the first place. If your TODO is "optimize caching of this data", it's hard to see the overall system hit ratio to know when it might be worth optimizing.
There are also dependency todos, "new version of the thing has that feature and this needs to be changed once we upgrade to it"
"That's extra effort too, might as well fix it on the spot."
It depends on the problem. Some things you do not just "fix on the spot", they take longer and sometimes you do have other, more urgent problems. Those are times, when I do something like this.
Occasionally I have had luck by embedding assertions in C code that fail when the trigger conditions are met… but mostly that’s not practical. I’ve never had any luck with the dependent ticket approach, those tickets always end up lost forever.
I've experienced similar things, not at work but in life, with trivial tasks. I often told myself I'd do something sometime later, and never did. But when I pick a specific time to do it, I almost always follow through. So I think if you set an exact time, the odds go way up.
My workflow is such that if something isn’t broken but I know it is somehow an issue (clarity, style, etc.) then I just save them in a notes file. Eventually a few weeks later I collect up enough of these small issues and then resolve them in bulk. I do this for CR notes when people write non blocker comments I say I will fix in a follow up.
My coworkers take the same view as the author in that they think you’ll never “fix it later” once the code is committed nobody will ever get back to it. But I prefer doing it this way because the overhead of constantly going in to correct my code for small unimportant things is a waste of time I would rather resolve them all in bulk.
Absolutely. My TODOs are notes to the future, when you come here to fix the fact that a specific situation isn't dealt with correctly, do note the following corner cases which is what made this difficult in the first place.
It's not necessarily a specific item that could be worked on right now. It may only be possible when other parts of the program have been restructured, or there is information available to a specific piece of the program which isn't there yet.
Have you ever gotten in trouble for not working on "ticket" items? In some places, managers will jerk your leash for working on "whatever you want".
I make it a point in our planning sessions to reserve a task here and there for things like this.
Or we explicitly plan a task to be done in a quick-and-dirty way, acknowledging we'll need another task to clean it up later.
Works great for me
At a previous startup we had “bug bash Fridays” where we would either let the devs find bug tickets in Jira that they wanted to fix or could self report problems that they had noticed. While most devs don’t seem to enjoy fixing bugs, this was shockingly well received and good for morale.
There are a lot of approaches to this. The way I'd approach something like this is to find something fairly trivial and just do it, then ask what should be done with it. I've found that's a good way to start a conversation about useful but untracked changes and how to get them done "the right way".
I think the main reason why managers don't devote resources to fixing smaller tech-debt issues is that they can't correctly measure the value of such fixes or reason about them.
As an extreme example, the task of "Update the SSL certificate" is a low priority item that doesn't deliver any value, right up until about an hour before the certificate expires, at which point it suddenly becomes the highest priority task that everyone needs to drop all their other tasks for.
A slightly more realistic example would be the task of automating the certificate update process, or writing tests for that process, both of which might fail a short-sighted manager's heuristic of asking "Are there more important tasks right now?".
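As an illustration of making that task measurable (a sketch, assuming the `notAfter` date format Python's `ssl` module returns for certificates), the "is it urgent yet?" question becomes a number anyone can put on a dashboard or wire to an alert:

```python
from datetime import datetime, timezone

CERT_DATE_FORMAT = "%b %d %H:%M:%S %Y %Z"  # e.g. "Jun  1 12:00:00 2030 GMT"

def days_until_expiry(not_after: str, now: datetime = None) -> int:
    # Parse the certificate's notAfter field and count whole days left.
    expiry = datetime.strptime(not_after, CERT_DATE_FORMAT).replace(
        tzinfo=timezone.utc
    )
    now = now or datetime.now(timezone.utc)
    return (expiry - now).days

def needs_renewal(not_after: str, threshold_days: int = 30) -> bool:
    # Assumed threshold: raise the alarm a month out, not an hour out.
    return days_until_expiry(not_after) < threshold_days
```

A check like this, run on a schedule, is what turns "update the SSL certificate" from a silent time bomb into an ordinary low-priority ticket that files itself at the right moment.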
(Then, when something inevitably goes wrong due to the lack of automation or lack of testing, the manager blames whoever was supposed to be responsible for manually updating it, or blames the person who wrote the automation code but wasn't given the time to test it, while telling their own boss "There was nothing I could have done to prevent this!").
So, to align everyone's incentives, and make the problem less abstract, I propose the "jelly bean method". What this involves is having a jar of jelly beans, and whenever a manager tells you to ignore some technical debt to work on a "ticket" item instead, you should say "Sure thing, boss. That'll cost you 2 jelly beans". Then you take your jelly beans, and go work on the ticket, ignoring the tech-debt. However, when the next ticket estimation session happens, you bring with you the jar of jelly beans, and you weigh how full the jar now is, and use that as an inverse scale factor for all your estimates. For example, if the jar is half full, then you double all your estimates, and if it's a third full, then triple them.
Of course this isn't a scientifically rigorous methodology, and it won't make the estimates more accurate, but it should have the effect of making the manager and the team more aware of the long-term costs they are adding to the development process, which should make people less hasty to add tech-debt without accounting for it in some way. The system even allows for giving developers days or sprints where they can pay off the tech-debt: you just have to buy more jelly beans to fill up the jar a little. And even if the system doesn't have the desired effect and the level of tech-debt spirals out of control, at least you get to eat a few delicious jelly beans to take your mind off it.
Also, absence of problems isn’t a metric that leads to recognition from leadership (unless you’re inheriting a problem with the explicit mandate of fixing it). How do you attribute cause to the absence of problems? Would they have never existed anyways (in which case the opportunity cost was a bad decision)? Or do they not exist because of your efforts? It’s unfortunate, but the system doesn’t incentivize problem avoidance; only problem solving.
If you don’t fix it… and the code is working… what is the problem?
I leave notes like that sometimes as indicators of intention… or invitations to the next reader… but if it never changes and is not causing issues… that’s fine, leave it as is.
Your code is like a home. If your home is messy, it changes how you feel in the space. If my home is messy I feel weirdly ineffective and sloppy. Tidying up makes me feel empowered and capable.
Don't leave mess alone. Tidy it up. A codebase you actively nurture will make you feel like change is easy. It's well worth the time investment.
On the other hand, if you spend all your time tidying and cleaning, you never get anything done.
There is such a thing as "too tidy", and while everyone disagrees on where the point is, companies tend to be very good at forcing that point through constrained resources.
I don’t think many projects are at much risk of that. Most professional code I’ve seen is a barely functional mess. There is such a thing as “too tidy”. But I can count the number of times I’ve seen it in my career on one hand.
I cannot count on one hand the number of code review comments about missing periods at the end of comment lines. If you include comparably worthless comments, I could use binary and would still run out of fingers...
Ah that’s fair. I have nothing but contempt for those sort of comments too. They feel like a sort of petty tyranny of small scale thinking. Good, clean software usually needs to go through a chaotic messy period to find its identity - and find what the core abstractions really are. Like, first get the big details wrong, then iterate until you get the big details right enough. Then focus on the medium details, then the small details. Then release. Doing that process out of order is at best a waste of time, and at worst a massive distraction. Our attention is our most precious commodity as programmers. Wasting it inappropriately is criminal.
If you want to move deck chairs around on the titanic, at least have the common decency not to drag the rest of us into your orbit of BS.
And the best way to manage your home is to separate the tasks into tidying, organising and cleaning. 3 separate and distinct actions. The same applies to code and projects.
Maybe I understand it wrong, but I can't agree with this analogy as is...
An organized home means to do things right away. If the laundry is done and dried, then I'll put it away (organize it) immediately. I don't wait for other things (e.g. the dishwasher) that need to be tidied up. If I service my bicycle in one room and leave a mess, then I'll clean it up right away and don't necessarily include other rooms in that cleaning cycle. But on the other hand, if I spot some dust in one room, I'll just grab the hoover and go through the whole flat.
Actions can be separate, and so can scope and effort.
I feel a house is best managed when effort and scope are low, so it is in my best interest to keep it that way. If my task becomes the action to tidy my home, then I know something went wrong.
> If you don’t fix it… and the code is working… what is the problem?
Because most of the time your hack only works by accident, or doesn't handle expected corner cases, or leaves out important use cases, or is a ticking time bomb.
If you’re regularly encountering or writing code that only works by accident or doesn’t handle important use cases, it isn’t the fault of a TODO. A TODO should document something that has been considered and would potentially be an improvement, but is not currently necessary. It isn’t supposed to be a flag that says “warning: I’m checking in broken code.” If the accompanying code is actually deficient, reject the PR.
> If you’re regularly encountering or writing code that only works by accident or doesn’t handle important use cases, it isn’t the fault of a TODO.
You're confusing a symptom with the root cause.
TODO items are symptoms of a problem, one which can and does often manifest in code that is brittle and only works by coincidence.
> A TODO should document something that has been considered and would potentially be an improvement, but is not currently necessary.
Not really. A TODO is just a comment someone decided to add because at the moment they spotted something that might require some work, but couldn't be bothered to track it as a work item. Just that, nothing more.
> It isn’t supposed to be a flag that says “warning: I’m checking in broken code.”
Except it actually is supposed to be a flag that states "yeah I know this is broken code but trust me I'll get around to really fix it at some time between not now and never"
I've lost count of the age-old TODO items I stumble upon on projects I've worked on that served no purpose other than either stating "I think this needs fixing but I can't be bothered to track work items in a ticket" or "I just need my PR to go through, trust me bro I'll do this right once I remember to fix my shit"
This is definitely not the way teams I have been on have used TODOs. It better not actually be broken code. It means “this works but I have an improvement in mind that doesn’t need to be done now.” There is no way I or anyone on my team is getting away with TODOing code that is literally broken.
I’m now legitimately frightened about what some people are apparently labeling TODO. If that’s the way you use it, no wonder you’ve come to the conclusion it’s terrible.
JetBrains IDEs actually highlight two different types of comments:
// TODO
// FIXME
I think it's nice to have the distinction, with the former being a general suggestion, whereas the latter is perhaps more urgent.
People's opinions will differ greatly regardless. Some might set up their static code analysis to complain about every single TODO item and actively manage them in their issue-tracking system; others just won't care.
> If you don’t fix it… and the code is working… what is the problem?
Two obvious problems: (1) more likely future bugs; (2) wasted time
Often "fix it later" means "de-duplicate this copy and pasted code later", where the copy is slightly different so there's some refactoring required to do that.
The future bugs come from changing one copy but forgetting to change the other. Sounds unlikely but very common in practice.
The wasted time comes from having to redundantly update both (or several!) copies. That's more work than it sounds, because you have to understand the subtle differences in each, whereas if you had refactored them into one, the differences would be captured in some obvious function/class/etc. Of course, each individual update to all the copies is less effort than doing the refactoring, so the wasted time builds up gradually.
> If you don’t fix it… and the code is working… what is the problem?
It may be very inefficient, for example. Works now, but may unintuitively slow down when dealing with workloads that looked very far-fetched at the time.
I have inherited a really nice C++ codebase from the 1980s that contains warnings for the Year 2038 problem and all sorts of other potential issues, and I can't thank those engineers enough. So many times I've dug into a problem or started implementing a new feature only to discover that someone had already thought of it and left useful pointers in comments.
Whatever version control or note-taking applications they might've used instead, I doubt I could even run or open their files 30+ years later.
This is why I set up linting rules to error when people push "TODO" comments.
Either just do it, or put a work item in the place where you actually track work (Jira or equivalent) so it has at least a chance of getting done. Otherwise you'll end up with the typical 5-to-10 year old TODO comment from a completely different era that helps nobody.
If you're going to leave a comment, leave context around why the hacky solution was chosen at the time of writing, and why you didn't do the "correct" alternative.
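A minimal sketch of such a lint rule, in Python. This is hypothetical, not anyone's real setup: the `PROJ-123`-style ticket pattern is an assumed convention, and you'd wire the function into a pre-push hook or CI step that exits nonzero when it finds anything.

```python
import re

# Flag TODO/FIXME comments that carry no ticket reference.
# The ticket-id pattern below (e.g. PROJ-1234) is an assumption.
TODO_RE = re.compile(r"\b(TODO|FIXME)\b(?P<rest>.*)")
TICKET_RE = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def untracked_todos(text: str) -> list[tuple[int, str]]:
    """Return (line_number, stripped_line) for TODOs without a ticket id."""
    bad = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        m = TODO_RE.search(line)
        if m and not TICKET_RE.search(m.group("rest")):
            bad.append((lineno, line.strip()))
    return bad
```

A `TODO(PROJ-42): handle bars` comment passes; a bare `TODO clean this up` fails the check.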
This is a good way to lose valuable knowledge. People will just commit without the TODO. In fact, many won't even have thought of the TODO in the first place, because they weren't aware of a potential pitfall. Be happy if you get a TODO; if you don't, the code will still work, just without the warning.
I find TODOs useful even if the fix is never implemented, as long as there's a proper explanation for the TODO, especially when I'm reading the code: it explains why something expected isn't happening. It was simply not implemented, as opposed to being a bug.
The codebase I worked on for 12 years (before I quit recently) had some TODOs in the codebase that were put there before I was hired. They didn't harm anyone and they were interesting to see the direction that the people who originally wrote the code thought it should go.
IMO, a commit without a TODO is very very close to a commit with an untracked TODO. If somebody decides that the issue isn't worth filing a ticket then okay. If somebody isn't willing to file a ticket then it isn't like they were likely to write a detailed comment explaining all of the relevant context. "TODO - clean this up" is pure noise.
A loose TODO lying around the codebase almost certainly won't actually get acted on and there is no opportunity to discuss or prioritize it.
It really isn't. If you have anything relevant to add regarding your hack, post it in the ticket. That's the first thing that will be read when someone picks it up. Otherwise you're just polluting the code with good intentions.
What happens when the company decides to migrate from JIRA to something else? It wouldn’t take any special effort to migrate the TODO but in my experience the old JIRA is going to be stale and abandoned entirely in a year. No one will even remember or think to look back at it, and new hires won’t even know it exists.
> What happens when the company decides to migrate from JIRA to something else?
Aren't you grasping at straws? How many times do you believe a project changes its ticketing system? So far I've seen that happen a grand total of zero times.
Meanwhile, I've repeatedly worked on legacy projects which have decades-old TODO/FIXIT items, which serve no purpose other than being noise and serving as topic in water-cooler shit chat.
I think we're on our 4th or 5th ticketing system over 15 years. (We're also on our 3rd revision control system, but the code and comments obviously migrated seamlessly across both of those transitions.)
Open Office changed ticketing systems three times. The first time I copied over a few dozen of my most important bugs, the second time I copied over only the absolute most important bugs, and the last time I did nothing.
Nobody was reading, triaging, or fixing them anyway.
Ticketing systems tend to increase in price over time. I know one nameless company looking to migrate after their current system increased in price. Open source isn't any cheaper, you still have to pay admins.
Though one other reason to migrate is the forms are complex and full of required fields nobody understands. Migration will get you past that to what matters, but only for a few years before the groups that required those fields in the first place come back and demand it with the same good but forgotten reasoning as the first time. If this is you, get your act together.
>> This is a good way to lose valuable knowledge.
> It really isn't.
Tickets, internal emails, etc. are exactly the kind of things that tend to get lost when the codebase is sold, sublicensed, or open-sourced.
If the codebase contains underwater cliffs, then those remarks are best kept as close to the code as possible, so that every time someone works with a particular class or function, they have right before their eyes that "this function will slow to a crawl when you have more than 32 767 files concurrently open". It's very expensive to discover such limitations after the fact.
You may very well call such limitations "hacked-together code", but if there has so far never been a need to have more than 10 files concurrently open (though there is a slim chance that such a need may arise one day), then implementing support for an unlimited number of concurrently open files is, once again, just a waste of resources.
I see no reason why ticket system should be cluttered with hypotheticals like this.
Okay. What percentage of codebases have this happen? How often does this happen unexpectedly in a way that you care? Is any purchaser going to say "oh - but all of your TODOs are in tickets rather than comments so I won't buy it"? Is there no way of reflecting the internal tickets to github issues or whatever you are using to track the open sourced version of your codebase?
Often when I write a TODO, it's because I'm not going to remember to write it down anywhere else. I'm certainly not opening JIRA for it, so this is as good as it's going to get.
Every once in a while we grep through the TODOs and see if we should write some tickets from them. Works fine.
That's the universe telling you the TODO is useless noise pointing to a work item that no one, not even you, finds relevant enough to track or work on.
When that happens, do everyone around you a favour and leave out the TODO comment.
Nothing is important enough to be worth using JIRA for. When a company or team adopts JIRA it's the universe telling you that they've given up on doing anything valuable or useful. Trying to make such a codebase better is an exercise in futility; writing TODOs is as good a way to pass the time and collect a paycheck as any other, and any vestigial useful work to be done is more likely to be tracked in TODOs than in JIRA tickets.
This is a great way of introducing a bug. The TODO is only visible to somebody looking at the code. If somebody changes a caller to send over bars but doesn't check the comment then you've got a bug. If instead this was linked to the feature request to add support for bars then you've got the relevant context right there on the ticket and whoever is implementing support for bars is much less likely to miss it.
I write TODO for something that would be too disruptive to do as part of the current change but nice to have done. E.g. when refactoring a bunch of files, one of them should be renamed, but that would add every file that refers to it to the change and make it harder to see the real work done. Next time I'm in that code with a simple change, I can submit a rename CR that doesn't have anything else mixed in.
> I write TODO for something that would be too disruptive to do as part of the current change but nice to have done.
One more reason to not have a TODO item and instead track work on a ticket. Sneaking fixes/changes as part of other tickets mixes up the rationale for tickets and makes changes harder to track. If all you're doing is a cleanup then all the more reason why it should be in an independent PR tracked accordingly.
Where it will never be picked up, because rework tickets never get prioritized over feature work until something breaks.
And when some future engineer comes along and asks themselves why the hell this code is the way it is, they'll have to git-blame, dig through commits, find PRs for commits, hopefully find referenced ticket numbers, read those, and play archeologist to try to reconstruct whether there's good reason for the existing code's being from those indirect signals. Whereas a simple "TODO" could just tell said future engineer what they ought to know.
I sincerely appreciate a well-written TODO, and I make every effort to write good TODOs for others.
There’s definitely lost linkage of knowledge if you can’t reference the ticket from your editor, which for many if not most workflows means you never can. The loss is that an unadorned hack may have a corresponding ticket, with no way of knowing there’s even anything to look for. The workflow challenge is chicken-egg: people seldom file a ticket on proposed, unmerged changes, and most teams would balk at the concept without specific procedures in place; people definitely don’t go back after a change is merged to annotate a hack with whatever ticket was filed for posterity.
IME, a better solution is keeping the TODOs (or FIXMEs or whatever your preferred label[s]), with linter rules to require aging them so they must be addressed eventually, somehow. Even if you address them by removing them. At least then there’s some possibility of relinking them later, tied directly to the commit history.
I agree with your point about polluting the code with good intentions, however. And I agree with the article author’s point that most of the time you’ll not go back and fix it. Those points combined suggest that most TODOs should actually just be explanatory comments. In fact, as someone who writes very few code comments, I think a good heuristic for when to write them is my usual “does someone need this explained?” (either by my anticipation or by their direct questioning) plus “would I be inclined to write a TODO about this?”
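The "aging" linter idea above could be sketched roughly like this, assuming a dated `TODO(YYYY-MM-DD)` convention. Both the format and the 180-day budget are invented for illustration, not a standard.

```python
import datetime as dt
import re

# TODOs carry the date they were written; the check fails once one
# exceeds its age budget, forcing it to be addressed (or removed).
DATED_TODO = re.compile(r"\bTODO\((\d{4})-(\d{2})-(\d{2})\)")

def expired_todos(text: str, today: dt.date, max_age_days: int = 180):
    """Yield (line_number, age_in_days) for TODOs past their budget."""
    for lineno, line in enumerate(text.splitlines(), start=1):
        m = DATED_TODO.search(line)
        if m:
            written = dt.date(int(m.group(1)), int(m.group(2)), int(m.group(3)))
            age = (today - written).days
            if age > max_age_days:
                yield lineno, age
```

Run over the repo in CI, this guarantees every TODO is eventually revisited, even if the resolution is deleting it.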
I agree with the OP: you have immediately lost the linkage between the lines of code and the problem description. Maybe you put a reference to the lines/module in the ticket, but really, why bother?
You could do "TODO fix the foobar because flange, see Ticket 1234" (and I have a todoinator to automate that), but I think a ticket should be more meaty than a TODO. Perhaps I am fooling myself.
You’re absolutely fooling yourself - a ticket is a unit of work. Trying to hide tech debt by keeping it out of the project management system is not beneficial.
The code base is the project management system, and the dialogue between developers is the way code is developed. The project management system is a pale copy of the real system, used to keep non-literate people happy and make them feel they have some input.
Progress is measured by working software not closed tickets.
No one's trying to hide anything. The target users of project management systems, product owners and anyone with a project manager title, typically neither care about nor understand the value of rework. In their view, the dev teams should be creating maximum value at all times, which they understand as either adding features or putting out fires. Rework tickets do not get prioritized until they're identified as the cause of lost value.
The inevitable question is "well why wasn't it written correctly to begin with?" even if the reason, as is typical, was pressure from product managers in the first place.
> You could do "TODO fix the foobar because flange see Ticket 1234"
No, you should create Ticket 1235 - fix foobar, add additional info such as rationale and the definition of done, and add Ticket 1234 as related/blocks.
Excellent point, and we need ways to sync the two, but based in code. I am tempted to try to build something that uses the module, line number, and a hash of the comment to build a unique reference for the ticket to point back to, but I wonder about code movement and have never done it.
And that ticket will be forever ignored. On the other hand, if there’s a TODO in the code, someone might fix it the next time they’re editing that part of the code.
A team that spends all its time grooming the backlog is not spending time reading code, logs, inputs, outputs, data, etc.
We have fooled ourselves into thinking ticketing systems represent living documents, but they don't. A high-functioning team can have "hey, I will fix the foobar by adding a cromulent adjutant" in an office (!) call or even an email.
A low-functioning team knows no one will get around to fixing it.
Your team should be grooming and prioritizing the backlog. If the tech debt exists but the business doesn’t value working on it, then yes it won’t get worked. But it also wouldn’t ever get done as some stray //TODO either.
Unless you're in a Scrum team and "The Product Owner is responsible for the Product Backlog, including its content, availability, and ordering." is taken to mean that the PO has the final say on what's prioritized. If your PO has a project manager title, as most do, and they see their career advancement is tied to shipping features on or ahead of schedule, then indeed, that ticket will be forever ignored.
Then don't make it "stray": parse the code for TODOs and other lintable items. Yes, some grooming is needed, but don't make the ticketing system drive the development; that's the tail wagging the dog, or looking at the receipt at the restaurant and thinking that's what caused the chef to cook the meal.
> If you have anything relevant to add regarding your hack, post it in the ticket.
Presumably your integration will automatically create a ticket from the TODO. There is already a linting automation, per the original comment. No reason to stop there.
> Presumably your integration will automatically create a ticket from the TODO.
Creating a ticket to track a work item is, indeed, how work items get tracked.
The process is also trivial: it takes zero effort to filter out TODO items, and only a button click to create the ticket.
There is no salvageable excuse for this nonsense. Work items are created in tickets. TODO items just track copouts and noise.
The TODO serves as a ticket. If you have to interface with non-developers who can't function without pretty UIs then your automation can duplicate the information into a ticketing system, but otherwise a TODO is all you need; right in the place you want it.
If it is going to be one of those things you never intended to fix, then feel free to not mention it anywhere. Not even a ticketing system benefits from tickets you don't intend to fix. Something you don't intend to fix is noise anywhere it ends up.
A few years ago I used to have an IDE template that was like
// TODO ($username $date)
It was great and a central part of my work flow. I'd just leave them in the code whenever I saw something that needed fixing or could be improved, and every once in a while I'd just grep for this string and get my todos.
Some stuff was urgent, some was to be fixed later. Some never got fixed, but because it was all timestamped, it was easy to identify and clear those out. Also saved my coworkers the git annotate archeology to figure out who wrote the TODO and if they wanted additional context.
This flow felt extremely productive. Being able to just drop these when you think of something reduces the context switching. It also becomes an implicit bookmark as to the code the remark applies to (unlike jiras that are typically not well integrated into the code). Creating a jira is either an expensive context switch, or something you postpone and typically forget when you're done with the task and even if you don't, you've wasted a lot of cognitive energy keeping this in the back of your head all day.
Moved to a project that was incompatible with the IDE I was using. The new IDE lacked solid support for ad hoc named templates and provided no suitable tools for monitoring grep patterns.
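The grep-and-triage workflow described above can be approximated in a few lines of Python. The `// TODO (user date)` shape mirrors the IDE template from the comment; everything else here is an assumption, not a real tool.

```python
import re
from collections import defaultdict

# Match comments produced by an IDE template like: // TODO (alice 2024-05-01) note
TEMPLATE = re.compile(
    r"//\s*TODO\s*\((?P<user>\w+)\s+(?P<date>\d{4}-\d{2}-\d{2})\)\s*(?P<note>.*)"
)

def todos_by_author(text: str) -> dict[str, list[tuple[str, str]]]:
    """Group TODO notes by author, keeping their dates for later triage."""
    grouped = defaultdict(list)
    for line in text.splitlines():
        m = TEMPLATE.search(line)
        if m:
            grouped[m.group("user")].append((m.group("date"), m.group("note")))
    return dict(grouped)
```

Because author and date are embedded in the comment itself, clearing out stale items or finding your own list needs no git-blame archaeology.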
I like both at the same time. A TODO and an issue, and the Todo has a link to the issue, and the issue had a permalink to the TODO line. Sometimes the TODO gets done during a refactor or otherwise, so having the comment with a link ensures the corresponding issue gets closed.
Yeah, this is how most of the teams I've worked on have done things. `TODO` is only allowed when accompanied by a link to a ticket, and people are expected to (and do!) enforce this when code reviewing.
It loses too much context in Jira. Could stay permanently beyond the Jira event horizon (since more tickets flow into the Jira than flow out). Or it could get culled in a grooming session ("Haven't seen the bug happen in a while. If someone complains we'll reopen").
Some things can’t be done right now. Maybe the implementation depends on something else that isn’t done yet (perhaps by someone else). And maybe I know exactly where I need to do the work. Then a todo in the comment with a ticket id is better than just a ticket with a completely informal hyperlink to the place in the code where the comment would have been.
Hmmm, I can see why you're doing this. But if you don't trust TODO comments to be useful and/or addressed in a timely manner then having a bunch of Jira tickets won't help you either.
I've had the exact same experience you describe with both tickets and comments. Ancient tickets, somewhere deep in the backlog, with cryptic descriptions. Or even better, high-prio bugs that never get picked up by the teams. Same with TODOs and FIXMEs.
In both cases it's probably a good idea to just remove the perceived issue after a while. Backlog tickets and todo comments are not like wine, they don't age well. IMO they behave more like compost heaps. :-)
This is why I use FIXME exclusively and always include a constructive description of what needs to be fixed. I sweep through the code before a commit to resolve easy FIXMEs and minimize what ends up in the repository.
I actually wrote a "todoinator" that parses TODO comments out of code and presents them to you as a snagging list. We should definitely keep the comments in the code; ideally everything is in code, not JIRA, where it really just gets lost or fought over by non-coders (treat anything that is not code as an "artist's impression" of a future state).
I like the suggestion to convert a TODO comment into a JIRA action, or supplement the former with the latter. This permits paying attention only to the TODOs, and to discuss them and to prioritize them without first having to run a monster grep job.
This article seems to imply that yak shaving is better. I know it's not saying that explicitly, but having all functions implemented in an optimal way before shipping requires some measure of yak shaving.
I would rather capture the assumptions the code operates under, so that when one of the assumptions is violated, the build pipeline throws an error.
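One way to do that is to write the assumptions as executable checks, so a violated assumption fails loudly with context instead of degrading silently. This is a sketch; `MAX_OPEN_FILES`, the limit, and the file name mentioned in the message are all invented for illustration.

```python
# Capture a design assumption as a check the build/test pipeline can run.
MAX_OPEN_FILES = 10  # what the current implementation was designed for

def check_assumptions(expected_open_files: int) -> None:
    """Fail with an explanation instead of slowing to a crawl silently."""
    if expected_open_files > MAX_OPEN_FILES:
        raise AssertionError(
            f"Design assumption violated: built for <= {MAX_OPEN_FILES} "
            f"concurrently open files, got {expected_open_files}. "
            "See the accompanying comment before raising this limit."
        )
```

A test calling `check_assumptions` with realistic workload numbers turns the silent assumption into a red build the moment it stops holding.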
My go-to rule in engineering management is "we'll fix it later". It helps with my own anxiety about not having the best possible technical implementation, and it helps ensure my teams don't feel totally stressed out about having everything perfect. If it works, we ship it. If it breaks, we iterate on it and fix it. The key here is to make time for things to be fixed. That is my second rule in engineering management.
Give yourself and your coworkers some grace. Most of us are not writing software that has to be 100% perfect otherwise someone dies.
So we're not lying to ourselves here. Most of the time we don't have to fix it later because it is good enough to get the job done. This mantra also helps when working with others that do things differently than you would have done. Does their implementation work? Yes. If it breaks or isn't 100% the best solution right now, we'll fix it later.
Also, saying "bad code" is highly subjective. I wouldn't call any code that is in production inherently bad. It just needs to be fixed later. :)
This is absolutely not the case for me and seems more individual to the person than a general fact.
Since as long as I can remember I have added #TODO: notes throughout my code / scripts with comments on what could be improved.
Have I fixed all of them? Of course not. Have I fixed or replaced most of them? Yes!
These days I have some automation that checks through my code repos and logs issues for any #TODO: comments with the link to the line which helps me keep track of them. It works well.
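A rough sketch of that kind of automation, assuming the GitHub CLI (`gh issue create`) as the tracker. It only builds the command lines rather than running them; the `#TODO:` format matches the comment above, but the repo layout and link format are assumptions.

```python
import re

# Turn #TODO: comments into issue-creation commands, with a link back
# to the exact line so the tracker entry stays tied to the code.
TODO = re.compile(r"#TODO:\s*(?P<note>.+)")

def issue_commands(path: str, text: str, repo_url: str) -> list[list[str]]:
    """Build one `gh issue create` invocation per #TODO: comment."""
    cmds = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        m = TODO.search(line)
        if m:
            link = f"{repo_url}/blob/main/{path}#L{lineno}"
            cmds.append([
                "gh", "issue", "create",
                "--title", f"TODO: {m.group('note')}",
                "--body", f"Found at {link}",
            ])
    return cmds
```

In a real pipeline you would also de-duplicate against existing issues before running the commands, otherwise every run files the same TODOs again.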
I know of people that aren't interested in fixing their code after the fact - generally in my experience they're either: A) Not motivated / interested in or energised by the idea of optimisation/performance. B) Not "allowed" to work on non-features (by a product owner / managers etc...). C) Simply forget about it due to lack of a tracking system or related exposure.
Using an absolute like "never" is surely a good way to generate conversation. That said, it's probably more likely you won't go back and fix it, but I have plenty of recent examples where I have committed myself to going back and refactoring things which have been and would likely continue to work for some time, if not "indefinitely" (read: for the life of usage). Depending on what it is, sometimes the mental burden of working in a system full of corner-cutting decisions is worse than the burden of rewriting it.
Or, perhaps, I get a large hit of dopamine from refactoring poorly-thought-out systems.
"Fix it later", "TBD", "TODO" code should be treated as non-foundational scaffolding. Sometimes you've got to erect a temporary structure in order to get the ball rolling on building the rest of the house. Just remember to tear it down at the end.
I mean, it’s usually not something that sees a lot of change. Do you really need to go fix it if it’s been fine for 2 years?
If something isn’t changing often and thus not looked at often, I’d argue that maybe that effort is best spent on more core parts of the system.
Yeah it’s a bit sad that the logging service isn’t running at max capacity but no one really cares. It’s fine
> “That’s a hacky way of doing this, but I don’t have time today to come up with a better implementation”.
> It got me thinking about when this “hack” might be fixed. I could recall many times when I, or my colleagues, shipped code that we were not completely happy with (from a maintainability / quality / cleanliness aspect, sub-par functionality, inferior user experience etc.). On the other hand, I could recall far far fewer times where we went back and fixed those things.
It really comes down to understanding: understanding the implementation's choice of abstractions, the domains being handled, and the concepts within those domains that the implementation should really be abstracting. When someone eventually figures this out piece by piece, that's when these things can be cleaned up neatly. In the meantime, without deep understanding, we can sometimes still do mechanical refactoring to simplify the situation. It's okay not to understand all this when you ship a temporary hack to production. What's not okay, IMO, is accepting that it will never get figured out; perhaps it will be by someone else, after you're gone, but we should still try.
We tend to overestimate our Future Selves: They will have more time, they will be less tired and more motivated and generally be a better version of our current selves. This is why we trust them to implement the New Year Resolutions we've come up with.
However, the opposite is true - and the article touches some aspects of this. We're under similar constraints, and additionally, we have forgotten a lot of what we had in mind while writing the code, having to re-learn it before we start working on the fix.
What makes things better (besides "fixing it now", of course) is being nice to our Future Self and leaving them with enough context to get started again on the issue should they find the time to do it. This also applies to future co-workers, of course.
The opposite is also true: We tend to underestimate the abilities of previous authors of a specific piece of code (sometimes including ourselves) because we do not see the constraints they have been under while producing it.
I dislike this attitude, as far as I'm concerned it just helps perpetuate the environments that don't allow engineers to actually fix things.
It's related to the mindset I've seen that prototyping is a waste of time. Out of fear that you get trapped with it. Instead there is this expectation that you just design something close to perfection without any prior attempts.
I think this is perfectly natural. Some ideas never manifest or are never convincing enough to publish, and sometimes you write code that turns out not to be used; why produce production-ready code in that case? I have a Python package that I slowly developed over 5 years, step by step [1]. Every time I use it, I find many things that I could develop; some I do right then, others I leave for later. I also have a blog [2]; you can see three dates for each blog post:
- the time I first started working on it
- the first time I published it
- the last time it was updated
All of these dates are important. Between start of writing and publishing, easily a whole year can pass. Think of doing things more like a process of chained events, not like a one-stop thing.
Although I sorta-kinda agree with the sentiment, I disagree with the pessimistic conclusion (for a change!). I don't believe the alternatives are "fix it now or never". And we can structure our workflow to deal with this better.
In my current team whenever this comes up, if the hack really needs to be shipped now and fixed later, we create a ticket describing the task in our queue. Periodically we timebox periods of time to go through the backlog and fix these little annoyances.
It's not perfect, but it brings the "never" to a more realistic timeline - at least for the most important fixes.
This is the main disadvantage of remote work IMO. All those little things you think of while writing a solution gets lost in the "I'll do that later" sphere. In comparison, in-person programming in a team provides no excuse. Even if we say we'll do it later, it's not on one person to remember it. The thought gets thrown in the air and caught by someone else on the team. When I'm working alone, the thought gets thrown up but unless I catch it (which I never do), it floats away.
I always think about teams as on a sliding scale from "can rely on informal communication" to "cannot rely on informal communication"
A team of 3 people all in the same office can rely entirely on informal communication. A team of 1000 people distributed throughout the world cannot at all.
I find it very strange that most of the comments here say this is okay. I don't think it is, because it prevents software evolution and impedes maintenance. My comment is mostly about the "hacky" way of implementing something, rather than a missing feature or missed performance target. The latter may indeed never come due, though if you have a classic N+1 select problem that is fine now, it's probably good to tag it in the comments; you'd be surprised how quickly that day arrives.
However I have been burned a number of times by a so-called "quick fix" that looks like a bad idea in code review and then later prevents the evolution of the system in some way. Code that is "hacky" usually leaks abstractions, doesn't compose with anything else and only benefits the author at the time it's written as a "quick win". Plus the original author knew enough to call it a "hack" but did not explain why they could not do it properly - a person that consistently does this is not a person you can trust on your team.
It is probably up for debate where the balance lies (think time value of money, but for technical debt), but the industry is clearly on the wrong side of the scale, and this is a major drag on productivity.
"either fix it now, or be OK with it never being fixed" - this is in fact the decision that is being made, but saying "later" allows everyone to pretend that it isn't, and do what's actually the right thing (accept "good enough" as good enough) instead of letting perfectionism demand that a perfect solution must take priority over everything else.
The article skims over the fact that a lot of "fix later" problems will never have to be fixed. They might be horrible hacks, but then time will show that no other code will depend on it and it works.
The MongoDB example he uses is also peculiar. If it was a "fix later", as I understand the article, they should have replaced Mongo with a relational database very early on. That might also have tanked the company, if they then had to spend their time fighting schemas.
If you commit a "fix it later" hack into the codebase, you need to put a comment in the code and explain what's wrong with it. That'll increase the chances that people at least know about your hack code before they start depending on it.
It's expensive to fix problems that don't need to be fixed. But then again, it can also be an order of magnitude (or two or three!) more expensive to not fix a problem that should have been fixed.
Just felt the need to rant about it now, because I wouldn't do it later :)
2. In most cases, things which never get fixed are good enough "for now"
3. The fixme comment generally means I maintain the flexibility of the planned design, without the complexity/implementation cost. I won't make changes which break a planned fixme.
4. The most common time to fix things is when I need that flexibility.
A good example is if I'm planning to make something modular and pluggable. The fixme means I never tie other code to the specific current implementation, even if there's only one. On the other hand, I don't build out the module loader until I actually need it. 90% of the benefit of modules for 10% of the cost.
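That "flexibility without the loader" trade-off might look like the minimal sketch below: a tiny registry keeps the pluggable seam so callers never bind to the one concrete implementation, while actual plugin discovery stays behind a FIXME until a second implementation exists. All names here are invented for illustration.

```python
# A registry seam: callers go through get_backend(), never the class.
_BACKENDS = {}

def register(name: str):
    """Decorator that records a backend implementation under a name."""
    def wrap(cls):
        _BACKENDS[name] = cls
        return cls
    return wrap

def get_backend(name: str = "default"):
    # FIXME: grow this into real plugin loading (entry points, config,
    # etc.) only when a second backend actually shows up.
    return _BACKENDS[name]()

@register("default")
class InMemoryStore:
    """The one concrete implementation that exists today."""
    def __init__(self):
        self.data = {}
```

The FIXME marks the deliberate gap: the seam is cheap to keep, the loader is deferred until needed.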
This is going to happen. The best way to handle this inevitable occurrence is to ticket it. The reviewer should ask for the ticket and have it linked. The ticket should be ready to go, and the author of the code should have it queued as the "next thing" to get done, whether that's an immediate follow-up or it's picked up in the next sprint so it doesn't get missed.
The code reviewer should also watch that ticket, and be prepared to assist in pushing it through.
If you just leave the task in the code, it will get lost. And, on occasion, the pressure of needing to write a ticket will get someone to just make the fix instead of having to create the ticket.
My experience is the precise opposite. Engineering tickets with no clear business value rot by the hundreds in some corner of the backlog, at best being brought up occasionally as an "I told you so" when the business case does rear its head.
In contrast, people working on the code for other reasons will notice a TODO and be much more likely to bring it up today, as some kind of "if I'm going to make changes in that area, I should also address this TODO, it will just take an extra $days". Or at least, it will guide them to the source of a long-forgotten bug.
My experience matches yours, and I wanted to call that out with more than just an up-vote. The ticketing system is where rework goes to die.
In an engineer-led environment without a project management class calling the shots, maybe I'd feel different, but whoever is ordering the backlog is always going to prioritize features and fires, because that's what drives their own career advancement.
The analogy that the author presents with the disassembled bed frame on the floor also applies to non-software projects. For things that are small to medium sized pieces of physical infrastructure, data cables, electrical wiring, etc, the "we'll fix it later" mentality and normalization-of-sketchy-hack-jobs also applies.
Look at any mid size enterprise rack cabinet of servers, switches, routers and stuff where the cabling is an absolute nightmare with no labels on anything and years of stuff accreted on top of years of previous stuff. It never gets fixed until something catastrophic happens.
- Have a development plan with lists of features/problems to be released/fixed in each version
- Create an issue for each of the TODOs (and link it in the code comments)
- Assign all issues to a particular version milestone
- Actually follow the development plan
If a TODO is not worth fixing, you will find that out during the project budget evaluation stage before you start working on issues. Although your client may decide not to address the TODO, it will not bother you anymore, because the decision will be on them.
There is one more reason why it's unlikely you'll fix the hack in the future: unconsciously you feel (or 'admit') that you can't do it now, otherwise you would. The obstacle might be the lack of a clear picture of how to proceed, a bad association with the topic, or anything else that keeps you from doing it now.
So committing a hack can be seen as a symptom of having a reason not to do it now - this reason very likely will still apply `tomorrow()`.
I think neither extreme is right. Tech debt can be a deliberate tradeoff decision. I also understand that in reality, unless managed, tech debt is rarely considered by non-tech people.
I've written about how to reason about technical debt: https://leadership.garden/tips-on-prioritizing-tech-debt/
It's like that "broken window" anecdote from The Pragmatic Programmer...
A neighborhood starts to suck once there's a broken window no one is fixing. You see the broken window and after a while you just start to think "what a lousy neighborhood". So, if you have a broken window, and you care about it, you should go out of your way to fix it.
Fix the leak in the toilet. Get the bad tooth filled. Get your self-tests running again.
A common issue of having a majority of "feature oriented" engineers on your team is that they block any effort to improve the existing codebase, so while you're cranking features out, the code slowly rots and technical debt creeps in.
The counterpart of this is having a majority of "perfection oriented" engineers, in which you have top notch processes and code, but with endless yak shaving and little product work being done :)
I like how this implies the existence of 'load-bearing todos'.
Not the more famous load-bearing bug, where other components are built expecting the bad behavior, but mini implementations of the needed behavior elsewhere in the system.
If the todo had been done, those mini implementations (probably) wouldn't exist. But now that they do, completing the todo is less necessary, or even actively harmful.
I just bought a SaaS a few months ago which was initially built as an MVP.
It was full of these "I should fix that later" kind of comments - they are in nearly every file...
I now add new features and rewrite parts of the code from time to time but to be honest, some parts of the product should have been built in a completely different way from the beginning.
This makes me wonder how many startups build equally ugly MVPs...
This has a lot of truth in life in general, not just in software. We spend so much of it expecting to fix certain things in our relationships, our physical environment, our society - later. But then you grow older, and you realize you'll have a lot fewer "laters" available for doing anything than you may have imagined.
If your code is sufficiently encapsulated in a way that it's easily deletable and therefore easily replaceable it's perfectly fine to mark it as "fix later".
Now, the two examples in the article, not doing data validation server side and starting with Mongodb are not "fix me later" anyway, they're professional mistakes.
Hmm. I haven't coded that much in a while, however I went back and refactored fixme's all the time, whenever it made sense and I had to build something on top of the poorly structured code that would bake in the defect.
That said, if you're just writing sloppy code out of laziness, you probably won't ever care enough to fix it.
There should be an incentive to fix hacks. If you approve a PR with a hack, there is no incentive to fix it later. Why? The code is already in. But if you don't approve it, and block it until the hack gets fixed, then there is no choice for the author of the hack. Either he fixes it, or his code doesn't get in.
Sometimes you come back to an area to fix a bug or whatever, and it's useful to know that the hack wasn't absolutely necessary, just a matter of time constraints - it may be that moving to the proper solution is the easiest fix for your problem, and you get a bit of extra confidence that it won't break something else.
At the very least, write an explicit unit test for whatever it is. If someone breaks it, at least they'll trip your landmine rather than find out at deploy time or worse. If you set up a potential for improvement, protect the functionality and ensure it successfully becomes the next person's problem, but one that can be identified when the situation arises in the future.
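One way to lay that landmine is a pinning (characterization) test: it documents the hack's current behaviour so any change trips CI instead of production. A sketch with a made-up example (`slugify_hack` and its ASCII-only shortcut are hypothetical):

```python
import unittest

def slugify_hack(title: str) -> str:
    # HACK: ASCII only; no unicode normalisation yet. Fix later.
    return title.lower().replace(" ", "-")

class TestSlugifyHack(unittest.TestCase):
    # Pins the current behaviour of the hack: whoever changes
    # slugify_hack sees this fail in CI, not at deploy time.
    def test_current_behaviour_is_pinned(self):
        self.assertEqual(slugify_hack("Fix It Later"), "fix-it-later")

# Run the pinning test explicitly.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugifyHack)
)
```

The test is deliberately about what the code *does*, not what it *should* do; replacing the hack means consciously updating the test.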
Depends on what kind of work it is - if it's experimental, one time exploration, it's ok to be hacky. If it's a product intended for release and has a long life or intensive compute usage, then it's better to spend the time to make it right. Don't do premature optimisation!
Videogames are a good example. Ship it broken (knowing the testers actually caught all the bugs, but the suits upstairs don't give a fuck).
If the game sells you'll keep a few sad sack developers working on patches for a few months but otherwise move on to the next HYPE PRODUCT cycle.
Generally I see people never fix "it" later, but they often improve the system that contains "it", thereby replacing the crappy thing with something a bit better.
I'm pretty cynical about coding but the article seems overly cynical to me.
I guess that's why the blog is called "useless dev blog" :)
I installed a wall mount TV 3 years ago. The holes in the bracket and stuff didn't line up with the overall wall dimensions nor the studs. It was about 1.5" off center and about 1/8" lower on one end.
I just got around to installing a better bracket so it's centered and level.
Pointing out that the solution is not perfect is fine. It helps future maintainers, even if that's your future self. The worst offenders don't even realize the badness of their implementations, leaving future adventurers to guess at their code's true intentions.
If you have time to fix the hack independent of modifying that code for a specific need, then you should be asking what else you could be working on. "Fix it later" implies resource availability, which is an alien concept for project managers.
$ git log | grep TODO
Fix warning from missing 'alt' on QR code icon, TODO is done
TODOs are done
Handle TODO
Remove old YC TODO
Dynamically list NFTs per TODO
TODO is now done
Comment on the PR that the author must file a bug detailing what has to be fixed later. Only works if you employ processes and practices that ensure bugs get appropriately reviewed and assigned and not lost to /dev/null
Counterpoint: it may be a complete waste of time to "fix it right" and the business may be able to sail on for the next 10 years without ever fixing it, and that may be absolutely fine.
Maintain a prioritised list of things to do and always work on #1 priority. In other words, don’t fix the TODO’s if it is not #1 priority. It’s that simple really.