Dealing with the perfectionism trap as a developer (jebraat.com)
40 points by jebraat on Aug 31, 2022 | hide | past | favorite | 36 comments


One of the biggest contributors, arguably, to the mediocrity of modern software is the rejection of the inner voice that says "you can (and should) do better" as "perfectionism." No offense to the author, but imo this is a platitude that has done far more damage than it has good.

A few quotes to contextualize the thought (ones that I reference when I try to weasel out of doing something properly):

“An important part of making good things—it’s extremely underrated in current day society—is having a critical eye for the things that you’re making and assessing ‘are these things of an appropriate quality, do these live up to the standard of what I’m trying to create?’ That’s why it’s not fun sometimes. But it’s also what enables it to be a peak experience in the end when you succeed.”

- Jonathan Blow

---

“Doing something quickly is not the same as doing it well.”

- John Hegarty

---

“Any suppression is regression.”

- Kapil Gupta

---

“If you walk the path of mastery, you’re going to have to walk it all alone. There is no one who is going to accompany you because no one has been taught that. No one believes that. That’s not in the water and in the food and in the air. You’re basically going to have to breathe a different sort of oxygen.”

- Kapil Gupta


One can perfect the software without perfecting the code.

A bad overall architecture is unmaintainable.

An ugly 30 line pure function that has unit tests and hasn't needed changes in 5 years is just meh. 50 of those ugly pure functions are still not going to break a project. They could be untangled at any time as needed.

50 decent but not great functions are definitely not going to break a project.
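To make that concrete, here is a sketch of the kind of function meant here (a hypothetical example, not from the thread): ugly, but pure and covered by tests, so it can sit untouched for years and be untangled whenever someone cares to.

```python
def parse_duration(s):
    """Parse strings like '1h30m' or '90m' into seconds.
    Ugly, but pure: same input always gives the same output,
    no I/O, no globals -- so it is trivially unit-testable."""
    total, num = 0, ""
    for ch in s.strip().lower():
        if ch.isdigit():
            num += ch
        elif ch == "h" and num:
            total += int(num) * 3600
            num = ""
        elif ch == "m" and num:
            total += int(num) * 60
            num = ""
        elif ch == "s" and num:
            total += int(num)
            num = ""
        else:
            raise ValueError(f"bad duration: {s!r}")
    if num:  # a trailing bare number defaults to seconds
        total += int(num)
    return total

# The unit tests that let this sit unchanged for years:
assert parse_duration("1h30m") == 5400
assert parse_duration("90m") == 5400
assert parse_duration("45") == 45
```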

So much perfectionism is at the level of functions and variable names. Some of it makes things worse, as when people reinvent nontrivial functionality instead of using widely trusted libraries, as if work projects were their personal playground for weekend learning exercises.


On the other hand, most engineers envision "improving things" as refactoring, applying code gymnastics, adopting a more complex architecture, etc., even though I/O is the most common culprit (bad/missing indexes, dumb queries, high cardinality...)


From what I've seen a lack of testability is the problem with most bad architecture. Testability seems to have a high correlation with being understandable.

If gymnastics make the code easier to understand and prove things about, then we should limber up.
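A minimal sketch of that kind of gymnastics (hypothetical code): splitting a pure core out of an I/O shell, which makes the logic both testable and easier to prove things about.

```python
# Hard to test: logic tangled with file I/O.
#
# def summarize(path):
#     with open(path) as f:
#         rows = [line.strip().split(",") for line in f]
#     return sum(float(r[1]) for r in rows) / len(rows)

def mean_of_column(rows, col=1):
    """Pure core: no I/O, so it is easy to test and reason about."""
    values = [float(r[col]) for r in rows]
    return sum(values) / len(values)

def summarize(path):
    """Thin I/O shell around the pure core."""
    with open(path) as f:
        rows = [line.strip().split(",") for line in f]
    return mean_of_column(rows)

# The core can now be tested without touching the filesystem:
assert mean_of_column([["a", "2.0"], ["b", "4.0"]]) == 3.0
```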


Geez, that mastery quote hits Haaaard. I'm in the process of abandoning focus and priorities to make something great. I'm not done, but I have found the process both healing and instructive.

The author is going through the management phase of their career, where they have other stakeholders to contend with (or their own constraints), so regressing is an appropriate response.

https://www.adama-platform.com/2022/07/02/the-path-of-the-mo...


Perfectionism is a pitfall, but agile development has led a lot of developers to prize velocity over correctness. Many seem to believe that any upfront mistake can be fixed through iteration. The truth is that there are things that you can't fix through iteration. Things like having an incorrect data model. Some things need to be perfect up front or you will quickly find yourself boxed in down the road as you try to extend the system. Once you've gotten the right design at the center of your system, understanding not only the current requirements but also likely future requirements as well, scrum away. It's just not OK to blow the "big picture" decisions that you need to do in order to build a piece of software the right way because you want to move fast.


Incorrect is a spectrum, and perfect is not-- there's a balance to be struck. In some projects, having a perfect data model would require having a perfect design, and I don't know if I've seen that happen once (for anything non-trivial) in my entire career. Depending on the environment and requirements, data model changes aren't even necessarily problematic, let alone catastrophic. You might not even know what your complete data model will look like until further along in the project. In other situations, having to make changes to some base layer of interface functionality could end up being the concrete shoes when it comes to developer time sinks.

The impulse to get the instant gratification of cracking into those first few lines of code should never supplant thoughtful design, but I've seen things go really wrong in the opposite direction, too. I reckon it's all about doing a good risk analysis, considering the costs of having to rework something down the road, and not letting your fear of getting that wrong stop progress longer than it needs to.


When designing a complex system with multiple business modules and multiple services, correctness becomes a little more binary than that. Your data model tends to proliferate across anything that interacts with it, so fixing it may require a complete refactor of all of those modules and all of those services. Throw in a process constrained by pesky things like needing to preserve WLB, limited surge capacity, and competing priorities, as well as varying levels of experience on your team, and this rewrite can potentially take multiple years and require maintaining two parallel systems, and yet it must occur because the data model is foundational for everything else. Continuing to build on the wrong data model will keep boxing you in further until it's too late to ever fix it, with every new feature forcing you to fight a fundamental flaw of your system. The resulting workarounds and hacks will slowly strangle your business logic until it's too complex to change with any confidence. It's kind of like building a house on top of quicksand.

When faced with such constraints, it's really, really important to get this right from the beginning.


Sure. I never said that data models needn't ever be perfect from the jump or that having a perfect design from the beginning would always be required. It really depends on the data complexity, structure, and how it's used. The important thing is to really think about it first.


I like this comment so much, I'm tempted to add a quote to the article


> The truth is that there are things that you can't fix through iteration. Things like having an incorrect data model.

Yes! I spent a year iterating on my prototype for my startup for precisely this reason. The data model kept changing and that had repercussions through every other part of the prototype. Often leading to reworking entire components.

The data model is at the core of everything for my prototype, so it was more than worth the time spent.


Indeed, this is precisely what makes getting it right up front so important. As mentioned above, the data model tends to proliferate across anything that interacts with it. The principles of normalization mean that there is usually one natural way to interact with a given conceptual entity, given current and potential future requirements, but thousands of ways to box yourself in. Getting it wrong can mean the wrong assumptions about the data shape propagate across thousands of lines of code, preventing the possibility of ever implementing certain features in a clean, maintainable way.
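A toy illustration of the normalization point (all names hypothetical): when customer fields are copied into every order, a change must hunt down every copy; with a normalized reference, there is one natural place to interact with the entity.

```python
from dataclasses import dataclass

# Denormalized: customer data is copied into every order, so an
# email change means hunting down every copy.
@dataclass
class OrderDenorm:
    order_id: int
    customer_name: str
    customer_email: str

# Normalized: one conceptual entity, one natural way to reach it.
@dataclass
class Customer:
    customer_id: int
    name: str
    email: str

@dataclass
class Order:
    order_id: int
    customer_id: int  # a reference, not a copy

customers = {1: Customer(1, "Ada", "ada@old.example")}
orders = [Order(100, 1), Order(101, 1)]

# An email change touches exactly one record, and every order
# that references the customer sees it immediately:
customers[1].email = "ada@new.example"
assert all(customers[o.customer_id].email == "ada@new.example"
           for o in orders)
```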


> there are things that you can't fix through iteration

> Yes! I spent a year iterating...for precisely this reason

Seems like you're presenting a counter-point while simultaneously agreeing.


I think the takeaway is "a year." Businesses typically don't have a year to fix something like this, so saying something will take a year usually means, "It's impossible."


Totally, but this is a full greenfield project. It's a pretty unusual situation to be in.

It's definitely a first for me.


You deliberately omitted the word "prototype" from the quote. They didn't spend a year iterating in production. The prototype is the one you plan to throw away.


> Things like having an incorrect data model.

Absolutely. Take a look at git. It's basically just a Merkle tree, which is an ingenious data structure with a very simple set of entities and invariants. Almost every feature of git can be viewed as merely manipulating this structure. If you rolled your own, you might get a fast MVP, but chances are that your data model would be very inefficient or complex for upcoming features.

This goes both ways. Git doesn't track empty directories, for example, as a consequence of the data model (to my understanding). Saying no when something doesn't fit the model is also an important aspect.

As such, the data model can't be decoupled from almost anything else, including the UI. It's well worth spending some serious time researching and planning that. Or, if you're still not confident, you can always prototype a quick-and-ugly POC which contains your main features – that usually helps inform and build intuition about the problem.
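A toy sketch of the content-addressed, Merkle-tree idea behind git's object store (deliberately simplified; real git prefixes a type header to each object and uses a richer tree/commit format):

```python
import hashlib

def store(objects, data: bytes) -> str:
    """Content-address an object: its ID is the hash of its bytes,
    so identical content always gets the same ID."""
    oid = hashlib.sha1(data).hexdigest()
    objects[oid] = data
    return oid

objects = {}

# A "blob" is just file contents.
blob = store(objects, b"hello world\n")

# A "tree" maps names to hashes of blobs/subtrees -- the Merkle part:
# the tree's own hash changes iff anything beneath it changes.
tree = store(objects, f"100644 blob {blob}\thello.txt".encode())

# A "commit" points at a tree (parent commits omitted here).
commit = store(objects, f"tree {tree}\n\nfirst commit".encode())

# Any change ripples up: new blob -> new tree -> new commit hash.
blob2 = store(objects, b"hello git\n")
tree2 = store(objects, f"100644 blob {blob2}\thello.txt".encode())
assert tree2 != tree
```

Note that a tree's identity is derived entirely from its entries, and a directory with no files contributes no entries, which is roughly why the model has nothing to record for empty directories.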


Anything can be fixed with iteration. Some just take more iterations than others. We have the benefit of working with text files instead of wood, granite, or the human body.


Right, but within the constraints of a business, "more iterations" may (and probably does) mean "impossible." Second, the iterations to fix the problem block all your iterations to add new functionality, because the data model is foundational for everything else.


"Make it work, then make it beautiful, then if you really, really have to, make it fast. 90 percent of the time, if you make it beautiful, it will already be fast. So really, just make it beautiful!" – Joe Armstrong

As hard as it is to resist, I would argue that if you have other modules to get working, move on and leave making the current module beautiful for later.

That said, it's really hard to leave working-but-ugly code alone.

I find a lot of freedom in coding for myself. In other words, code that no one else is ever going to see. This goes a long way toward silencing my inner critic. If I ever want to open source it, I can make it perfect at that time. This is why I don't keep my code on GitHub. When I write code for myself I can express myself the way I want to, which is not necessarily the "standard way" that someone later might call a "code smell".


Code that serves as a highway I try to make as clean as possible. Code that serves as a cul-de-sac just has to work.


Thanks for the comment Michael!

I 100% understand the "it's really hard to leave working but ugly code alone" sentiment, as I get the same feeling a lot of the time. That is one of the reasons I wrote this article.

I often have to fight myself to move on to do work on other things.


Other programmers can be harsh critics, and it's important not to internalize that criticism. This isn't a knock on other programmers, but the criticism is often rooted in their own perfectionism!


Those "other programmers" who are "harsh critics" are often ourselves. How many times have you (any programmer) looked at your own old code and thought, "what the hell was I thinking?"

I can't count the times I've looked at ugly code, fired off `git blame`, and found out it was me.


My experience has often been feeling that something wasn't right and trying to fix it at a low level: variable or function names, length of lines and expressions, organization within files, things like that. Often I would clean things up superficially, but the feeling that something was wrong would remain.

Sometimes I come back after years, having learned how to organize certain kinds of programs better, and can instantly see that the problem was much deeper than that. I now think that, as a rule of thumb, if it is hard to make a piece of code tidy at a superficial level, there is a deeper underlying issue. If I manage to find a good architecture, I try to keep the bad code as minimal as possible instead of investing more time in a dead end.


> Treat digital products like ever-changing drafts

Sorry, but no. As a user, this is the bane of “modern” software.


I get where you are coming from. I don't think anybody wants to use bug-riddled software. I also think some companies take it too far, releasing software to the general public (not alpha/beta) that isn't refined enough. I suppose corporate pressure to release things on arbitrary deadlines doesn't help either.

IMO the beautiful thing about software is that we can push updates and improvements pretty much instantaneously, whether that is adding features to a working product or fixing bugs that slipped through during development.


It’s not primarily about bugs, it’s about everything constantly changing, from UI, look&feel to how features work. This is a constant cognitive load to users, a perpetual need for them to readjust to the changing software. As a result, the software never feels finished, doesn’t feel solid, and often appears not to be well thought-out. This gets in the way of the “it just works” the user wants, in a major way.

The argument is not against enhancing and improving the software, it’s against delivering a “draft” (as you say) to end users, it’s against constantly changing the design and working of features. Delivering a “draft” (or prototype) is only acceptable for beta users, or in an initial design phase involving the future users in the design process. It is not acceptable for released software intended for productive use.

Users want stability. They want the software to get out of their way as much as possible, to be reliable and unsurprising, to remain working in the familiar way.


What if the requirements change?


That has barely anything to do with treating delivered software as unfinished drafts.

If requirements change, then this has to be carefully coordinated with the affected users, in particular if they aren’t the ones defining the requirements, or if the user base is not uniform and the requirements may only change for a portion of them. In any case, requirement changes affecting how users operate existing functionality shouldn’t be a frequent occurrence.


> IMO the beautiful thing about software is that we can push updates and improvements pretty much instantaneously.

It is, but that directly results in garbage software being released, because "we can fix problems later." It also means you now get to balance perfecting your software against providing support for the resulting pile of garbage. Which I guess is the reason I have to call support every third time I'm trying to use an EV charging station because their software shit the bed again. </rant>


As long as they are honest about the current state of it. Often, I'd rather have something that I can struggle along with than nothing at all.


To me, it's really context-dependent and when working on a solution, you may need to take into account the business objectives and make the right trade-off in terms of time spent and "quality" of the design.

Architecture-level designs are important to get right as they can constrain your future self's ability to scale or adjust how the product works. However, it's also possible to spend way too long making a very open-ended architecture whose features you never actually use. (It may turn out you "build it to throw away" if you learn even more about the space you're working in as you go along.)

If you're working on a product and don't even know if it's something customers would use or want, I would say erring on the side of velocity is more important than necessarily getting everything "right" in the design or even making the code beautiful. The thing that will most likely "go wrong" for you is that you built a product nobody wanted, not that the code wasn't perfect enough. Better to get it in front of people sooner so you know if it's worth pursuing. Obviously, making that nuanced trade-off between "high quality" and "good enough" depends on your skill level and on being able to recognize where things could blow up in your implementation if you do get it wrong.

Writing code is not done in a vacuum. We don't get to just write the most beautiful, optimal things with endless amounts of time. There are real business constraints and objectives we need to meet, and the trade-offs we make in terms of time spent should take that context into account. Unfortunately, most of us are so separated from the real business objectives of the company leaders at the top that it's difficult to make the appropriate trade-offs (or even to care one way or the other). It often simply turns into "My boss wants this done in 2 weeks, but I would like to do it right, and that will take at least a month."


Coding whatever first ugly solution you come across is horrible; taking days deciding whether to use pattern A or pattern B for something that probably will change in a few months is horrible too.

Find the balance between thinking of a better way and the time it takes to code it.

Personally, I've found codebases that could have been simplified enormously just by using a different library, one I found with only a few minutes of searching before touching the code. You and future developers will probably be grateful.


In my time as a developer I have never dealt with this problem. Usually it's the opposite: poorly written code, written as an MVP, that drags us down. Usually by some “10x’er” cowboy who has long since ridden off into the sunset.


The author is not wrong. But scaling up is when you pay the price for having expedited things earlier.



