
When I first heard the maxim that an intelligent person should be able to hold two opposing thoughts at the same time, I was naive enough to think it meant weighing their pros and cons. Over time I realized that it means balancing contradictory actions, and that the main purpose of experience is knowing when to apply each.

Concretely related to the topic, I've often found myself inlining short pieces of one-time code that made functions more explicit, while at other times I'll spend days just breaking up thousand line functions into simpler blocks just to be able to follow what's going on. In both cases I was creating inconsistencies that younger developers nitpick -- I know I did.

My goal in most cases now is to optimize code for the limits of the human mind (my own in low-effort mode), and I like to be able to treat rules as guidelines. The trouble is: how can you scale this to millions of developers, and what are those limits of the human mind when more and more AI-generated code is used?




I had exactly this conversation today, in an architectural discussion about an infrastructure extension. As our newest team member noted, we planned to follow the reference architecture of a system in some places, and chose not to follow it in other places.

And this led to a really good discussion pulling the reference architecture of this system apart and understanding what it optimizes for (resilience and fault tolerance), what it sacrifices (cost, number of systems to maintain) and what we need. And yes, following the reference architecture in one place and breaking it in another place makes sense.

And I think that understanding the different options, as well as the optimization goals that set them apart, allows you to make a more informed decision and to make a stronger argument for why it is a good one. In fact, understanding the optimization criteria someone cares about allows you to avoid losing them in topics they neither understand nor care about.

For example, our CEO will not understand the technical details of why the reference architecture is resilient, or why other choices are less resilient. And he would be annoyed about his time being wasted if you tried to explain. But he is currently very aware of customer impacts due to outages. And so we can offer a very good argument for investing money in one place for resilience, and for why we can save money in other places without risking a customer impact.

We sometimes follow rules, and in other situations, we might not.


Yes, and it is the engineering experience/skill of knowing when to follow the "rules" of the reference architecture, and when you're better off breaking them, that makes someone a senior engineer/manager/architect, whatever your company calls it.


Your newest team member sounds like someone worth holding onto.


> My goal in most cases now is to optimize code for the limits of the human mind (my own in low-effort mode), and I like to be able to treat rules as guidelines. The trouble is: how can you scale this to millions of developers, and what are those limits of the human mind when more and more AI-generated code is used?

I think the truth is that we just CAN'T scale that way with the current programming languages/models/paradigms. I can't PROVE that hypothesis, but it's not hard to find examples of big software projects with lots of protocols, conventions, failsafes, QA teams, etc, etc that are either still hugely difficult to contribute to (Linux kernel, web browsers, etc) or still have plenty of bugs (macOS is produced by the richest company on Earth and a few years ago the CALCULATOR app had a bug that made it give the wrong answers...).

I feel like our programming tools are pretty good for programming in the small, but I suspect we're still waiting for a breakthrough for being able to actually make complex software reliably. (And, no, I don't just mean yet another "framework" or another language that's just C with a fancier type system or novel memory management)

Just my navel gazing for the morning.


I think the only way this gets better is with software development tools that make it impossible to create invalid states.

In the physical world, when we build something complex like a car engine, a microprocessor, or a bookcase, the laws of physics guide us and help prevent invalid states. Not all of them -- an upside-down bookcase still works -- but a lot of them.

Of course, part of the problem is that when we build the software equivalent of an upside down bookcase, we 'patch' it by creating trim and shims to make it look better and more structurally sound instead of tossing it and making another one the right way.

But mostly, we write software in a way that allows for a ton of incorrect states. As a trivial example, expressing a person's age as an 'int', allowing for negative numbers. As a more complicated example, allowing for setting a coupon's redemption date when it has not yet been clipped.
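
To make that concrete, here is a minimal C++ sketch of "make the invalid state hard to represent" (Age, Coupon, and ClippedCoupon are made-up names, purely illustrative):

    #include <chrono>
    #include <optional>
    #include <stdexcept>

    using Clock = std::chrono::system_clock;

    // An Age can never be negative, because the only way to construct one rejects it.
    class Age {
    public:
        explicit Age(int years) : years_(years) {
            if (years < 0) throw std::invalid_argument("age cannot be negative");
        }
        int years() const { return years_; }
    private:
        int years_;
    };

    // A redemption date can only exist on a coupon that has been clipped,
    // because only ClippedCoupon has a redeem() method at all.
    class ClippedCoupon {
    public:
        explicit ClippedCoupon(Clock::time_point clipped_at) : clipped_at_(clipped_at) {}
        void redeem(Clock::time_point at) { redeemed_at_ = at; }
    private:
        Clock::time_point clipped_at_;
        std::optional<Clock::time_point> redeemed_at_;
    };

    class Coupon {
    public:
        ClippedCoupon clip(Clock::time_point at) const { return ClippedCoupon(at); }
    };

The coupon half is the stronger version of the idea: the redemption date isn't checked after the fact, it simply has nowhere to live until the coupon has been clipped.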


John Backus's Turing Award lecture meditated on this idea, and concluded that the best way to do this at scale is to simply minimize the creation of states in the first place, and be careful and thoughtful about where and how we create the states that can't be avoided.

I would argue that that's actually a better guide to how we manage complexity in the physical world. Mechanical engineers generally like to minimize the number of moving parts in a system. When they can't avoid moving parts, they tend to fixate on them, and put a lot of effort into creating linkages and failsafes to try to prevent them from interacting in catastrophic ways.

The software engineering way would be to create extra moving parts just because complicated things make us feel smart, and deal with potential adverse interactions among them by posting signs that say "Careful, now!" without clearly explaining what the reader is supposed to be careful of. 50 years later, people who try to stick to the (very sound!) principles that Backus proposed are still regularly dismissed as being hipsters and pedants.


I'd say that the extra moving parts are there in most cases not because someone wanted to "feel smart" (not that it doesn't happen), but to make the pre-existing moving parts do something that they weren't originally supposed to do, because nobody understands how those pre-existing parts work well enough to re-engineer them properly on the schedule that they are given. We are an industry that builds bridges out of matchsticks, duct tape, and glue, and many of our processes are basically about how to make the result of that "good enough".


To determine what states should be possible is the act of writing software.


I don't think we will ever get the breakthrough you are looking for. Things like design patterns and abstractions are our attempt at this. Eventually you need to trust that whoever wrote the other code you have to deal with is sane. This assumption is false (and it might be you who is insane, thinking they could/would make it work the way you think it does).

We will never get rid of the need for QA. Automated tests are great, I believe in them (note that I didn't say unit tests or integration tests). Formal proofs appear great (I have never figured out how to prove my code), but as Knuth said, "Beware of bugs in the above code; I have only proved it correct, not tried it". There are many ways code can meet the spec and yet be wrong, because in the real world you rarely understand the problem well enough to write a correct spec in the first place. QA should understand the problem well enough to say "this isn't what I expected to happen."


I suppose that depends on the language and the elegance of your programming paradigm. This is where primitive simplicity becomes important, because when your foundation is composed of very few things that are not dependent upon each other you can scale almost indefinitely in every direction.

Imagine you are limited to only a few ingredients in programming: statements, expressions, functions, objects, arrays, and operators that are not overloaded. That list does not contain classes, inheritance, declarative helpers, or a bunch of other things. With a list of ingredients so small, no internal structure or paradigm is imposed on you, so you are free to make any design decisions that you want. Those creative decisions about the organization of things are how you dictate the scale of it all.

Most people, though, cannot operate like that. They claim to want the freedom of infinite scale, but they just need a little help. The more help is supplied by the language, framework, whatever, the less freedom you have to make your own decisions. Eventually there is so much help that all you do as a programmer is contend with that helpful goodness, without any chance to scale things in any direction.


> protocols, conventions, failsafes, QA teams, etc, etc that are either still hugely difficult to contribute to (Linux kernel, web browsers, etc)

To be fair here, I don't think it's reasonable to expect that once you have "software development skills" it automatically gives you the ability to fix any code out there. The Linux kernel and web browsers are not hard to contribute to because of conventions; they're hard because most of that code requires a lot of outside knowledge of things like hardware or the HTML spec.

The actual submitting part isn't the easiest, but it's well documented if you go looking; I'm pretty sure most people could handle it if they really had a fix they wanted to submit.


There are multiple reasons that contributing to various projects may be difficult. But, I was replying to a specific comment about writing code in a way that is easy to understand, and the comment author's acknowledgement that this idea/practice is hard to scale to a large number of developers (presumably because everyone's skills are different and because we each have different ideas about what is "clear", etc).

So, my comment was specifically about code. Yes, developing a kernel driver requires knowledge of the hardware and its quirks. But, if we're just talking about the code, why shouldn't a competent C developer be able to read the code for an existing hardware driver and come away understanding the hardware?

And what about the parts that are NOT related to fiddly hardware? For example, look at all of the recent drama with the Linux filesystem maintainer(s) and interfacing with Rust code. Forget the actual human drama aspect, but just think about the technical code aspect: The Rust devs can't even figure out what the C code's semantics are, and the lead filesystem guy made some embarrassing outbursts saying that he wasn't going to help them by explaining what the actual interface contracts are. It's probably because he doesn't even know what his own section of the kernel does in the kind of detail that they're asking for... That last part is my own speculation, but these Rust guys are also competent at working with C code and they can't figure out what assumptions are baked into the C APIs.

Web browser code has less to do with nitty-gritty hardware. Yet, even a very competent C++ dev is going to have a ton of trouble figuring out the Chromium code base. It's just too hard to keep trying to use our current tools for these giant, complex software projects. No amount of convention or linting or writing your classes and functions to be "easy to understand" is going to really matter in the big picture. Naming variables is hard and important to do well, but at the scale of these projects, individual variable names simply don't matter. It's hard to even figure out what code is being executed in a given context/operation.


> Yet, even a very competent C++ dev is going to have a ton of trouble figuring out the Chromium code base.

I don't think this is true, or at least it wasn't circa 2018 when I was writing C++ professionally and semi-competently. I sometimes had to read, understand and change parts of the Chromium code base since I was working on a component which integrated CEF. Over time I began to think of Chromium as a good reference for how to maintain a well-organized C++ code base. It's remarkably plain and understandable, greppable even. Eventually I was able to contribute a patch or two back to CEF.

The hardest thing by far with respect to making those contributions wasn't understanding the C++, it was understanding how to work the build system for development tasks.


Also agree that the example code base is not the best example to use.

The Chromium code base is a joy to read and I would routinely spend hours just reading it to understand deeper topics relating to the JS runtime.

Compare that to my company's much smaller code base, where it would take hours just to understand the simplest things because it was written so terribly.


That's true, and fair point for the example not being the best one. It was several years ago that I was poking at the Chromium code base to investigate something. I don't honestly remember much about the code itself, but I do remember struggling with the build system like you said. And that's probably why I just remember the whole endeavor as being difficult. Though, the build system being so complicated is not totally irrelevant to my point... Understanding how to actually build and use the code has some overlap with the idea of understanding the code or project as a whole.


I guess I just don't really get your point then; it's not like the Linux kernel or Chromium or Firefox are giant buggy messes that don't work at all. They certainly have bugs, but by and large they work very well with minimal issues for most people. I also think their codebases are pretty approachable; IMO a competent C or C++ developer can definitely read the code from either one with a little effort. It's not the easiest thing, but it's definitely not impossible - most people just don't ever try.

My point was that making meaningful contributions, such as bug fixes, requires understanding how the code is _supposed_ to function vs. how it actually functions; that's the hard part. In the majority of cases that's simply not something the code can tell you: there's no replacement for comparing the code to a datasheet or reading the HTML spec to understand how the rendering engine is supposed to work, and those things take time to learn. For the simpler parts, people do actively contribute without tons of previous experience (or because they already have experience with a library, etc.).


> My point was that making meaningful contributions, such as bug fixes, requires understanding how the code is _supposed_ to function vs. how it actually functions; that's the hard part. In the majority of cases that's simply not something the code can tell you [...]

That's kind of my point, though. I'm trying to zoom out and "think outside the box" for a minute. It's hard to compose smaller pieces into larger systems if the smaller pieces have behavior that's not very well defined. And our programming languages and tools don't always make it easy for the author of a piece of code to understand that they introduced some unintended behavior.

To your first point: I'm not shitting on Chromium or Firefox or any other software projects, but they're honestly ALL "buggy messes" in a sense. I'm a middling software dev and the software I write for my day job is definitely more buggy, overall, than these projects. So, I'm not saying that other developers are stupid (quite the opposite!). But, the fact that there are plenty of bugs at any given point in any of these projects is saying something important, IMO. If I use our current programming tools to write a Base64 encode/decode library, I can do a pretty good job and there's a good chance that it'll have zero bugs in a fairly short amount of time. But, using the same tools, there's absolutely no hope that I (we, you, whoever) could write a web browser that doesn't have any bugs. That's actually a problem! We've come to accept it because that's all we've got today, but my point is that this isn't actually an ideal place to settle.

I don't know what the answer is, but I think a lot of people don't even seem to realize there's a problem. My claim is that there is a problem and that our current paradigms and tools simply don't scale well. I'm not creative enough to be the one who has the eureka moment that will bring us to the next stage of our evolution, but I suspect that it's what we'll need to actually be able to achieve complex software that actually works as we intend it to.


> I feel like our programming tools are pretty good for programming in the small, but I suspect we're still waiting for a breakthrough for being able to actually make complex software reliably. (And, no, I don't just mean yet another "framework" or another language that's just C with a fancier type system or novel memory management)

Readability is a human optimization, for yourself or for other people's posterity: it's about getting code comprehension into the reader's mind. We need a new way to visualize/comprehend code that doesn't involve heavy reading and the reader's personal capability for syntax parsing/comprehension.

This is something we will likely never be able to get right with our current man-machine interfaces: keyboard, mouse/touch, video and audio.

Just a thought. As always I reserve the right to be wrong.


Reading is more than enough. What’s often lacking is the why. I can understand the code and what it’s doing, but I may not understand the problem (and sub-problems) it’s solving. When you can find explanations for that (links to PR discussions, archives of mail threads, and forum posts), it’s great. But some don’t bother, or it’s buried somewhere in chat logs.


The calculator app on the latest macOS (Sequoia) has a bug today - if you enter FF_16 AND FF_16 in programmer mode and press =, it'll display the correct result - FF_16 - but the history view displays 0_16 AND FF_16 for some reason.


> macOS is produced by the richest company on Earth and a few years ago the CALCULATOR app had a bug that made it give the wrong answers...

This is stated as if surprising, presumably because we think of a calculator app as a simple thing, but it probably shouldn't be that surprising--surely the calculator app isn't used that often, and so doesn't get much in-the-field testing. Maybe you've occasionally used the calculator in Spotlight, but have you ever opened the app? I don't think I have in 20 years.


I think this is backwards. A calculator app should be a simple thing. There's nothing undefined or novel about a calculator app. You can buy a much more capable physical calculator from Texas Instruments for less than $100 and I'm pretty sure the CPU in one of those is just an ant with some pen and paper.

You and I only think it's complex because we've become accustomed to everything being complex when it comes to writing software. That's my point. The mathematical operations are not hard (even the "fancy" ones like the trig functions). Formatting a number to be displayed is also not hard (again, those $100 calculators do it just fine). So, why is it so hard to write the world's 100,000th calculator app that the world's highest paid developers can't get it 100% perfect? There's something super wrong with our situation that it's even possible to have a race condition between the graphical effects and the actual math code that causes the calculator to display the wrong results.

If we weren't forced to build a skyscraper with Lego bricks, we might stand a better chance.


> That's my point. The mathematical operations are not hard (even the "fancy" ones like the trig functions). Formatting a number to be displayed is also not hard (again, those $100 calculators do it just fine).

Right, and that's my point: if all you want is a rock-solid computational platform, then you can use, for example, `bc`. (That's what I do.) I assume that Apple assumes that their users want something fancier than that, and it's there, with the fanciness of a shiny user interface on a less-exercised code path, that the bugs will inevitably come.


'bc' was first released literally half a century ago. If that is still the state of the art, I think it is absolutely fair to sound the alarm that something is VERY wrong with our modern software development practices. We shouldn't have to choose between "modern GUI" and "works".

(For what it's worth, Qalculate destroys both bc and the Mac calculator app in both command line and GUI categories, so making working software isn't entirely a lost art.)


Constantly, to keep the results of a calculation on screen. It's fallacious to assume that your own usage patterns are common. Hell, with as much evidence as you (none), I would venture that more people use the Calculator app than know that you can type calculations in Spotlight at all.


> It's fallacious to assume that your own usage patterns are common. Hell, with as much evidence as you (none) ….

I don't assume that my usage pattern is common. (My usage pattern is to drop to `bc`.) I assume that Calculator usage isn't common, but, recognizing that that is an assumption and that the only way to get evidence is to ask, I asked:

> Maybe you've occasionally used the calculator in Spotlight, but have you ever opened the app?

And you answered, so now together we have double the evidence that I alone did before. :-)


We've been there, done that. CRUD apps on mainframes and minis had incredibly powerful and productive languages and frameworks (Quick, Quiz, QTP: you're remembered and missed.) Problem is, they were TUI (terminal UI), isolated, and extremely focused; i.e. limited. They functioned, but would be like straight-jackets to modern users.

(Speaking of... has anyone done a 80x24 TUI client for HN? That would be interesting to play with.)


> has anyone done a 80x24 TUI client for HN

lynx still exists



Works a treat :)


I often bang on about “software is a new form of literacy”. And this, I feel, is a classic example - software is a form of literacy that not only can be executed by a CPU but at the same time is a way to transmit concepts from one human's head to another (just like writing).

And so asking “will AI generated code help” is like asking “will AI generated blog spam help”?

No - companies with GitHub Copilot are basically asking: how do I self-spam my codebase?

It’s great to get from zero to something in some new JS framework, but for your core competency it’s like outsourcing your thinking - it always comes a cropper.

(Book still being written)


> is a way to transmit concepts from one human's head to another (just like writing)

That's almost its primary purpose in my opinion... the CPU does not care about Ruby vs Python vs Rust, it's just executing some binary code instructions. The code is so that other people can change and extend what the system is doing over time and share that with others.


I get your point, but often the binary code instructions between those are vastly different.


The fact that we work with the high level languages rather than the binary code, despite all their inefficiencies, speaks to the human aspect being pretty important in the equation.


This entire conversation is about tradeoffs, but I would note that some of my favorite engineers that I've had the pleasure of knowing 1) are very fast and 2) know exactly what the binary code of the thing they are trying to do looks like.

There's a (3) where they'll quickly confirm their hypothesis using godbolt (or similar) if in doubt or they want to actually think in binary.

Fortunately for the programming community, many of us are able to create useful or interesting things without that kind of depth


I think a lot of the traditional teachings of "rhetoric" can apply to coding very naturally—there's often practically unlimited ways to communicate the same semantics precisely, but how you lay the code out and frame it can make the human struggle to read it straightforward to overcome (or near-impossible, if you look at obfuscation).


Computational thinking is more important than software per se.

Computational thinking is mathematical thinking.


What makes an apprentice successful is learning the rules of thumb and following them.

What makes a journeyman successful is sticking to the rules of thumb, unless directed by a master.

What makes a master successful is knowing why the rules of thumb exist, what their limits are, when to not follow them, and being able to make up new rules.


There’s also the effect that a certain code structure that’s clearer for a senior dev might be less clear for a junior dev and vice versa.


Or rather, senior devs have learned to care more for having clear code rather than (over-)applying principles like DRY, separation of concerns etc., while juniors haven't (yet)...


I know it's overused, but I do find myself saying YAGNI to my junior devs more and more often, as I find they go off on a quest for the perfect abstraction and spend days yak shaving as a result.


Yes! I work with many folks objectively way younger and smarter than me. The two bad habits I try to break them of are abstractions and what ifs.

They spend so much time chasing perfection that it negatively affects their output. Multiple times a day I find myself saying 'is that a realistic problem for our use case?'

I don't blame them, it's admirable. But I feel like we need to teach YAGNI. Anymore I feel like a saboteur, polluting our codebase with suboptimal solutions.

It's weird because my own career was different. I was a code spammer who learned to wrangle it into something more thoughtful. But I'm dealing with overly thoughtful folks I'm trying to get to spam more code out, so to speak.


I’ve had the opposite experience before. As a young developer, there were a number of times where I advocated for doing something “the right way” instead of “the good enough way”, was overruled by seniors, and then later I had to fix a bug by doing it “the right way” like I’d wanted to in the first place.

Doing it the right way from the start would have saved so much time.


This thread is a great illustration of the reality that there are no hard rules, judgement matters, and we don't always get things right.

I'm pretty long-in-the-tooth and feel like I've gone through 3 stages in my career:

1. Junior dev where everything was new, and did "the simplest thing that could possibly work" because I wasn't capable of anything else (I was barely capable of the simple thing).

2. Mid-experience, where I'd learned the basics and thought I knew everything. This is probably where I wrote my worst code: over-abstracted, using every cool language/library feature I knew, justified on the basis of "yeah, but it's reusable and will solve lots of stuff in future even though I don't know what it is yet".

3. Older and hopefully a bit wiser. A visceral rejection of speculative reuse as a justification for solving anything beyond the current problem. Much more focus on really understanding the underlying problem that actually needs solving: less interest in the latest and greatest technology to do that with, and a much larger appreciation of "boring technology" (aka stuff that's proven and reliable).

The focus on really understanding the problem tends to create more stable abstractions which do get reused. But that's emergent, not speculative ahead-of-time. There are judgements all the way through that: sometimes deciding to invest in more foundational code, but by default sticking to YAGNI. Most of all is seeing my value not as wielding techno armageddon, but solving problems for users and customers.

I still have a deep fascination with exploring and understanding new tech developments and techniques. I just have a much higher bar to adopting them for production use.


We all go through that cycle. I think the key is to get yourself through that "complex = good" phase as quickly as possible so you do the least damage and don't end up in charge of projects while you're in it. Get your "Second System" (as Brooks[1] put it) out of the way as quick as you can, and move on to the more focused, wise phase.

Don't let yourself fester in phase 2 and become (as Joel put it) an Architecture Astronaut[2].

1: https://en.wikipedia.org/wiki/Second-system_effect

2: https://www.joelonsoftware.com/2001/04/21/dont-let-architect...


Heh, I've read [2] before but another reading just now had this passage stand out:

> Another common thing Architecture Astronauts like to do is invent some new architecture and claim it solves something. Java, XML, Soap, XmlRpc, Hailstorm, .NET, Jini, oh lord I can’t keep up. And that’s just in the last 12 months!

> I’m not saying there’s anything wrong with these architectures… by no means. They are quite good architectures. What bugs me is the stupendous amount of millennial hype that surrounds them. Remember the Microsoft Dot Net white paper?

Nearly word-for-word the same thing could be said about JS frameworks less than 10 years ago.


Both React and Vue are more than 10 years old at this point. Both are older than jQuery was when they were released, and both have a better backward-compatibility story. The only two real competitors aren't that far behind. It's about time for this crappy frontend meme to die.

Even SOAP didn't really live that long before it started getting abandoned en masse for REST.

As someone who was there in the "last 12 months" Joel mentions, what happened in enterprise is like a different planet altogether. Some of this technology had a completely different level of complexity that to this day I am not able to grasp, and the hype was totally unwarranted, unlike actual useful tech like React and Vue (or, out of that list, Java and .NET).


> Some of this technology has a completely different level of complexity that to this day I am not able to grasp

Enterprise JavaBeans mentioned?


That's another great example!


> The focus on really understanding the problem tends to create more stable abstractions which do get reused. But that's emergent, not speculative ahead-of-time.

I think this takes a kind of humility you can't teach. At least it did for me. To learn this lesson I had to experience in reality what it's actually like to work on software where I'd piled up a bunch of clever ideas and "general solutions". After doing this enough times I realized that there are very few general solutions to real problems, and likely I'm not smart enough to game them out ahead of time, so better to focus on things I can actually control.


> Most of all is seeing my value not as wielding techno armageddon, but solving problems for users and customers

Also later in my career, I now know: change begets change.

That big piece of new code that “fixes everything” will have bugs that will only be discovered by users, and stability is achieved over time through small, targeted fixes.


> The focus on really understanding the problem tends to create more stable abstractions which do get reused. But that's emergent, not speculative ahead-of-time.

Thank you for putting so eloquently my own fumbling thoughts. Perfect explanation.


Here is an unwanted senior tip: in many consulting projects, without “the good enough way” first, there isn't anything left to do “the right way” later on.


Why inflict that thinking on environments that aren’t consulting projects if you don’t have to? That kind of thinking is a big contributor to the lack of trust in consultants to do good work that is in the client’s best interests rather than the consultants’. We don’t need employers to start seeing actual employees in the same way too.


The important bit is figuring out if those times where "the right way" would have helped outweigh the time saved by defaulting to "good enough".

There are always exceptions, but there are typically order-of-magnitude differences between globally doing "the right thing" vs. "good enough" and going back to fix the few cases where "good enough" wasn't actually good enough.


Only long experience can help you figure this out. All projects should have at least 20% of their developers be people who have been there for more than 10 years, so they have the background context to figure out what you will really need. You then need at least 30% of your developers to be intended long-term employees who have less than 10 years. In turn that means never more than 50% of your project should be short-term contractors. Nothing wrong with short-term contractors - they can often write code faster than the long-term employees (who end up spending a lot more time in meetings) - but their lack of context means that they can't make those decisions correctly and so need to ask (in turn slowing down the long-term employees even more).

If you are on a true green-field project - your organization has never done this before - good luck. Do the best you can, but be aware that you will regret a lot. Even if you have those long-term employees, you will do things you regret - just not as much.


I don’t like working in teams where some people have been there for much longer than everyone else.

It’s very difficult to get opportunities for growth. Most of the challenging work is given to the seniors, because it needs to be done as fast as possible, and it’s faster in the short term for them to do it than it would be for you to do it with their help.

It’s very difficult for anyone else to build credibility with stakeholders. The stakeholders always want a second opinion from the veterans, and don’t trust you to have already sought that opinion before proceeding, if you thought it was necessary to do so (no matter how many times you demonstrate that you do this). Even if the senior agrees with you, the stakeholder’s perception isn’t that you are competent, it’s that you were able to come to the right conclusion only because the senior has helped you.


> then later I had to fix a bug

How much later? Is it possible that by delivering sooner your team was able to gain insight and/or provide value sooner? That matters!


In many cases, we didn’t deliver sooner than we could have, because my solution had roughly equivalent implementation costs to the solution that was chosen instead. In some cases the bug was discovered before we’d even delivered the feature to the customers at all.


Ah, but that’s assuming the ‘right way’ path went perfectly and didn’t over-engineer anything. In reality, the ‘right way’ path being advocated for, statistically will also waste a lot of time, and over-engineering waste can and does grow exponentially, while under-engineering frequently only wastes linear and/or small amounts of time, until the problem is better understood.

Having witnessed first-hand over-engineering waste millions of dollars and years of time, on more than one occasion, by people advocating for the ‘right way’, I think tallying the time wasted upgrading an under-engineered solution is highly error prone, and that we need to assume that some percentage of time we’ll need to redo things the right way, and that it’s not actually a waste of time, but a cost that needs to be paid in search of whether the “right way” solution is actually called for, since it’s often not. The waste might be the lesser waste compared to something much worse, and it’s not generally possible to do the exact right amount of engineering from the start.

Someone here on HN clued me in to the counter-acronym to DRY, which is WET: write everything twice (or thrice), so the 2nd or 3rd time will be “right”. The first time isn’t waste, it’s necessary learning. This was also famously advocated by Fred Brooks: “Plan to Throw One Away” https://course.ccs.neu.edu/cs5500f14/Notes/Prototyping1/plan...


> In reality, the ‘right way’ path being advocated for, statistically will also waste a lot of time, and over-engineering waste can and does grow exponentially, while under-engineering frequently only wastes linear and/or small amounts of time, until the problem is better understood.

The “right way” examples I’m thinking of weren’t over-engineering some abstraction that probably wasn’t needed. Picture replacing a long procedural implementation, filled with many separate deprecated methods, with a newer method that already existed and already had test coverage proving it met all of the requirements, rather than cramming another few lines into the middle of the old implementation that had no tests. After all, +5 -2 without any test coverage is obviously better than +1 -200 with full test coverage, because 3 is much smaller than 199.


You make a strong case, and you were probably right. It’s always hard to know in a discussion where we don’t have the time and space to share all the details. There’s a pretty big difference between implementing a right way from scratch and using an existing right way that already has test coverage, so that’s an important detail, thank you for the context.

Were there reasons the senior devs objected that you haven’t shared? I have to assume the senior devs had a specific reason or two in each case that wasn’t obviously wrong or idiotic, because it’s quite common for juniors to feel strongly about something in the code without always being able to see the larger team context, or sometimes to discount or disbelieve the objections. I was there too and have similar stories to you, and nowadays sometimes I manage junior devs who think I’m causing them to waste time.

I’m just saying in general it’s healthy to assume and expect imperfect use of time no matter what, and to assume, even when you feel strongly, that the level of abstraction you’re using probably isn’t right. By the Brooks adage, the way your story went down is how some people plan for it to work up front, and if you’d expected to do it twice, then it wouldn’t seem as wasteful, right?


Everything in moderation, even moderation.


This isn't meant to be taken too literally or objectively, but I view YAGNI as almost a meta principle with respect to the other popular ones. It's like an admission that you won't always get them right, so in the words of Bukowski, "don't try".


Agreed. I’ve been trying to dial in a rule of thumb:

If you aren’t using the abstraction on 3 cases when you build it, it’s too early.

Even two turns out to be a higher bar than I expected.


Your documentation will tell you when you need an abstraction. Where there is something relevant to document, there is a relevant abstraction. If it's not worth documenting, it is not worth abstracting. Of course, the hard part is determining what is actually relevant to document.

The good news is that programmers generally hate writing documentation and will avoid it to the greatest extent possible, so if one is able to overcome that friction to start writing documentation, it is probably worthwhile.

Thus we can sum the rule of thumb up to: If you have already started writing documentation for something, you are ready for an abstraction in your code.


It's more case by case for me. A magic number should get a named constant on its first use. That's an abstraction.
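
A trivial sketch of that (kSecondsPerDay and cacheExpiry are made-up names):

    #include <cstdint>

    // Named on first use; without it, the 86400 below would be a magic number.
    constexpr std::int64_t kSecondsPerDay = 60 * 60 * 24;  // 86400

    std::int64_t cacheExpiry(std::int64_t now_in_seconds) {
        return now_in_seconds + kSecondsPerDay;
    }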


C++ programmers decided against NULL, and for well over a decade, recommended using a plain 0. It was only recently that they came up with a new name: nullptr. Sigh.


That had to do with the way NULL was defined, and the implications of that. The implication carried over from C was that NULL would always be a null pointer as opposed to 0, but in practice the standard defined it simply as 0 - because C-style (void*)0 wasn't compatible with all pointer types anymore - so stuff like:

    void foo(void*);
    void foo(int);

    foo(NULL);
would resolve to foo(int), which is very much contrary to expectations for a null pointer; and worse yet, the wrong call happens silently. With foo(0) that behavior is clearer, so that was the justification to prefer it.

On the other hand, if you accept the fact that NULL is really just an alias for 0 and not specifically a null pointer, then it has no semantic meaning as a named constant (you're literally just spelling the numeric value with words instead of digits!), and then it's about as useful as #define ONE 1

And at the same time, that was the only definition of NULL that was backwards compatible with C, so they couldn't just redefine it. It had to be a new thing like nullptr.

It is very unfortunate that nullptr didn't ship in C++98, but then again that was hardly the biggest wart in the language at the time...
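
As a quick follow-up sketch, the same overload set with nullptr resolves the way people always expected NULL to:

    void foo(void*);
    void foo(int);

    foo(nullptr);  // resolves to foo(void*): nullptr_t converts to any pointer type, never to int
    foo(0);        // still resolves to foo(int)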


When you thought you made "smart" solutions and many years later you have to go in and fix bugs in them - that's usually when you learn this.


There is a human side to this which I am going through right now. The first full framework I made is proving to be developer-unfriendly in the long run: I put more emphasis on performance than readability (performance was the KPI we were trying to improve at the time). Now I am working with people who are new to the codebase, and I observed they were hesitant to criticize it in front of me. I had to actively start saying "let's remove <framework name>, it's outdated and bad". Eventually I found it liberating; it also helped me detach my self-worth from my work, something I struggle with day to day.


My 'principle' for DRY is: twice is fine, thrice is worth an abstraction (if you think it has a small to moderate chance of happening again). I used to apply it no matter what, so I guess it's progress...


I really dislike how this principle ends up being used in practice.

A good abstraction that makes actual sense is perfectly good even when it's used only once.

On the other hand, the idea of deduplicating code by creating an indirection is often not worth it for long-term maintenance, and is precisely the kind of thing that will cause maintenance headaches and anti-patterns.

For example: don't mix file system or low-level database access with your business code; just create a proper abstraction. But deduplicating very small fragments at the same abstraction level can have detrimental effects in the long run.


I think the main problem with these abstractions is that they are merely indirections in most cases, limiting their usefulness to several use cases (sometimes to things that are never going to be needed).

To quote Dijkstra: "The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise."


I can't remember where I picked it up from, but nowadays I try to be mindful of when things are "accidentally" repeated and when they are "necessarily" repeated. Abstractions that encapsulate the latter tend to be a good idea regardless of how many times you've repeated a piece of code in practice.


Exactly, but distinguishing the two requires an excellent understanding of the problem space, and can't at all be figured out in the solution space (i.e., by only looking at the code). But less experienced people only look at the code. In theory, a thousand repetitions would be fine if each one encodes an independent bit of information in the problem space.
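
A toy sketch of the distinction (the business rules here are invented): the first two functions are textually identical today, but they encode independent bits of the problem space, so deduplicating them would couple rules that have no reason to change together.

    #include <cmath>

    // Accidental repetition: one rate comes from tax law, the other from a marketing
    // decision. They merely happen to be the same number right now.
    double salesTax(double amount)        { return amount * 0.10; }
    double loyaltyDiscount(double amount) { return amount * 0.10; }

    // Necessary repetition: rounding money is one rule that genuinely recurs everywhere,
    // so giving it a single home is a real abstraction rather than an indirection.
    double roundToCents(double amount)    { return std::round(amount * 100.0) / 100.0; }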


The overarching criterion really is how it affects locality of behaviour: repeating myself and adding an indirection are both bad; the trick is to pick the one that will affect locality of behaviour the least.

https://loup-vaillant.fr/articles/source-of-readability#avoi...


WET, write everything twice


"Better a little copying than a little dependency" - Russ Cox


Do you use a copy-paste detector to find the third copy?


twice is fine... except some senior devs apply it to the entire file (today I found the second entire file/class copied and pasted over to another place... the newer copy is not used either)


As someone who recently had to go over a large chunk of code written by myself some 10-15 years ago, I strongly agree with this sentiment. Despite being a mature programmer already at that time, I found a lot of magic and gotchas that were supposed to be - and felt at the time - super clever, but now, without the context or a prior version to compare against, they are simply overcomplicated.


I find that it’s typically the other way around as things like DRY, SOLID and most things “clean code” are hopeless anti-patterns peddled by people like Uncle Bob who haven’t actually worked in software development since Fortran was the most popular language. Not that a lot of these things are bad as a principle. They come with a lot of “okish” ideas, but if you follow them religiously you’re going to write really bad code.

I think the only principle in programming that can be followed at all times is YAGNI (you aren’t going to need it). I think every programming course, book, whatever should start by telling you to never, ever abstract things until you absolutely can’t avoid it. This includes DRY. It’s a billion times better to have similar code in multiple locations that are isolated in their purpose, so that down the line, two hundred developers later, you’re not sitting with code where you’ll need to “go to definition” fifteen times before you get to the code you actually need to find.

Of course the flip-side is that, sometimes, it’s ok to abstract or reuse code. But if you don’t have to, you should never ever do either. Which is exactly the opposite of what junior developers do, because juniors are taught all these “hopeless” OOP practices and they are taught to mindlessly follow them by the book. Then 10 years later (or like 50 years in the case of Uncle Bob) they realise that functional programming is just easier to maintain and more fun to work with because everything you need to know is happening right next to each other and not in some obscure service class deep in some ridiculous inheritance tree.


The problem with repeating code in multiple places is that when you find a bug in said code, it won't actually be fixed in all the places where it needs to be fixed. For larger projects especially, it is usually a worthwhile tradeoff versus having to peel off some extra abstraction layers when reading the code.

The problems usually start when people take this as an opportunity to go nuts on generalizing the abstraction right away - that is, instead of refactoring the common piece of code into a simple function, it becomes a generic class hierarchy to cover all conceivable future cases (but, somehow, rarely the actual future use case, should one arise in practice).

Most of this is just cargo cult thinking. OOP is a valid tool on the belt, and it is genuinely good at modelling certain things - but one needs to understand why it is useful there to know when to reach for it and when to leave it alone. That is rarely taught well (if at all), though, and even if it is, it can be hard to grok without hands-on experience.


We agree, but we’ve come to different conclusions, probably based on our experiences. Which is why I wanted to convey that I think you should do these things in moderation. I almost never do classes, and even more rarely inheritance, as an example. That doesn’t mean I wouldn’t make a “base class” containing things like “owned by, updated by, some time stamp” or whatever you would want added to every data object in some traditional system, and then inherit that. I would, and I might even make multiple “base classes” if it made sense.

What I won’t do, however, is abstract code until I have to. More than that, as soon as that shared code stops being shared, I’ll stop doing DRY. Not because DRY is necessarily bad, but because of the way people write software, which all too often leads to a dog that will tell you dogs can’t fly if you call fly() on it. Yes, I know that is ridiculous, but I’ve never seen a “clean” system that didn’t eventually end up like that. People like Uncle Bob will tell you that is because people misunderstood the principles, and they’d be correct. Maybe the principles are simply bad if so many people misunderstand them, though?


good devs*, not all senior devs have learned that, sadly. As a junior dev I've worked under the rule of senior devs who were over-applying arbitrary principles, and that wasn't fun. Some absolute nerds have a hard time understanding where their narrow expertise is meant to fit, and they usually don't get better with age.


I bumped into that issue, and it caused a lot of friction between me and 3 young developers I had to manage.

Ideas on how to overcome that?


Teaching.

I had this problem with an overzealous junior developer and the solution was showing some different perspectives. For example John Ousterhout's A Philosophy of Software Design.


I tried this but they just come back with retorts like "OK boomer" which tends to make the situation even worse.

How do you respond to that?


The sibling comment says "fire them". That sounds glib, but it's the correct solution here.

From what you've described, you have a coworker who is not open to learning and considering alternative solutions. They are not able to defend their approach, and are instead dismissive (and using an ageist joke to do it). This is toxic to a collaborative work environment.

I give some leeway to assholes who can justify their reasoning. Assholes who just want their way because it's their way aren't worth it and won't make your product better.


Or, perhaps better, just let that hang for a moment - long enough to become uncomfortable - and then say "Try again."

As others have said, if they can't or won't get that that's unacceptable behavior, fire them. (jerf is more patient than I am...)


This is a discriminatory statement and it should be taken seriously.


Fire them.


To be honest, at the point where they are being insulting I also agree firing them is a very viable alternative.

However, to answer the question more generally, I've had some success first acknowledging that I agree the situation is suboptimal, and giving some of the reasons. These reasons vary; we were strapped for time, we simply didn't know better yet, we had this and that specific problem to deal with, sometimes it's just straight up "yeah I inherited that code and would never have done that", honestly.

I then indicate my willingness to spend some time fixing the issues, but make it clear that there isn't going to be a Big Bang rewriting session - we're going to do it incrementally, with the system working the whole time, and they need to conceive of it that way. (Unless it's the rare situation where a rewrite really is needed.) This tends to limit the blast radius of any specific suggestion.

Also, as a senior engineer, I do not 100% prioritize "fixing every single problem in exactly the way I'd do it". I will selectively let certain types of bad code through so that the engineer can have experience of it. I may not let true architecture astronautics through, but as long as it is not entirely unreasonable I will let a bit more architecture than perhaps I would have used through. I think it's a common fallacy of code review to think that the purpose of code review is to get the code to be exactly as "I" would have written it, but that's not really it.

Many people, when they see this degree of flexibility, and that you are not riding to the defense of every coding decision made in the past, and are willing to take reasonable risks to upgrade things, will calm down and start working with you. (This is also one of the subtle reasons automated tests are super super important; it is far better for them to start their refactoring and have the automated tests explain the difficulties of the local landscape to them than a developer just blathering.)

There will be a set that do not. Ultimately, that's a time to admit the hire was a mistake and rectify it appropriately. I don't believe in the 10x developer, but not for the usual egalitarian reasons... for me the problem is I firmly, firmly believe in the existence of the net-negative developer, and when you have those the entire 10x question disappears. Net negative is not a permanent stamp, the developer has the opportunity to work their way out of it, and arguably, we all start there both as a new developer and whenever we start a new job/position, so let me soothe the egalitarian impulse by saying this is a description of someone at a point in time, not a permanent label to be applied to anyone. Nevertheless, someone who insists on massive changes, who deploys morale-sapping insults to get their way, whose ego is tied up in some specific stack that you're not using and basically insists either that we drop everything and rewrite now "or else", who one way or another refuses to leave "net negative" status... well, it's time to take them up on the "or else". I've exaggerated here to paint the picture clearly in prose, but, then again, of the hundreds of developers I've interacted with to some degree at some point, there's a couple that match every phrase I gave, so it's not like they don't exist at all either.


You mean they literally say "ok boomer"? If so they are not mature enough for the job. That phrase is equivalent to "fuck off" with some ageism slapped on top and is totally unacceptable for a workplace.


That's exactly what I try to do. I think it's an unpopular opinion though, because there are no strict rules that can be applied, unlike with pure ideologies. You have to go by feel and make continuous adjustments, and there's no way to know if you did the right thing or not, because not only do different human minds have different limits, but different challenges don't tax every human mind to the same proportional extent.

I get the impression that programmers don't like ambiguity in general, let alone in things they have to confront in real life.


> there are no strict rules that can be applied

The rules are there for a reason. The tricky part is making sure you’re applying them for that reason.


I don't know what your comment has to do with my comment.


My intro to programming was that I wanted to be a game developer in the 90s. Carmack and the others at Id were my literal heroes.

Back then, a lot of code optimizations were magic to me. I still just barely understand the famous inverse square root optimization in the Quake III Arena source code. But I wanted to be able to do what those guys were doing. I wanted to learn assembly, to be able to drop down to assembly, and to know where and when that would help and why.

And I wasn't alone. This is because these optimizations are not obvious. There is a "mystique" to them. Which makes it cool. So virtually ALL young, aspiring game programmers wanted to learn how to do this crazy stuff.
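
For reference, the famous bit looks roughly like this (paraphrased from memory, not quoted verbatim from the id source):

    #include <cstdint>

    // Fast approximate 1/sqrt(x): reinterpret the float's bits as an integer, use a
    // magic constant for a first guess, then refine with one Newton-Raphson step.
    // (The original used 'long', which was 32 bits on the platforms of the day.)
    float Q_rsqrt(float number) {
        float x2 = number * 0.5f;
        float y  = number;
        std::int32_t i = *(std::int32_t*)&y;  // bit-level reinterpretation of the float
        i = 0x5f3759df - (i >> 1);            // the "magic" initial estimate
        y = *(float*)&i;
        y = y * (1.5f - (x2 * y * y));        // one iteration of Newton's method
        return y;
    }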

What did the old timers tell us?

Stop. Don't. Learn how to write clean, readable, maintainable code FIRST and then learn how to profile your application in order to discover the major bottlenecks and then you can optimize appropriately in order of greatest impact descending.

If writing the easiest code to maintain and understand also meant writing the most performant code, then the concept of code optimization wouldn't even exist. The two are mutually exclusive, except in specific cases where they're not - and then it's not even worth discussing because there is no conflict.

Carmack seems to acknowledge this in his email. He realizes that inlining functions needs to be done with careful judgment, and the rationale is both performance and bug mitigation. But if inlining were adopted as a matter of course, a policy of "always inline first", the result would quickly be an unmaintainable, impossible-to-comprehend mess that would swing so far in the other direction that bugs become more prominent, because you can't touch anything in isolation.

And that's the bane of software development: touch one thing and end up breaking a dozen other things that you didn't even think about because of interdependence.

So we've come up with design patterns and "best practices" that allow us to isolate our moving parts, but that has its own set of trade-offs which is what Carmack is discussing.

Being a 26 year veteran in the industry now (not making games btw), I think this is the type of topic that you need to be very experienced to be able to appreciate, let alone to be able to make the judgment calls to know when inlining is the better option and why.


That doesn't seem like holding two opposing thoughts. Why is balancing contradictory actions to optimize an outcome different to weighing pros and cons?


What I meant to say was that when people encounter contradictory statements like "always inline one-time functions" and "break down functions into easy-to-understand blocks", they try to pick only a single rule, even if they consider the pros and cons of each rule.

After a while they come to consider both rules useful and move to a more granular case-by-case analysis. Some people get stuck at rule-based thinking, though, and they'll even accuse you of being inconsistent if you try to do case-by-case analysis.


You are probably reaching for Hegel's concept of dialectical reconciliation.


Not sure; didn't Hegel say there should be a synthesis step at some point? My view is that there should never be a synthesis when using these principles as tools, since the two conflicting principles need to remain in opposition.

So, more like Heraclitus's unity of opposites, maybe, if you really want to label it?


The synthesis would be the outcome, maybe? Writing code that doesn't follow either rule strictly:

> Concretely related to the topic, I've often found myself inlining short pieces of one-time code that made functions more explicit, while at other times I'll spend days just breaking up thousand line functions into simpler blocks just to be able to follow what's going on. In both cases I was creating inconsistencies that younger developers nitpick -- I know I did.


On a positive note, most AI-generated code will follow a style that is a sort of "average" of everything it's seen. It will have its own preferred way of laying out code that happens to look like how most people using that language (and sharing their code publicly online) use it.


> other times I'll spend days just breaking up thousand line functions into simpler blocks just to be able to follow what's going on

Absolutely, I'll break up a long block of code into several functions, even if there is nowhere else they will be called, just to make things easier to understand (and potentially easier to test). If a function or procedure does not fit on one screen, I will almost always break it up.

Obviously "one screen" is an approximation, not all screens/windows are the same size, but in practice for me this is about 20-30 lines.


My go-to heuristic for how to break up code: whiteboard your solution, or draw it up in Lucidchart, as if explaining it to another dev. If your methods don't match the whiteboard, refactor.


To a certain sort of person, conversation is a game of arriving at these antithesis statements:

   * Inlining code is the best form of breaking up code. 
   * Love is evil.
   * Rightwing populism is a return to leftwing politics. 
   * etc.

The purpose is to induce aporia (puzzlement), and hence make it possible to evaluate apparent contradictions. However, a lot of people resent feeling uncertain, and so, people who speak this way are often disliked.


To make an advance in a field, you must simultaneously believe in what’s currently known as well as distrust that the paradigm is all true.

This gives you the right mindset to focus on advancing the field in a significant way.

Believing in the paradigm too much will lead to only incremental results, and not believing enough will not provide enough footholds for you to work on a problem productively.


> My goal in most cases now is to optimize code for the limits of the human mind (my own in low-effort mode)

I think you would appreciate the philosophy of the Grug Brained Developer: https://grugbrain.dev


> I was creating inconsistencies that younger developers nitpick

Obligatory: “A foolish consistency is the hobgoblin of little minds"

Continued because I'd never read the full passage: "... adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do. He may as well concern himself with his shadow on the wall. Speak what you think now in hard words, and to-morrow speak what to-morrow thinks in hard words again, though it contradict every thing you said to-day. — 'Ah, so you shall be sure to be misunderstood.' — Is it so bad, then, to be misunderstood? Pythagoras was misunderstood, and Socrates, and Jesus, and Luther, and Copernicus, and Galileo, and Newton, and every pure and wise spirit that ever took flesh. To be great is to be misunderstood.” ― Ralph Waldo Emerson, Self-Reliance: An Excerpt from Collected Essays, First Series


> limits of the human mind when more and more AI-generated code will be used

We already have a technology which scales infinitely with the human mind: abstraction and composition of those abstractions into other abstractions.

Until now, we’ve focused on getting AI to produce correct code. Now that this is beginning to be successful, I think a necessary next step for it to be useful is to ensure it produces well-abstracted and clean code (such that it scales infinitely).


That’s undoubtedly a Zelda Fitzgerald quote (her husband plagiarized her shamelessly).

As a consequence of the Rule of Three, you are allowed to have a rule with one exception without having to rethink the rule: all X are Y, except for Z.

I sometimes call this the Rule of Two, because it deserves more eyeballs than being just a subtext of another rule.


Wait, isn't that just Doublethink from 1984? Holding two opposing thoughts is a sign that your mental model of the world is wrong and that it needs to be fixed. Where have you heard that maxim?


No, you've got it completely backwards. Reality has multiple facets (different statements, all of which can be true), and a mental model that insists on a singular judgement is reductionist, missing the forest for the trees. Light is a wave and a particle. People are capable of good and bad. The modern world is both amazing and unsustainable. Etc.

Holding multiple truths is a sign that you understand the problem. Insisting on a singular judgement is a sign that you're just parroting catchy phrases as a shortcut to thinking; the real world is rarely so cut and dried.


It's not referring to cognitive dissonance.


[flagged]


So this maxim can be used both for good and for bad. Extra points for this maxim.


A metamaxim?


Stretch goal: hold three


Nah. That's what the Monk is for.


A person of culture, I see.

Electric Monks were made for a reason.

Surprisingly pertinent to the current discussion.


apposite to the opposite


A safe work contribution plan for the year: Hold 1+ (stretch 3) opposing thoughts at a time.


"hold one opposing thought" could be a zen koan


Indoctrination is the exact opposite.


Sure, but the maxim can be used to inject this 'exact opposite', in perfect accordance with the maxim!


Maybe "indoctrination" was a poor choice of word here. The problem with this maxim is that it welcomes moral relativism.

This can be bad on the assumption that whoever is exposed to the maxim is not a proponent of "virtue ethics" (I use this as a catch-all term for various religious ethics doctrines; the underlying idea is that moral truths are given to people by a divine authority rather than discovered by studying human behavior, needs, and happiness). In this situation, the maxim is an invitation to embrace ideas that aren't contradictory to one's own, but that live "outside the system", and to put them on equal footing.

To make this more concrete, take the subject of child brides. Some religions have no problem with marrying girls of any age to men of any age. Now, the maxim suggests that no matter what your moral framework looks like, you should accept that under some circumstances it's OK to have child marriages. But this isn't a contradiction: there's no ethical theory not based on divine revelation that would accept such a thing. And that's why, by and large, Western society came to treat child marriage as a crime.

Contradictions are only possible when two parties agree on the premises that led to the contradictory conclusions, and, in principle, they should be resolvable by figuring out which party's reasoning was faulty. Resolving such contradictions is a productive way forward. But the kind of "disagreement" between religious ethics and "derived" ethics is one where the premises themselves differ. So there can be no way forward in an argument between the two, because the only way they can agree is if one side completely abandons its premises.

Essentially, you can think about it as if two teams wanted to compete in some sport. If both are playing soccer, then there's a meaning to winning / losing, keeping the score, being good or bad at the game. But, if one team plays soccer while another team is playing chess... it just doesn't make sense to pit them against each other.


> maxim suggests that no matter what your moral framework looks like, you should accept that under some circumstances it's OK to have child marriages

You seem to have either misread the maxim, or misunderstood it.

The maxim is not that an intelligent person -must- hold two contradictory thoughts in their head at once - rather, that they should be able to. Being "able to" do something does not mean one does it in all cases.

To say that the maxim suggests someone "should" accept that something bad is sometimes good is a plain misreading of the text. All it's saying is that people -can- do this, if they so choose.


In this context, it doesn't matter if they "must" or "should be able to". No, I didn't misunderstand the maxim. No, I didn't mean that it has to happen in all cases. You are reading something into what I wrote that I didn't.

The maxim is not used by religious people to its intended effect. Please read it again if you didn't see it the first time. The maxim is used as a challenge that can be rephrased as: "if you are as intelligent as you claim, then you should be able to accept both what you believe to be true and whatever nonsense I want you to believe to be true."


> The maxim is not used by religious people to its intended effect.

Your comment literally says "the maxim suggests".

If that wasn't what you were saying, then your comment is misphrased.

If that -was- what you were saying, then I reiterate that, no, the maxim does not suggest that. You (or whatever hypothetical person you're referring to) are the one suggesting it, not the maxim.

It doesn't matter how you rephrase it - "should be able to" is not the same as "must". "Able-bodied people should be able to jump off the top of a building." That's a perfectly valid and true statement - jumping off of things is within the physical capabilities of the able-bodied. But that statement, however true, does not suggest that one must jump off the top of a building to prove that one is able-bodied.

> No, I didn't mean that it has to happen in all cases.

If it doesn't have to happen in all cases, then an intelligent person can simply say "no, even though I am -able to- accept contradictory ideas, in this case I still reject child marriage in all contexts". Clearly you would agree that this is perfectly compatible with the maxim. So, in what way is the maxim being harmful here?

In reality, your comment has almost nothing to do with the maxim itself, and is mostly just about people using religion and rhetoric to manipulate others. Such people would use whatever tool they have available - with or without the existence of the maxim.


Doublethink!


As a tool, it's a wedge to break indoctrination and overcome bias. It leads to more pragmatic and less ideological thinking. The subject is compelled to contrast opposing views and consider the merits of each.

Any use by ideological groups twists the purpose of the phrase on its head. The quote encourages thinking and consideration. You'd have to turn off your brain for this to have the opposite effect.


> Any use by ideological groups twists the purpose of the phrase on its head. The quote encourages thinking and consideration. You'd have to turn off your brain for this to have the opposite effect.

Well, it would not be too surprising that it can be used to, for example, make people think that they can trust science and also believe in some almighty divine entity that science cannot explain.


Thoughts like this miss the purpose and significance of the maxim being discussed. Science doesn't disprove an "almighty, unexplainable divine entity" any more than an "almighty, unexplainable divine entity" could also provide science as a means to understand the nature of things.

Careful you don't fall into the trap of indoctrination. :)


You can trust science, but science doesn't cover all of reality.

My imaginary friend does, buy my magic book.


The US has a statutory rapist and someone who believes in active weather manipulation seated in Congress. It's easy to get the masses to turn off their brains.



