Jonathan Blow on Societal Collapse (2019) (gist.github.com)
104 points by beefman on Aug 26, 2022 | 146 comments



Controversial, I know (and I'm happy to hear counterarguments), but I think there's a much simpler way of accounting for the present state of software quality.

It amounts to 'tolerance'. Commercial software is optimized with respect to a set of business goals, and one dimension of the space that process operates on is software quality. Beyond a certain threshold, increasing quality has diminishing utility. (I.e. there's effectively a tolerance for quality; it can be imperfect and not change the operation of the machine—i.e. business—which it's a component of.)

The present state of software is precisely in the neighborhood of where that added utility begins rapidly diminishing.

(This assumes that the business in question is competent, which we can approximate by observing that they survive. Outside the possibility of that being incorrect, this implies that the level of software quality hovers right around where it's good enough that improving it more wouldn't significantly impact customer purchasing decisions. Of course this is not an ideal state of affairs—but how could you expect otherwise?—and interpreting it as a sign of societal collapse... seems a bit absurd to me.)


As soon as Blow and others start pontificating about how modern software is terrible, I get frustrated; my viewpoint differs from theirs so drastically that I don't even know where to begin.

I might start somewhere around here: I think the fact that modern software is "terrible" is actually a sign of the dramatic success, not failure, of programming.

The reasons that Blow et al. normally cite for why software is terrible usually amount to this: 20 and 30 years ago, engineers were able to pull off superhuman feats compared to the engineers of today. I accept this unquestioningly. They wrote software that ran faster and was more space-efficient than modern software. But running software that fit in kilobytes of RAM was never the goal of software engineering. The goal was providing value to your users.

Blow et al. seem to think that efficient software is the goal. I couldn't disagree more. The goal is always to produce things that users can actually use. And if that means doing hideously inefficient things, like running your app inside a chrome shell, so be it, if that means you delivered value faster.

And the amount of engineering effort per unit of value created has gone down so much today compared to 30 years ago.

In the past, engineers could squander days or weeks trying to make code more memory efficient, or finding the exact right sequence of ASM instructions to waste fewer CPU cycles. These are issues I don't even have to think about. Heck, I can do extremely stupid and inefficient things, like use a garbage collector, or Electron, and for the most part, users don't even care[1]. What an incredible success!

Essentially, I think what Blow is seeing when he cites the demise of modern software engineering is in fact that the bar to writing usable software has dropped immensely. I can understand why this is a frustrating thing for him, because it effectively means that his skillset is not as valuable as it used to be. But I just can't agree at all. How amazing that we can be so wasteful and yet still produce such great software, and furthermore what an incredible success that engineers like me don't need to even think about ASM and RAM efficiency.

[1] Yes, some users do care, and I think that a disproportionate majority of them hang out on Hacker News. :P


I too find his presentation pretentious and his claims suspect. He has released two games in his career so far, plus an as-yet-unreleased compiler that he's been toying with for years, and he calls modern software developers unproductive!

However I think efficiency still matters depending on the use case and to his credit I don’t think he says that everyone has to care about efficiency.

I think the point he makes is that we might be losing the ability to produce efficient code because we’re not teaching these techniques. And that is perhaps why we buy $3000 Facebook browsing machines that don’t feel any faster than computers 20 years ago. Often they are slower.

If we only teach folks how to script Unity and Unreal how are we supposed to get the next generation of engine developers to build the foundations of the technology?


> I think the point he makes is that we might be losing the ability to produce efficient code because we’re not teaching these techniques. And that is perhaps why we buy $3000 Facebook browsing machines that don’t feel any faster than computers 20 years ago. Often they are slower.

I think you are just paraphrasing his point, so don't take this as a shot in your direction, but anybody who remembers computers 20 years ago being faster either has some serious rose tinted glasses, or was not using computers 20 years ago. They were incredibly buggy and slow. Basic users crashed their Windows boxes often enough that "blue screen of death" was a common phrase (which is to say, we're advanced users here -- we can even get Linux to crash -- but normal computers are pretty good nowadays if you just use them like a normal person).

If they felt faster, that's because we were using text editors rather than office suites, because we didn't want to wait minutes for an office suite to start up (and then crash). You can still use a text editor, and vim at least is still pretty snappy.

Like surely we can all remember the first solid state drives we installed, right? The feeling of not waiting for programs to load was remarkable. Now we're just used to having programs start instantly.

Things are actually quite good, we're just used to it.

I mean, the internet is kind of a mixed bag. Javascript is annoying and everywhere. But on the other hand, remember when Java applets were a thing?


I find https://danluu.com/input-lag/ to be quite convincing and a direct contradiction to your point. He has some analyses of various other elements of computing, too. We certainly haven't gotten faster across the board and I'm still amazed at how slowly many of the programs I use today actually load.


I remember reading that piece, and found it to be interesting and accurate, but somewhat irrelevant. Most people just don't care about latency that much. If we go back to an ancestor high up in this thread, "low latency input and display" is not a goal of computing. "Enabling people to do things they find useful and/or have value" is the goal. And it just turns out that, for most tasks -- even most gaming tasks (or other things where you might say "low latency is important") -- low latency just is nowhere near as important as just being able to do these things at all.

Sure, we might sometimes notice the latency and find it vaguely annoying, but we put up with it because modern computing (with all the attendant "complexity" or whatever that makes low-latency hard) enables us to do things that computers of 20 or 30 or 40 years ago were just completely incapable of doing.

And for the people who really do care about low latency, they can usually build something custom that gets them where they want to be -- as Dan Luu points out in his piece.


I tried to follow his advice for a while but didn't ever notice a perceivable difference between, say, st and alacritty. I think the justification for why we care about latency there is a bit fuzzy -- the examples he provides are about stylus inputs, where the feedback loop is more direct and there's more physical experience. I can't tell you what you feel, of course. Currently I use kitty. Perhaps I'm blessed by unusual latency insensitivity...


There came the trend of leaving your computer on instead of powering it off just because starting up was so slow.

So much has improved that Blow’s arguments about how much better computers used to be have an air of Old Man Yelling at Clouds to me.

Booting into Workbench off a floppy is fun and all but my current machine boots up so fast now it’s ridiculous.


> but anybody who remembers computers 20 years ago being faster either has some serious rose tinted glasses, or was not using computers 20 years ago.

Or they're using something like the grade book in Canvas, a very popular LMS, and finding that it is in fact slower than GeoCalc on a C-64.


> I think you are just paraphrasing his point, so don't take this as a shot in your direction, but anybody who remembers computers 20 years ago being faster either has some serious rose tinted glasses, or was not using computers 20 years ago... You can still use a text editor and vim at least is still pretty snappy.

This is a single analysis from a single person, but if you look at empirical data for latency in terminal text input, it has gotten noticeably worse over time: https://danluu.com/input-lag/


That article is talking about a very narrow aspect of latency, things like "if I press a letter on my keyboard, how quickly does it show up on my screen?" And most computer users care much more about throughput than latency, within reason, anyway.

But 20 years ago, starting applications was slow, because spinning-rust hard drives were slow, CPUs were slow, and we didn't have as much RAM. Yes, applications executed fewer instructions to start up, and used less RAM back then, but today I type on a 13" laptop with 64GB of RAM, so much that my OS doesn't know what to do with most of it, to the point where the OS filesystem buffer cache is "only" using up 12GB, and 44GB is just sitting completely unused for anything. Yes, maybe it's absurd that running a desktop environment with a browser (granted, with over a thousand tabs open), some terminal emulator instances, and the one annoying Electron app I can't get rid of, consumes 8GB of RAM, which alone is three orders of magnitude more RAM than I had 20 years ago.

But the bottom line is that I, as a user, just don't need to care about resource utilization. I don't need to close my word processor when I want to play a game or watch a video. Oh, speaking of that -- in 2000, my CPU didn't have the ability to decode DVD-quality MPEG2 video in real time! If I tried, CPU usage would be pegged at 100%, and the video would still stutter and be unwatchable. (I'm not saying that there weren't CPUs in 2000 that could do this, just that mine could not, and I couldn't afford one that could.) Today I can stream 4k video from some storage location tens or hundreds of miles away, and decode it (on GPU or CPU) and play it back on a machine that sits on my lap (the aforementioned 2000-era CPU of course lived in a big tower case).

"Computers were faster 20 years ago" -- yeah, right.



I disagree, Microsoft Word on a Mac in the 90s was lightning fast and rock solid.

Few apps start up instantly in 2022. Many require a good network connection and/or half a gigabyte of RAM, for no good reason.


> I think the point he makes is that we might be losing the ability to produce efficient code because we’re not teaching these techniques.

Is this true though? It honestly seems to me that it's more like we teach a lot more people to do those techniques than will ever need to use them, and that this results in a lot of frustration and / or time spent prematurely optimizing.


I think we'd have to define "efficient code." Like, not doing something big-O stupid? That's covered in computer science classes and as a science it is pretty human-understandable and timeless. Micro-optimizations? Perhaps between optimizing compilers and massive reorder buffers/massive caches/clever branch predictors, there's not as much need for that sort of thing.
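To make the distinction concrete, here is a minimal C sketch (a generic illustration, not something from the talk) of the "big-O stupid" failure mode versus the micro-optimization case: the slow version accidentally re-scans the string on every iteration, while the fast one is a single pass that modern compilers and caches handle comfortably.

    #include <stdio.h>
    #include <string.h>

    /* Accidentally O(n^2): strlen() re-walks the whole string on every
     * loop iteration, so the total work grows quadratically. */
    static size_t count_spaces_slow(const char *s) {
        size_t count = 0;
        for (size_t i = 0; i < strlen(s); i++) {
            if (s[i] == ' ') count++;
        }
        return count;
    }

    /* Same result in one O(n) pass: stop at the terminator instead of
     * recomputing the length each time. */
    static size_t count_spaces_fast(const char *s) {
        size_t count = 0;
        for (size_t i = 0; s[i] != '\0'; i++) {
            if (s[i] == ' ') count++;
        }
        return count;
    }

    int main(void) {
        const char *text = "the quick brown fox";
        printf("%zu %zu\n", count_spaces_slow(text), count_spaces_fast(text));
        return 0;
    }

Avoiding the first kind of mistake is the timeless, teachable part; shaving further cycles off the second version is where optimizing compilers and hardware now do most of the work for you.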


College programs teach both things. Then you go into the working world and it's just not what most people spend much of their time doing, not because of some oversight or deterioration in society or something, but simply because there are more important things to do. And this frustrates a lot of people (especially young people) who loved that part of school and maybe even got into the whole thing because they loved this part of it. Then sometimes those frustrated people (or people trying to avoid becoming frustrated) spend a lot of time on it, even though it's really not the most important thing to work on. And that's fine and all, there are definitely worse things to spend time on, but it also makes sense that a lot of other people spend all their time on the higher priority things.


I don’t know if there’s ever been a comprehensive survey on the subject. But it does seem like a rare skill sometimes. And it’s not like we’re lacking examples of bloated, slow, and error-prone software these days.

Is it getting worse? I don’t know. My impression has been that it’s a marvel it works so well when it sometimes seems like the whole enterprise is too complicated to work at all.

But maybe we have simply been conditioned to not want better. Errors are fine, leaking memory happens, security vulnerabilities are inevitable. What can you do?

Is it worse than it was 20 years ago? I don’t know. I’m not much younger than Blow and grew up programming on an Amiga. I do enjoy fond memories of programming a machine that I had a manual for and knew well. It is very different today… but also much easier and more inclusive of folks with different skill levels.


I don't know if we should necessarily use slow/buggy software as examples. IMO it's very conceivable that it's written by devs who know better, but they're not optimizing for those metrics. I've certainly been in situations where I've thought "well, I can reduce latency here by 100ms, but is that worth delaying the project by 2 days?"


Well, I wouldn't go that far. Blow is a smart guy, and games are hard to make. Two mainstream successes are more than I have. :P

> If we only teach folks how to script Unity and Unreal how are we supposed to get the next generation of engine developers to build the foundations of the technology?

Why can't we have both? Lowering the bar to allow less-experienced engineers to create things doesn't mean that we've prevented engineers with more experience from continuing to exist and thrive. We've just created a richer stratification of skill levels than previously existed.


Fair, I was being a bit harsh with his credentials. He’s also freely streaming and posting how bad modern software is.

And again I don’t think he said, in the talk mentioned in tfa, that we can’t have both. His primary concern seems to be that the scales have tipped too far one way.

I think there is some merit to that. Go to any game dev forum and ask about programming and 9 times out of 10 you will be guided towards using a game engine. The common wisdom is: don’t learn to program, just make the game.

And that is 100% fine advice and Blow admits that in the talk.

But knowledge transfer is the key here and if we’re not making enough space in the education of future programmers for that lower level part of the stack then maybe one day we (as in society) could forget how.

I still think that’s highly unlikely to be happening today but it’s an interesting thing to think about.

I often find myself frustrated by the gobs of memory, fast drives, and cores we throw into modern machines, and yet my browser can still stutter while scrolling a page.


I'm not really so worried about this knowledge transfer thing. I think programming knowledge is being transferred just fine. It's hard to see it, though, because there are several orders of magnitude more programmers now than there were 20 years ago. You don't need a constant X% of programmers to know how to build a game engine. You just need a couple hundred of them, or so, and as long as that knowledge is passed to a couple hundred of the next generation of programmers, that's fine. And if only a couple hundred -- or even a thousand -- programmers know how to build a game engine, it's easy to see how it might feel hard to find those people among the millions of programmers out there. But they're there, and they're mostly probably doing useful work and are propagating their knowledge appropriately.

I also think that many many many more people know how to write efficient programs than Blow or other doomsayers might fear. It's just that most of us have better things to do than micro-optimize or write our own engines or toolkits or whatever for everything, and the state of modern computing means that we don't have to bother -- even while many of us decry the inefficiency of things like Electron, or in attempting to write a high-scale distributed system in Python or Ruby. And I agree with you that it's dumb that we can throw an 8-core 4GHz CPU and 32GB of RAM at a web browser, and it can still stutter while scrolling down a page. We clearly haven't always found the best balance here. But that doesn't mean we've lost the ability to do so.

I think plenty of people have all this tribal programming knowledge, and they're doing just fine passing it along to the next generation of programmers. It's just that most people don't need to actually make use of that knowledge in their day-to-day work. Yes, I do believe that lack of use causes some atrophy, but I don't think that's the big problem that some would make it out to be.


> He has released two games in his career so far and an as-yet-to-be-released compiler that he’s been toying with for years: and he calls modern software developers unproductive!

In fairness he's been juggling multiple projects at the same time over the "for years" period that you mention: the new programming language, one major engine and game in said language, new releases for Witness and Braid, a secret game project, plus managing a game studio on top.

Also the two games he released "so far" are pretty popular and pretty well received by the critics _and_ they're programmed from scratch.

I think you're being a little uncharitable to someone that probably spends 75% of their waking life doing work (or ranting).


I don't think it's uncharitable at all. The point of writing software is to get it into the hands of users so they can use it.

I have a lot of respect for him for doing such high quality game work, mostly or entirely on his own. But let's not pretend he's some highly-productive programmer who churns out groundbreaking software at some amazing pace. I reserve such praise for people like Fabrice Bellard, who not only builds amazing things, but has built a lot of amazing things.

Tinkering around with a new language is cool and impressive, but it doesn't make you a "productive" programmer. Managing a game studio is useful work, but it's not "productive programming". I agree that his two games are great, and it's impressive how they were built, but... so what? He spent three years on Braid and seven years on The Witness. Is it "productive" to spend ten years writing two games? Not saying he was wrong to spend his time as he did; that's entirely his choice, and I know I personally have appreciated the output of those ten years. But I think many programmers have done much more productive things than building two games in ten years. I'm honestly not trying to shit on Jonathan Blow, but I think the hero worship I see over him is just a bit much.


Fwiw I would say he has been highly productive compared to most engineers. Two complete games of that quality are nothing to sneeze at. But he does seem to take certain ideas much further than I would.


I'd highlight the handmade.network software community, who superficially side with Blow, but if you read their manifesto on their website, it puts a lot of emphasis on the end user's experience.

They cite Casey (of Handmade Hero), and I recall watching a casual lecture of his on a case study of performant software. In his conclusion he proposed that as innovative greenfield projects claim the low-hanging fruit of developing software markets, those markets will increasingly reward competitors who can undercut on the basis of intentionally (or deeply) crafted software architectures and the associated benefits to users (stability and performance). This could be refuted by the suggestion that early market participants can establish strong enough monopolies to overcome their technical debt, but it still shows another source of motivation for the Handmade community's ideals.


I want them to be correct, but I agree with the sibling poster that getting to market first and developing network effects tends to win.

Perfect truly is the enemy of the good. "Good enough", by definition, is good enough, and perfection is rarely rewarded. Unless such perfection provides a huge leap ahead for users, which is rarely the case.


They will get to relearn about first mover advantage and switching costs


I think you're correct in this assessment. I've recently switched from many years of low-level programming, doing real time animation/graphics into the large scale SaaS world.

The things these two camps of software engineers care about are completely different. In the SaaS world it's all about interfacing between systems and managing complex business processes. There's often a lot of organization/political problems, along with auditing and legal requirements. There seems to be a lot more emphasis on developer productivity than end user performance.

Not that these engineers don't care about performance, but their tolerances are much higher. For example, 100-200ms turnaround is entirely acceptable, or even fast, in the SaaS world, while in my previous job that was absurdly slow and unacceptable.

I think another underappreciated thing is that the scale and complexity is truly larger today than yesteryear. Even in real time graphics, Unreal Engine incorporates literally decades of cutting edge research into a framework that a new game programmer can start running with in hours. There are the various platforms like mobile, PC/Mac/Linux, graphics APIs like DX12, Metal, Vulkan, multiplayer syncing, animation, model importing, etc etc. There are just so many more things to support these days that there's going to be some slippage.

And each of these new tools and platforms were made with the intention of doing something better than a previous one, but then the real world creeps in and here we are.

It would be great to see more effort and research into rethinking from the base level, maybe even hardware, of how we can simplify this entire software stack and have our cake and eat it too.


I don't think inefficiency is his only point.

If it could be written with all the inefficiencies and deliver value faster without bugs, that would be one thing. But the problem is, software is consistently buggy, and you run into bugs every day. It's now hard to notice because it happens so often; we're inured to it. That's why he challenges you to write down all the bugs you run into every day, to make yourself more aware. You can see his list here: https://youtu.be/pW-SOdj4Kkk?t=1347

I've also done this exercise, and it's surprising how many you notice when you're looking for them and writing them down.

Modern software is terrible in the sense that it's still buggy and hard to use despite access to immense power of modern computers--not merely that it's inefficient.


Jonathan does indeed complain about bugs, but as is typical for this type of programmer he's sure these are somehow being introduced by "Bad" programmers who are incompetent or lazy, not by "Good" programmers like Jonathan.

You get stuff like: "The Jai philosophy is, if you don’t want idiots writing bad code for your project, then don’t hire any idiots." which probably feels good to say to yourself, but of course it's not going to result in fewer bugs.

Jonathan doesn't like Exceptions, which, fine, I agree control flow and error reporting shouldn't be bundled together, but if you're serious about preventing bugs and you don't want exceptions, you need to actually write a lot of error handling, which means error handling needs to be ergonomic in your language. In Jai there might be error flags; you might be expected to check them, but if you don't, it won't complain, because you undoubtedly know best. Pressing on regardless becomes ergonomic, like in C.
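For readers who haven't fought this in C, a minimal sketch of what "pressing on regardless" looks like (a generic illustration with a made-up filename, not actual Jai code): the error is reported through the return value, and nothing forces the caller to look at it.

    #include <stdio.h>

    int main(void) {
        /* fopen() signals failure only through its return value. */
        FILE *f = fopen("config.txt", "r");   /* hypothetical file; may be NULL */

        /* Ignoring the flag compiles cleanly; the program just crashes
         * later if the file was missing. Handling it means writing the
         * check yourself, every single time: */
        if (f == NULL) {
            perror("fopen");
            return 1;
        }

        printf("first byte: %d\n", fgetc(f));
        fclose(f);
        return 0;
    }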

And so Jai certainly isn't going to do anything about how buggy software is.


> Jai certainly isn't going to do anything about how buggy software is.

Nothing is going to do that, except the competence of the developer.


That's certainly Jonathan's contention. I would argue that it's already obviously wrong.


Sure, but... so what? The market has -- unfortunately -- shown us that it's better to put an inefficient, perpetually-buggy product in front of users, than it is to wait until you've micro-optimized everything and fixed most of the bugs.

The exact same argument we can use to describe why inefficient software tends to win can be used to describe why buggy software tends to win.

Of course users hate bugs. But even more they hate not having software to solve their problems. Hands down I would choose to have something with bugs today, than wait for something bug-free 6 or 12 months from now.

And unfortunately, once you put something with bugs in front of users, you're going to be pushed to work on new features for the next version, rather than fixing the bugs of the previous version. But arguably that's irrelevant! Users would rather have those future new features in 3 months, over having all their bugs fixed in 6 months, and then getting those new features in 12 months.

It still feels incredibly annoying and lame to me that this is the state of things. And I think that's why people like Jonathan Blow (and many, many others) just don't get it. Working software in users' hands trumps everything else. It almost doesn't matter how inefficient or buggy or $OTHER_NEGATIVE_TRAIT it is. If it solves unsolved problems, then it wins.

[Yes, I know, there are limits to what users will tolerate. But most inefficient, buggy software doesn't hit those limits too often. Software that does... well, yep, they tend to fail. But they're quickly replaced by other inefficient, buggy software that users will tolerate. They're not replaced by super-efficient, nearly-bug-free software, because building that software will always take much longer than building the software that users will tolerate. And users want to get shit done, not wait around while you tell them how much better the efficient, nearly-bug-free software will be.]


I agree that one major contributor to these problems that he doesn't (fully) address in the talk are the unit economics and market forces that help along these decisions.

He does allude to it about faster software (slow being equated with buggy), and I think there are market indications that for specific markets, fast (efficient software) does win--though it is the exception seemingly, rather than the rule. The rule seems to be quick-to-market software wins.

However, I think his point has a wider scope, and I don't think it's irrelevant. Yes, in the short term, market forces push us to put software in front of people quickly, even if it's buggy. But is that beneficial in the long term? What would be the consequences of making short term optimal choices here?

Would we (as a society) forget how to build the things we rely upon in our society because the complexity is too overwhelming, and at every step, we never went back to clean things up and focus on transferring inter-generational knowledge, because we were too busy putting out features?

Just like in the Innovator's Dilemma, the incumbent at every step makes the optimal decision, but ends up getting trounced because a series of local optimized choices is not a globally optimized choice. And Blow here is saying (strongly), that we're doing something similar with how we build software.

And I'm inclined to agree with him, even though I fully recognize the market forces as being strong. Would we keep making short-term optimal choices because we have no choice? Or is there a way out?

Alan Kay has talked about this in some of his talks. He has some neat ideas, wrapped up in unintelligible slides. I've always thought he needs a graphic designer for his slides. Anyway, his thesis is that we build software today much like how the Egyptians built buildings: stacking things wide in order to build high--as evidenced by code bases millions of lines long. We have very few equivalents of arches in software. Once we had arches, we could build higher with less material.

So perhaps there's a way out. We've been searching for it for a long time (in software years, but we're still a young field), ever since before the Mythical Man Month. And Blow, I think, is saying there are real consequences to this software mess we find ourselves in, the worst of which is the collapse of civ itself, esp if we don't find a way to rein in the complexity and the bugginess.

Or we can pray for AGI to get done and do it all for us. finger guns


>The goal is always to produce things that users can actually use.

Consumer software exists downstream from advancements created by people who care about performance and correctness. You can only do extremely stupid things because someone else did non-stupid things to enable that kind of waste.

That sort of works as long as the wasteful people don't outnumber and outrun the few people who keep everything glued together, but if they do, then at some point this arrangement breaks down.

Maybe let's try to convince developers and consumers to be smarter rather than try to make software dumber. We should take a page out of the semiconductor industry's book which is run by engineers and builds towards engineering and scientific targets, not consumer demands. Those are a byproduct afforded by increases in performance. That's why chips, unlike software, get faster.


I kinda said the same thing in another reply, but why can't we have both? We can have experienced engineers working on low-level perf and correctness, and then we can have normal, less experienced engineers who just need to glue together a CRUD app to save some people some time.

> That's why chips, unlike software, get faster.

The difference is that perf is a feature of chips. It's not necessarily a feature of software.


I think the language you use in this post is telling. Under a certain interpretation it's not exactly incorrect, but that framing doesn't really tell the full story.

Your main dichotomy is people doing stupid things vs people doing non-stupid things. But if you consider that "stupid" only makes sense with respect to a particular set of objectives, you may find that other terms are more suitable for characterizing the general situation.

I think you can roughly describe how many resources should be devoted to performance and correctness* as a function of 1. how many other systems will depend on the system in question, and 2. how much departing from the ideals in performance/correctness will impact individual users.

As both 1 and 2 decrease, an ideal development resource allocator would decrease the amount allocated to performance/correctness.

I would argue that making sacrifices in performance and correctness is not necessarily stupid behavior at all; departing from an ideal resource allocator would be the less smart thing to do.

*Assuming you're working with a resource pool smaller than what would be required to produce an Ideal product, as is typical.


> We should take a page out of the semiconductor industry's book

I see this sentiment by programmers often. Programmers often have a naive/romantic view of what the semiconductor industry is like. If you've experienced how inefficient the meatspace operations alone are regarding what goes on to bring semiconductors into existence, even by meatspace standards, you'd wish you were surrounded by the kinds of peers who were dumping bloaty-by-nature Electron apps on the world. (That's before we even get to the software that those engineering orgs are cobbling together themselves, for a more direct comparison.)


> Maybe let's try to convince developers and consumers to be smarter rather than try to make software dumber.

I don't get this. What would it mean for consumers to be "smarter"? Smarter in what way?


We taught the world how to read a century ago, surely we can teach every human how to navigate interfaces more complex than Twitter? It's just that today we expect interfaces to have zero learning time, and that doesn't seem to be a very efficient use of software.

Like, imagine how much better many interfaces could be if we taught basic regular expressions in schools? Things like searching and filtering could be much more expressive everywhere. Regular expressions are so simple that anyone could learn them in a few hours.
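For what it's worth, the machinery behind such a filter is small on the developer's side too; here is a minimal sketch using POSIX regex.h (the pattern and sample lines are invented for illustration):

    #include <regex.h>
    #include <stdio.h>

    int main(void) {
        /* Hypothetical filter: keep lines that look like "INV-2022-0042". */
        const char *lines[] = { "INV-2022-0042", "hello world", "INV-1999-7" };
        regex_t re;

        if (regcomp(&re, "^INV-[0-9]{4}-[0-9]+$", REG_EXTENDED | REG_NOSUB) != 0)
            return 1;

        for (size_t i = 0; i < sizeof lines / sizeof lines[0]; i++) {
            if (regexec(&re, lines[i], 0, NULL, 0) == 0)
                printf("match: %s\n", lines[i]);
        }

        regfree(&re);
        return 0;
    }

The point isn't that end users would write C, of course, only that exposing a pattern box in a search UI costs the developer very little once users know the notation.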


> surely we can teach every human how to navigate interfaces more complex than Twitter?

Of course people can learn how to navigate more complex interfaces than Twitter, but why? For what? Certainly not for the purpose of social entertainment like Twitter et al. What a waste of time that would be!

> Like, imagine how much better many interfaces could be if we taught basic regular expressions in schools?

Ok I've imagined it and I think the answer is "not much better". Pretty much all of us here know regular expressions and we all use interfaces purpose-built for people like us who know regular expressions, but beyond searching and replacing, this isn't a particularly powerful interface element. Like, it's fine and all, it's just not revolutionary. Searching could be made a bit better with widespread understanding and use of regular expressions, but little else would change.

I really think you're underestimating the extent to which interfaces are already really useful to the people using them. We have endless scrolls in social media, which is what people want for the mindless entertainment they're seeking. We have Photoshop for photographers, Ableton for people making music, really good CAD software, tons of good film creation and editing software, the Bloomberg terminal for financiers, really good editors and IDEs for people making software, etc. etc.

Of course there are exceptions, places where there is real room for improvement (I think streaming video interfaces are pretty bad, for instance), but for the most part interfaces are what they are because they serve their purpose well, and not because people are lacking education in their use.


> The goal is always to produce things that users can actually use. And if that means doing hideously inefficient things, like running your app inside a chrome shell, so be it, if that means you delivered value faster.

I guess the core issue to me is: value delivered to whom? So far the value I see is for framework users who can pull together a 100K-line CRUD app in one month, of which 99% is just framework glue/code.

However, as an end user I am getting a rather bad experience: my computer fan runs at full speed, loading webpages or filling out online forms is slow, and endless error popups appear and disappear without my being able to do anything about them. It is not even about efficiency, just plain unreliable/unusable software delivered in the name of "empowering" me.

If I weren't a software engineer I wouldn't even know that this crap is pulled by software engineers. A non-technical person would simply assume the computer is bad/old, the internet is slow, and so on.


He's also claimed that our computers are 2+ orders of magnitude away from the productivity that should be achievable while still reaching his goals. It's not so much that all software of old was amazing; it's that we really have lost something when RemedyBG is 1000x faster than Visual Studio, with Visual Studio having hundreds of years of development effort behind it and RemedyBG a couple of years.

His philosophy is that if we built software differently, we could build it faster and it would be faster. The emphasis on garbage collection and borrow checkers and everything else related to memory management misses the point (memory arenas are way better for the vast majority of use cases). The emphasis on Object-Oriented code is awful. The paradigm of RAII is completely counterproductive. There are very specific issues that Blow takes on, not just a general feeling of malaise.
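For readers unfamiliar with the term, here is a minimal sketch of a memory arena in C (a simplified illustration of the general technique, not Blow's or Jai's actual implementation): allocation is just a pointer bump, and everything allocated from the arena is freed together.

    #include <stdlib.h>
    #include <string.h>

    /* A toy arena: one large block, a bump pointer, and a single free. */
    typedef struct {
        unsigned char *base;
        size_t         used;
        size_t         capacity;
    } Arena;

    static int arena_init(Arena *a, size_t capacity) {
        a->base = malloc(capacity);
        a->used = 0;
        a->capacity = capacity;
        return a->base != NULL;
    }

    /* Allocation advances an offset; no per-object bookkeeping. */
    static void *arena_alloc(Arena *a, size_t size) {
        size = (size + 15) & ~(size_t)15;          /* keep 16-byte alignment */
        if (a->used + size > a->capacity) return NULL;
        void *p = a->base + a->used;
        a->used += size;
        return p;
    }

    /* Everything allocated from the arena is released in one call. */
    static void arena_release(Arena *a) {
        free(a->base);
        memset(a, 0, sizeof *a);
    }

    int main(void) {
        Arena frame;
        if (!arena_init(&frame, 1 << 20)) return 1;    /* 1 MB for this "frame" */

        float *positions = arena_alloc(&frame, 1024 * sizeof *positions);
        char  *scratch   = arena_alloc(&frame, 256);
        (void)positions; (void)scratch;                /* ... use them ... */

        arena_release(&frame);                         /* one free for everything */
        return 0;
    }

The trade-off is that individual objects can't be freed early; the win is that allocation is a few instructions and lifetime management collapses to one release per frame or request.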


Kinda think that you celebrating being wasteful is proving his point.

In my mind writing performant, somewhat efficient code does nothing but improve the UX, and if you actually care about your users having a good experience using your software then you shouldn't be celebrating wastefulness.


Wasteful is relative. By focusing on optimization you are being wasteful with programmer's time.


Making a better product is not wasteful of anyone's time.


Better for who?

What you consider good and I consider good for a product can, but doesn't have to be the same. And there is an opportunity cost in doing one or the other.


I think it is important to have at least some software as efficient as possible, especially server software, because huge data centers eat a lot of electricity and we cannot always take cheap electricity for granted.

(The recent price spikes in Europe are crazy.)

That said, such improvements may well be canceled out by other activities such as Bitcoin mining.


One of his points is that software engineering techniques did not improve at all, and even regressed, but all of that is/was hidden by the fact that hardware got us a free ride for several decades.

You can use terribly slow scripting and techniques and get something working, not because the tooling is genius, but because the hardware is so incredibly fast.


I think the strongest steelman version is that he argues efficient software does in fact provide value to users, whether or not they know it. And perhaps not always in terms of the application they have running right in front of them, but at a systemic or societal level. Little things add up, especially since software has eaten nearly every crumb.


I very much disagree with JBlow because I came from games, I had similar opinions, and then I worked on "real software" (Chrome) and had my thinking corrected. The problems JBlow and similar game devs are solving are not the same problems being solved elsewhere; if they really tried to solve the *same* problems, I strongly believe they'd change their minds.

IIRC one example he once brought up was a text editor written with a small amount of code. Yeah sure, I also had a word processor that fit in 16k (https://en.wikipedia.org/wiki/AtariWriter), but we're not on ASCII-based computers with only a text mode anymore. Just a full Unicode font takes tens of megs of space. We're asking our software to draw fonts in any size, in several styles, in any language, and with emoji. That alone would break most of his assumptions, because you can't just put them all in memory as pre-rendered glyphs; your users don't have enough memory. Yet in games, most game devs would just basically say "F that, you only get ASCII" or "F that, we're only supporting EFIGS + Katakana", and we only need one style.

Two good articles on the issues of text that games never deal with

https://gankra.github.io/blah/text-hates-you/

https://lord.io/text-editing-hates-you-too/

The point being, it's the same all the way down in all areas. Game networking is not nearly as complicated as, say, the networking in a browser, which has to be able to run N video streams + M audio streams + X WebRTC video chats + Y WebSockets, plus video capture + audio capture and sending that data over the net, etc., vs some game which, if it ever plays video (many don't), only ever plays one video at a time, in known sizes and formats, and usually not streamed over the net but read directly from storage, etc...

Or take just displaying images. Most games know up front how many images will be displayed. You might even give your artists a budget, no area of the game can use more than XX megs of image data. A browser can't run with those limits. Go look at imgur.com or scrolldit.com or pinterest. Situations games never deal with.

And on top of that there's just getting shit done. If I can write an app in a high-level system and ship on iOS, Android, Windows, Mac, iPadOS, I'm going to. I don't care that it's more bloated than if I hand-coded assembly language for each one, and neither do my customers. Sure, they care if my app is buggy or the UX sucks, but that's got nothing to do with whether I squeezed every last byte and cycle out of it. HNers like to rant about apps that use some kind of less-than-native framework, but nearly all of them are using well-made apps that use those frameworks; they're just not aware of it because the people making them did a good job.


I do not want emoji on my computer, and I hate Unicode. I also think that bitmap fonts are less fuzzy, and can be better at the proper size.

I write software avoiding Unicode as much as possible; if you need international text then you can specify which code page numbers to use. But that would still allow the possibility of text directions, Chinese/Japanese (without Han unification), etc., just in a simpler way.

(Even as someone writing a game engine, I add the possibility to specify code page numbers and use fonts for those code pages. Currently only 8-bit characters, but in the future possibly also EUC (and/or other multibyte character sets) so that Chinese/Japanese text, etc. is also possible.)

GTK is terrible (I prefer Xaw). Use of Unicode is one problem, but then although there is a menu to insert Unicode control characters, they are not visible even while editing (making it difficult to edit), and I also do not like the use of kerning during editing.


> I do not want emoji on my computer, and I hate Unicode.

Ok well good for you but that’s akin to saying “I don’t want other languages than English on my computer and I hate the color Green”. It’s completely irrelevant what you prefer, the point of general computing is that it’s not for you, it’s for everybody. If you are writing software for everybody then I’m sorry but xenophobic assumptions just don’t hold.

Because really, what’s hidden in the meaning behind your post is “I don’t want people who speak languages other than English to be able to communicate as easily as me”. I’m sure that’s not your intent but it is the direct result of your actions.


It is not true. I do want people who write in other languages to be able to communicate (better than me, not worse), using better encodings than Unicode. I do intend to support other languages writing, including international text (text directions, multibyte encodings, etc). However, Unicode is too messy and isn't good (and has problems such as the Han unification, ambiguous widths, changes in versions of Unicode which complicate the specification, etc). Some people think that TRON code would be better, although TRON has its own problems (for example, its design means that character sets other than JIS don't fit properly), even though there are some advantages, especially for writing in Japanese. My opinion is that it is better to not use a single character set for everything; instead, different ones are useful for different purposes.

(And if needed, a program to convert encodings is possible, although this can result in incorrect characters sometimes, depending on the context. It is an approximation if you have no other choice, but it would be better to use proper fonts and text handling for the appropriate code pages.)


Think really carefully about it though. Going back to separate charsets for everything would mean losing the ability to communicate multiple languages as part of the same text.

This might not matter to you, but it matters to… for example… anybody learning a language. Например, я провел последние месяцы, изучая русский язык. Как бы я написал это, если бы мой пост был в ASCII? (In English: "For example, I've spent the last few months studying Russian. How would I have written that if my post were in ASCII?")


I have thought about it, and considered many things, and I had concluded that Unicode isn't better. Different applications require different capabilities, and Unicode has problems with mixing languages (including Han unification, and others such as Turkish letter "I"), ambiguous widths, character equivalences, homoglyphs, and other problems. (Which things are desired and undesired, and which things are missing, or are incorrect for the specific use, depends on the application.)

You can still include, in some formats (such as document formats), codes for code page switching. This has some advantages including Chinese/Japanese together in the same document, as well as being a cleaner way to implement mixed text directions, etc.

Also, sometimes the character set does not have all of the Chinese characters. Cangjie can make many more possible Chinese characters, but maybe an extended version with up to six parts might allow more combinations; I do not know for sure.

And these considerations are not even close to everything. Different sets of encodings are useful in different uses, I think. TRON code has its own advantages and disadvantages compared with Unicode, but I think that switching encodings also has some advantages (and also disadvantages), too. (There are also some things that I do not know of TRON, due to the documentation being difficult to find.)

One idea that I have seen is having a separate language and glyph field, to allow displaying approximation of characters that you do not have in your computer. Whether or not this is appropriate depends on what you are making, though. I had thought how to do something similar with a "output-only encoding" which has several fields and is also linked to the text in originally whatever encoding it might be, which also allows approximated display of fonts that you do not have in your computer, but it is only used for display and only as a fallback mechanism.

(My opinion is that mixing text directions within a paragraph should not use line breaking in the middle of a quotation that is not in that paragraph's main text direction (for text in the opposite direction, only short quotations should be inline); use a block quotation instead if needed.)

Maybe, I should need to make up ICNU (International Components for Non-Unicode), to properly deal with international text. I have dealt with such things so I know what some of the considerations are, in making an interface with the needed capabilities. (One capability I intend to include is possibility of using TRON character codes too, although you can also use other character sets, including Unicode in some cases (such as existing files and fonts that are using Unicode). However, usually extra data files would be needed for many purposes, but you can specify how to find these files. Furthermore, the ability to disable specified encodings (especially Unicode) also is important.)

My own designs of programs, file formats, protocols, etc are avoiding Unicode as much as possible, since I think that other way is better. However, in some cases you can specify any code page number anyways, so you can still specify code page 1209 for UTF-8 if wanted. In other cases, this does not work (since it is not appropriate for that specific use).


> write software avoiding Unicode as much as possible; if you need international text then you can specify which code page numbers to use.

For the sake of humanity, please don’t say we should consider bringing back code pages. My European language alone has 3 different ones (or was it 4?), and you can still see some places where code pages keep causing problems posthumously because they mojibake quite readily into other encodings and screw everything up. I was an infant when codepages were all the rage, and it’s still causing problems to this day even though I never had to contend with them in day to day use.

Unicode is not perfect, but it’s here and it provides a unified character namespace for the whole planet. That’s certainly something I don’t want to leave behind.


And the new James Webb telescope runs Javascript! It must have power and CPU cycles to burn.


I'm genuinely curious why you think they chose to have it run javascript.

I just spent a few minutes looking into their actual usage of it and, ime, it has all the indications of being a reasonable decision.

Here's a bit more info from [0]:

> It is my opinion that NASA settled on the Nombas engine, after extensive research of their own into all the options available at the time, not so much because of the JavaScript language itself (although the familiarity of the language was a bonus), but because of the solid and robust nature of the ScriptEase engine and the tools that went with it.

[0]: http://brent-noorda.com/nombas/us/index.htm


Can't wish away the power, memory, and speed issues JavaScript has, meaning the telescope must have all three to burn.

I wrote a JavaScript engine back in 2000. I came away from that with a distaste for dynamic typing. It seems easier to write code with dynamic typing, but I wouldn't give it many points for robustness. JS has many odd behaviors if an unexpected type is passed to a seemingly simple function.


You've sidestepped my question though :) Why do you think NASA made the decision to use it? Is it not possible there were legitimate reasons?

The implications you're suggesting about power, memory, and speed, or even predictability in semantics, depend on the particular language spec they implemented (it was not standard ecmascript), how they implemented, and how that implementation was used within the larger system.

It seems to me like just knowing it was javascript (for some definition of javascript) is not enough.


> Why do you think NASA made the decision to use it?

Probably because they couldn't find low level programmers, it isn't easy to find people with good professional experience in C today, especially if you pay NASA wages.


This is certainly not the case. For more details do a little exploring in the link I provided above. For one, the bulk of their code is low-level, just not this particular system where it wouldn't have made sense; for two, they did this in 2002 when it was almost certainly easier to find C programmers than javascript programmers; for three, the people writing this javascript were domain experts, not professional programmers.


Some people would kill for the ability to put a 1/2-year NASA stint on the resume, if for no other reason than the idea that they contributed to the scientific research and exploration of space.


Javascript in those days was far slower and needed much more memory and power than native languages. Nothing in your link suggested otherwise.


I'm not entirely sure JavaScript itself is a horribly worse option than anything else that was available.

Now, the ecosystem currently built around it...


> But running software that fit in kilobytes of RAM was never the goal of software engineering. The goal was providing value to your users.

By this definition of the goal you are indeed obviously right. So it's pretty clear that Blow and others don't agree with this goal. I'll try to map out two different POVs on the goal of engineering: imho the goal you are describing, talking about "value" etc., is associated with a world-view based on economic exchanges, on the idea of markets. In this world-view, engineering has as its goal to sell something to a third party and be profitable to you, thus you will minimize your costs and do the minimum you can get away with. This is the POV of the currently hegemonic economic sphere.

What Blow defends, i.e. an engineering goal of using minimal resources for a given concrete objective, is much more in line with a world-view based on commons, on basic infrastructure for common needs. In this world, the most natural setting for doing the engineering work is not one of economic exchange but a cooperative one, where the people that need the thing get organized somehow to get the thing done. In this setting, it is clear that the goal is not to do the minimal thing people will be OK with; the goal is to create a durable tool that is practical to use and uses minimal resources. In particular the investment (in human planning) can be very high; it doesn't really matter. This world is also close to the concepts of lowtech (basically favoring simplicity and robustness).

> Yes, some users do care, and I think that a disproportionate majority of them hang out on Hacker News. :P

A disproportionate majority of what? This is a classical HN fallacy: yes, apparently HN likes to defend things being done "nicely" (at a good technical/resource tradeoff point) and not just "well enough" (at a short-term-profitable tradeoff point). But the silent majority you are talking about, the one that doesn't care, is very much restricted to the people that can afford to pay for ever-changing computing devices. Old people (cannot change fast), young people (get old family devices), people in less geopolitically gifted areas (less globalized purchasing power)... they all have almost-obsolete technologies because they are constrained in several ways. You are talking about the majority of people that are visible to your economic world, but a lot of people are not inside of it, or are on its outskirts (and if you say "but the economic sphere must be more inclusive and expand" then you are again pushing a particular view, which is not the only one and is debatable).

One can also make the case that with the current state of affairs regarding fossil energy and limits to growth, going forward more and more people are going to have these big constraints. This second view is very popular with people thinking seriously about ecology and crisis.


> Commercial software is optimized with respect to a set of business goals

I'm not sure that's entirely true. It ostensibly is, but the way it is written is not optimized for long-term business goals. Because the way software tools work and the way software is written is that it is all completely optimized for short-term business goals and results.

I'm trying to pen an article that addresses this, because software is often if not solely viewed as just doing something, when that's only a component of what it does. If we additionally view it as a way to communicate amongst humans and a way to think about and encode a domain, then we begin to see the failings of software in today's world. In this holistic viewpoint, software is best viewed as stored and interactive knowledge. If software is simply viewed as a way to do something, then the stored knowledge becomes highly implicit and incomplete, and a communication breakdown happens. Viewing things with this lens, it is no surprise why today's software is primarily towers of shit. There is no handling of variety, that is complexity, because the way to handle it is to properly communicate within the medium of software systems and encode the domain. Otherwise, software just doing something often creates more complexity than it reduces.


I think it's an interesting thesis, and I like the point about using it to encode domains represented through it (I agree if you want to build non-overcomplicated systems, the first major factor is to minimize impedance mismatch with the domain it represents in the base structures you build on, i.e. in the 'language' you develop within your codebase to eventually express domain logic through.)

So while I have some issues with this as a generalization (because I think the reality is more diverse):

> Because the way software tools work and the way software is written is that it is all completely optimized for short-term business goals and results.

—let's assume it now for the sake of argument. What's still weak in the thesis imo is the connection from your proposed better way of approaching software development to longer-term business goals. For instance, I think one possibility is that the typical state of affairs, over a longer time-span, is that what the software itself should even be, what it needs to accomplish, which systems it interacts with etc., is volatile enough that in most cases (there are very important exceptions of course), businesses would do better long term by optimizing for flexibility in changing even the spec for the software, vs devoting more resources to ensuring any individual representation of a software concept is particularly durable.

What I'd want to know next on the matter is what evidence exists for the likelihood of one stance over the other being correct. Are there examples of extraordinary long-term success from companies whose approach to development went in the direction of durability (I'll say as an approximation)? Is it a common occurrence for companies to succeed short-term by making sacrifices in quality, and then to fail longer-term specifically because of their over-complex / incoherent, buggy software (or is that more of an exceptional case)?


This is an interesting viewpoint, but I don't understand what this has to do with "today's" software. Nor do I understand why you think it's primarily towers of shit.

I don't think we have great ways of encoding domain knowledge, let alone turning it into software (and I wonder if we ever will). Encoding domain knowledge is mostly about human-to-human communication, and there is some value in using programs as a communication medium, but I think transferring and extending domain knowledge is primarily a human process (meetings, documentation, strategy documents, etc.) that software would actually slow down.

I honestly find today's software quite fantastic. I don't know what tree Jonathan Blow is barking up, but compared to the '80s, it is unbelievably easy to find good code, good books, just an infinity of resources on every aspect of software you would like to target, from the intricacies of whatever pipelines your CPU has, to large-scale distributed systems, to pragmatic product thinking in software architecture. When I started programming, I was lucky to find Abrash's book about performance, and... that was it. Nowadays, not only are there a million videos on YouTube, but I can literally look at the internals of most advanced optimizing compilers under the sun, and have direct access to all the published academic research on the same topics.

In fact, the reason so many people can just grab JavaScript and create a full webapp in a couple of days is because so much amazing work is being done on low-level optimization. Not only that, but proper software engineering techniques that were hit or miss even just 10 years ago are common in even the smallest JavaScript projects (source control, versioning, CI/CD, unit tests, automatic linting and analysis, packaging, often solid collaborative development guidelines).

I regularly switch between counting cycles in bare-metal embedded and writing e-commerce JavaScript and PHP applications. In embedded, my metric is writing tight software (because tight means a cheaper chip); in the other, my metric is making a retail business successful, and software is but a tiny part of that. I literally do not care that my whatever analytics job running overnight takes 2 hours when I could optimize it down to 30 seconds, because, as funny as it sounds, tweaking the CSS of the email banner or handling proper retina ad banners is a much more relevant problem.

Seeing the amount of talented and dedicated people working in software, I can't help but think of Jonathan Blow as someone who is deeply incurious and small-minded: he cannot fathom what richness is out there, and assumes that other people wouldn't be able to leverage said richness either.

I'm glad he is a successful indie game developer, but that is not the high horse he thinks it is.


This is a fascinating point and it really makes me think of the mechanical analogy: different mechanical engineers can provide different levels of tolerance (i.e. precision), depending on their training and experience. You wouldn't hire an F1 engineer to design retail automobile machining, because both the hire and the machining process would be overkill.

The software industry is stuck in this rut where everyone acts as if they're designing F1 cars, but few teams outside of FAANG actually are. In other words, for the majority of software jobs, a bootcamp and CRUD experience is likely enough.


> You wouldn't hire an F1 engineer to design retail automobile machining, because both the hire and the machining process would be overkill.

Not only overkill, but detrimental to the goals of the retail market. The tolerances in an F1 car are so tight that the engine needs warm oil pumped through it before it can even be started. Great for maximum performance, not great for the driver who wants to just hop in the car, turn the key, and go.


I think most software development is already internal, line-of-business stuff at companies HN largely ignores. The level of tech used is quite boring.

Ask someone with LinkedIn Recruiter to run you through a broad search of developers mentioning stuff in fashion for bigcorp internal teams, like Java or .NET - you'll see a TON of profiles for stuff that has nothing to do with FAANG or FAANG-like tech stacks.


I'm sure the F1 engineer is capable enough to adapt to different requirements.


Why are you so confident? Jonathan Blow can’t seem to adapt.


I didn't know Jonathan Blow designed F1 engines.


If this is the case then it means that there is no competition in the ecosystem and that the regulators need to start busting monopolies and other anticompetitive behavior.

In a competitive market where there is actually free entry and free exit and consumers are not locked in, there should be no such thing as diminishing returns at the level of brokenness we see here, unless we think that writing functioning software is somehow impossibly difficult.


Do you have any fields in mind where people are picking "perfect at higher cost" over "cheap and crappy but adequate" en masse? I have trouble thinking of any, and "lack of competition" hardly seems to be the reason. "Race to the bottom" isn't something that happens without competition, after all.


I'm oversimplifying a bit here, but I think the strategy that most major software firms are following right now would match the "adaptive radiation" phase of ecosystem evolution. There are so many empty niches, as a result of the high-dimensional space of user requirements (where, say, 2 additional features are sufficient to differentiate a product; see e.g. today's discussions about Slack vs Discord vs Teams), that companies hardly need to compete directly with each other. This would be a virtuous explanation for the lack of competition.

Another candidate explanation for what is going on in your "race to the bottom" likely has more to do with signal-to-noise issues and other types of adverse selection, where e.g. a company spends only just enough money for a product to make it past the average review date, at which point the device/whatever will fail. These marginal devices/software/whatever are then sold by literally copying and pasting the specs from some other product that is selling well, and now consumers are toast. In the software world this looks like a billion clones of the same game, etc. This, however, is not what I would call good-faith competition, and in biological systems species that do this eventually get wiped away completely when they finish degrading the ecosystem they are in. This will look like a mobile gaming crash (for which there seem to be some Google hits from back at the start of August).

The kinds of things I'm thinking about are e.g. the fact that text selection in HTML documents is SO BAD that people have started to add copy buttons. Want to copy a URL out of some calendaring software? OOPS, NO, you actually wanted to select literally all the other text on the page, right? There are what, 2.5-ish major browser vendors (supposedly) now? That is by design and a sign of anticompetitive behavior.

Another example would be the GTK+ file picker. Though in this case it is far more likely down to the desktop Linux environment being extremely marginal in terms of capital than to anticompetitive behavior on the part of Red Hat.


> The kinds of things I'm thinking about are e.g. the fact that text selection in HTML documents is SO BAD that people have started to add copy buttons.

What are you referring to?


The top-level structure of this argument is: the argument I gave above can only be correct if all the companies producing software at the present typical quality level are operating as monopolies.

You support that stance by saying the only way to account for the present level of software quality is if it were the case that no competition existed for the companies producing the software.

So the argument would be invalidated if there were another way of accounting for it, which there is: it's possible that the present typical level of software quality is not bad enough for it to require that the companies producing it have no competitors.

So to make this argument compelling you'd need something to establish that the typical level of quality actually is bad enough to create that circumstance. Otherwise the argument is more simply resolved by the possibility that your perception of where typical software quality is, may itself be incorrect.


I think there is an additional possibility, which is that the software quality is bad enough, but there is nowhere else for users to go.

Since proving that directly would require a viable competitor to already exist we are in a bind.

However, we might be able to find some examples where there are viable competitors, or we might be able to find a population of users who have enough experience with similar pieces of software to be able to state which one has fewer bugs (at least for their usual workflows).

One category of software that comes to mind for me would be 3d modelling and rendering software. Perhaps examining the impact that Blender has had on quality would be a place to start?

edit:

I do think that the activation energy (capital) required to get competitors off the ground at this stage is enormous, but that is in part because many of the alternatives have been allowed to die out or be acquired by their direct competitors. Had they been kept alive and independent, there would not be a need for so much capital to get back in the game. Sadly, now, many years on, the remaining systems have acquired enough complexity that they can fend off most competition without much effort.


I would argue that the decline in software quality is related to the rise of SaaS coupled with Product Manager-driven development. If a product is generating consistent revenue, resources are devoted to expanding that revenue. It's extraordinarily difficult to justify bugfixes and incremental improvements using exclusively analytics data, so organizations are generally not incentivized to reward that kind of activity.


Do you think there's no such thing as a number of bugs that would cause customers to leave more rapidly?

I don't see a big difference between what you're saying and the person you're responding to: until the level of bugs reaches the point where it hits revenue, it doesn't make sense to invest in fixing them. There's a tolerance for some level of imperfection - in fact, expecting perfection is itself somewhat irrational here, since it's not like other industries routinely deliver perfection.

I'm also not convinced we're seeing an overall decline in software quality. There was a LOT of shit software from the 80s and 90s I don't want to go back to.

EDIT: It's a bit of a Yogi Berra - "modern software is terrible, people just shovel out crap since the customers don't want to wait!"


> It's extraordinarily difficult to justify bugfixes and incremental improvements using exclusively analytics data

Why would analytics be any less capable of conveying how these attributes of a software product affect customer purchasing decisions? If the metrics relate to e.g. retention, new users etc., these are independent of which aspects of the software cause them. A correlation is a correlation. Unless there is some reason to think product managers would be especially unaware of quality as a variable to observe.


> The present state of software is precisely in the neighborhood of where that added utility begins rapidly diminishing.

I can think of several commercial software offerings where quality is and always has been lacking. Quickbooks is one example, it has been extremely buggy the entire time it has been a product. Yet, lots of people still buy it and use it.


As far as I can tell, 'software quality' these days is pretty good, all things considered, as compared to twenty or thirty years ago.


I quite like Jonathan Blow. I think the talk mentioned in this post is kind of poor, though; I remember watching it a couple of years ago in its full context (https://www.youtube.com/watch?v=ZSRHeXYDLko). IIRC it had a bit of a "kids these days" / "they don't make 'em like they used to" feel to it. It's not untrue, but it doesn't really explain anything.

IMO, a MUCH BETTER version of a similar point was outlined by Bert Hubert in this article:

https://berthub.eu/articles/posts/how-tech-loses-out/ - How Tech Loses Out over at Companies, Countries and Continents (Video version: https://www.youtube.com/watch?v=PQccNdwm8Tw)

In short, this goes beyond software; it's in general about why the incentives around specialization and outsourcing/offshoring cause innovation and expertise to die out. Bert Hubert takes the example of toasters, but it's of course very generalizable:

> The problem is that this is not just a toaster problem. This is a continental problem. All over Europe, this is happening simultaneously, where we’re saying, look, we’re not that much into actually building things anymore.

> So we’re just getting everyone else to build stuff for us. [...] We’re just thinking about things and then telling some other people how to do their stuff.

> In the end, you cannot survive if all you create is intellectual property.


> In the end, you cannot survive if all you create is intellectual property.

Why not?

Anyone can do. Doing the right stuff is what matters.

If you're unwilling to do (Europe, generally), and you can't even come up with good things that need doing anymore, that seems like a problem. I don't think Europe is there.

I would argue it's a bigger problem to be focused too much on doing, and not focused enough on direction.

You can probably make a convincing argument that direction is random, and no one is good at setting it, but people at least believe other people can do this - and IP / ideas are currently where the money is.

Like, is Apple's IP actually valuable? Who cares. It makes a ton of money.

Anyone can put together their phones.


The complication isn't in the "anyone can do it"; it's the meatier subject of intellectual property and, in general, globalism.

If you get people in your own country/continent to build your stuff, you effectively have a revolving door of people whose skills are available to you, the laws are familiar and amenable, and their taxes go to improving the land you built on.

If you outsource things you may get it cheaper (labor arbitrage) while maintaining high profit margins on your IP. The problem comes when that country goes through an industrial revolution, or discusses worker rights, or gets into a shouting match with your country. Now, you may be forced to get labor in your country at which point you'll start from the bottom. Worse yet, you've given those other countries your IP. IP laws are not international. Once your country becomes enemies of theirs they will certainly begin building your stuff to the exact same quality specifications you had them build it to.

We see this quite frequently with China. It is no coincidence a new product releases in the west, and after a short time, the exact same product with the same pieces under a different name is found on Alibaba. Long term, this is not a survivable situation. IP is effectively worthless if you don't control the labor that can build it. We do this typically through patents and trademarks. Once you lose control over that you lose control over your IP. No amount of trying to take another country to court will save you.


> IP laws are not international. Once your country becomes enemies of theirs they will certainly begin building your stuff to the exact same quality specifications you had them build it to.

A lot of IP value is actually the brand.

Anyone can steal Pixar's technology and make a 3d film. People aren't going to watch it.

Anyone can steal Apple's designs (although I doubt they can realistically get the entire supply chain). No one's paying top dollar for a knock-off iPhone.


> Why not?

You're replying to a snippet of a fairly significant and long talk. If it sparked your interest, I invite you to read or watch the full thing.

He answers the "why" later down, but "not surviving" is not really the important part at all. It's about how we globally lose out on innovation, ingenuity and new ideas, by not fostering the right environment for creativity when we outsource everything.

Really, Bert Hubert is fantastic, and this article/talk should be required reading for all startup founders and entrepreneurs in general.


>Anyone can put together their phones.

Really? The supply chain for a phone is huge. All of the subassemblies have to meet in time and space with production facilities up to the task.

Making software is easy. Each additional copy has almost zero net cost to produce. Actual production has complexity you can't imagine.


> Making software is easy. Each additional copy has almost zero net cost to produce.

When people talk about "making software" - they are not talking about making copies of code. They are talking about writing code, which obviously costs money.


The reason I brought it up was to point out the assumption that programmers have ingrained in us: that production is trivial.

Setting up a production line is just as tricky as anything you've seen while debugging.

We programmers are incredibly privileged.


As someone who has walked those production lines, and helped coordinate things around them, I agree. But I don't think it's quite as difficult as you'd like to portray. It's difficult in a place like the United States, because it costs money that can be more efficiently spent elsewhere (for now!). But it's not that difficult.

Yes, it's more difficult (from a technological and human coordination standpoint) than a relative few people sitting down and spending months writing a new software package. But I don't think there's this orders-of-magnitude gulf here, and we should also account for the mental/intellectual costs. It is much more mentally taxing to build a non-trivial software package than it is to set up a hardware supply chain and manufacturing facility. That's not to say that the latter is easy, but it's not particularly intellectually challenging, at least not in a comparable way.


Programming and setting up production are simultaneously complex and complicated. Once you get past a certain level of precision or scale, management in either case becomes more of an art form than a science.

It's possible to build a manufacturing line that results in humans on the moon, but that line also depends on software. In the end there is no pure production chain any more. Margaret Hamilton saw to that. 8)


Anders Hejlsberg explains in the link below that for decades much of computer science was concerned with strict typing as a tool to ensure program correctness. As the consequences of incorrect code have declined - you're unlikely to crash the browser much less the machine with bad JavaScript - the opportunities to explore flexible and more expressive typing have increased. I know younger programmers who don't even bother to check for errors. This is shocking to me as an older programmer. I wonder if this is a factor for the decline in software quality Jonathan Blow talks about in his video. If so, we need to find a way to encourage programmers not to be lazy about errors. The Blue Screen of Death used to be all the encouragement I needed.

Typescript - Type System explained from Anders Hejlsberg - creator of Typescript

https://www.youtube.com/watch?v=AvV3GIDeLfo


> I know younger programmers who don't even bother to check for errors

The older programmers of today were once young, and many of them didn't bother to check for errors back then.


They did if they were C/C++ programmers because everyone would know if they didn't when the screen went sad. Maybe Java programmers. I don't know, I never touched the stuff.


No, they didn't, they just ignored the errors and most of the time they got away with it.

That's what the "Hello, World!" example illustrates. The canonical "Hello, World!" in these languages actually ignores errors, and nobody cares. But it speaks to the general mindset behind C and C++ programmers.

"Eh, it's fine". Until it isn't.


You're joking right? There's zero, zilch, nada need to check the error return from a Hello, World program. Hello World is a standard way to show you can compile and run a program. If you can't do Hello World, you can't do anything else. It's a good joke though. I'll probably steal it.


When was the last time you've seen someone check the return value of printf?


And what is your program supposed to do when printf fails? Print an error?


Perhaps "return 1", at least. If writing to the console fails for some reason, when the entire purpose of the program is to write something to the console, the program shouldn't exit with a status of zero. Arguably a "correct" hello world in C is:

    #include <stdio.h>
    #include <string.h>

    #define HI "Hello, world\n"

    int main(void) {
        /* printf returns the number of characters written, or a negative
           value on error; anything other than strlen(HI) means the full
           greeting was not written.  (The signed/unsigned comparison here
           is one of the faults an astute reader might pick at.) */
        if (printf(HI) != strlen(HI)) {
            return 1;
        }
        return 0;
    }
(And I'm sure astute readers could even find fault with this implementation.)

Obviously this is tongue-in-cheek; I would never write code that checks the return value of printf(), even if perhaps, in some abstract way, I "should". If we were to believe that the purpose of a "hello world" program was to teach about error handling, then maybe we should write "hello world" this way. But I'm also not sure that's its purpose.


Where is the Rust Hello World example in the manual teaching new generations to check for print errors?


I don't really understand this question. We're talking here about the canonical "Hello, World!" program in each language.

In C that program is written in K&R. In Rust (like several modern languages) that program is provided automatically as a placeholder when you start a new project.

The C program, as I explained, just presses on anyway if there's an error. For example, if your standard output is redirected to a full device on Unix, the failure is ignored and the program exits with a success indication.

The Rust program, of course, will panic, and the program exits with a failure indication (if your standard error leads somewhere, the panic report will be sent there, unless you've asked for panics to abort).

It's a difference of convention. The convention in Rust is that we should care whether things fail, and if we don't know what to do about them failing we should panic.
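
As a rough illustration of that difference in convention: checking printf's return value alone often won't catch this case, because stdio buffers the output and the actual write can fail later, at flush or at exit. Here is a minimal, hedged sketch of a C hello world that does report the failure; the /dev/full demonstration assumes Linux, and the program name "hello" is just for illustration:

    #include <stdio.h>

    int main(void) {
        /* printf may only fill stdio's buffer; the real write can fail
           later, e.g. when stdout is redirected to /dev/full. */
        if (printf("Hello, world\n") < 0) {
            return 1;
        }
        /* Force the buffered output out and check that the write worked. */
        if (fflush(stdout) != 0) {
            perror("hello");
            return 1;
        }
        return 0;
    }

With this version, `./hello > /dev/full` should exit non-zero on a typical Linux system, whereas the traditional version exits 0 even though nothing was written.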


> I wonder if this is a factor for the decline in software quality Jonathan Blow talks about in his video.

Wouldn't hold my breath; I recall a tangent on programmers being "afraid of pointers". Depending on your favourite not-C-pointer way of managing memory, that's either saying that use-after-free bugs are a sign of bad programmers, or shunning the substructural/region/etc. types that verify explicit memory management is correct.


> I recall a tangent on programmers being "afraid of pointers"

I've seen that too... It's pretty funny because in so many modern languages every reference is just a pointer we can't do arithmetic with.


I think when people say they are "afraid of pointers", they really mean they're afraid of manipulating pointers... that is, as you say, doing pointer math.
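
For anyone who has only seen the reference flavour, here's a throwaway C sketch of the pointer math in question; the array and values are made up purely for illustration:

    #include <stdio.h>

    int main(void) {
        int values[] = { 10, 20, 30, 40 };
        int *p = values;            /* a pointer: the address of values[0] */
        int *end = values + 4;      /* arithmetic moves it in whole-int steps */

        while (p != end) {          /* walk the array by bumping the address */
            printf("%d\n", *p);
            p++;
        }
        return 0;
    }

The scary part is mostly that nothing stops you from walking past `end`; the arithmetic itself is the same indexing that reference-based languages hide from you.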


No one has stopped studying strict types; they are alive and well.


No one said that. Obviously software that can bring down the entire machine is still being written, but by highly skilled specialists. Meanwhile we've grown an army of coding bootcamp programmers churning out all the software that is eating the world (™ Marc Andreessen)


That's because the value of them churning out that software, however bad, has so far been greater than the alternative (none of it existing). I'd imagine this will hit a breaking point, however (maybe it already has), due to hardware not advancing as rapidly. But who knows, maybe we are on the verge of another breakthrough and don't even know it.

But once we hit the wall, things will shift.


His style is unconventional, to say the least, and you can straw-man his talks in hundreds of different ways, nitpicking the numerous weaknesses.

But, in my opinion, that would be missing the point.

The point is not to be rock-solid. I find this type of risky rant refreshing and thought-provoking, and there are many good/interesting things there.


Related:

Johnathan Blow on Software and Preventing the Collapse of Civilization - https://news.ycombinator.com/item?id=32080927 - July 2022 (1 comment)

Jonathan Blow – Preventing the Collapse of Civilization - https://news.ycombinator.com/item?id=25788317 - Jan 2021 (94 comments)

Preventing the Collapse of Civilization / Jonathan Blow [video] - https://news.ycombinator.com/item?id=24662990 - Oct 2020 (4 comments)

Jonathan Blow – Preventing the Collapse of Civilization - https://news.ycombinator.com/item?id=19965908 - May 2019 (2 comments)

Jonathan Blow – Preventing the Collapse of Civilization [video] - https://news.ycombinator.com/item?id=19950860 - May 2019 (2 comments)

Preventing the Collapse of Civilization [video] - https://news.ycombinator.com/item?id=19945452 - May 2019 (116 comments)


I really can't believe how much of an arrogant blowhard (pun intended) he is sometimes. 95% of the things coming out of his mouth are negative, cynical, and self-aggrandizing. He has some good points, sure, but I really just can't get past his personality.


That was kind of my reaction as well, though perhaps not quite so negative.

It's fine that he has these sorts of opinions. I might have some opinions about this topic, and I might write a blog post or something about it. But I would expect others to tear my arguments apart, because I -- just like Jonathan Blow -- don't really have the expertise or background to properly evaluate something like this.

In reality, though, my opinion would barely be noticed, because I'm just some random person on the internet. Blow is a fairly well-recognized figure, but that doesn't mean we should give greater weight to his opinions on things that are well outside his expertise to evaluate.


So...the typical Hacker News user?


honestly yeah lmao


Part of me wants to write a point by point rebuttal of this, but I’m not sure it would be useful. There are many, many assumptions in this work that would have to be teased apart before the good ideas here could be discussed on their own.

I’ll limit myself to a remark about his comment regarding the end of the Soviet Union, quoting from a long essay I wrote last year comparing the crisis in the Soviet Union to the global crisis:

————

If you think two things are completely separate, and yet they change at the same time and in the same way, then you need stop and ask yourself why were you so confident that they were completely separate?

https://demodexio.substack.com/p/the-struggle-to-save-the-so...


As someone with more than a passing interest in Societal Collapse, but without time to watch the video right now, how does Blow's talk stack up against say The Great Simplification series by Nate Hagens [1], Joseph Tainter [2] or any of the many authors contributing to this field as referenced in this Nodes of Persisting Complexity paper [3]?

[1] https://www.thegreatsimplification.com/episodes

[2] https://scholar.google.com/citations?user=sbEDS84AAAAJ

[3] https://www.mdpi.com/2071-1050/13/15/8161/htm


What's the answer? The post tapered off with a long quote, unless I'm missing something by having JS turned off...


I didn’t care too much for the article, but it made me think. I think about civilizational collapse quite a bit, mostly because I love history and strategy games.

A large collapse could happen, but as far as I can reason it out, it won’t be anything close to the scale of collapse that the Bronze Age collapse was. There are multiple reasons for this.

First, at the end of the Bronze Age, writing existed but it wasn’t everywhere all the time. Writing is now everywhere all the time, and most humans can read and write. This means that, in a collapse, most people can carry on knowledge. The level of death required to create a true dark age (like that following BAC or the fall of Rome) would be on such a scale that I cannot think of something other than global thermo-nuclear war causing it.

Second, even in the Bronze Age, natural disasters weren’t enough. You had to have disasters plus invasion by the Sea Peoples. That wouldn’t work as well now, since surveillance is good. Any nation would know if masses of humans were on their way to attack.

Last thing, if software had gotten worse, I figure my computer wouldn’t start at all considering I had to reboot computers just as part of normal usage through the 80s and 90s. I also had to obsessively save everything all the time because a random crash was normal. Take off the nostalgia goggles.


> People will complain all day long about the total incompetence or even criminal nature of every human government in recorded history, but when it comes to the prospect of an entity with hypothetically superior intelligence taking over, they want to prevent it!

This reductive drivel at the very end of the post nullified any credibility the author may have had.


The apparent claim that he is the only one who's put any effort into measuring world GDP also stretched believability.


I can't actually find that statement in the video but if he said that, I'd say it's brilliant.

The thing about the "AI safety" nerds is... if some vast corporate entity were to create a truly god-like computer program, whether the program escapes the control of its masters is a problem for the corporate entity, but the mere fact of the thing being created and being in the hands of this entity would be the problem everyone else would face.

And more broadly, if we haven't done much about the general sickness of society, what chance should we expect to have against the final problem of a beyond-human-control AI?

Bad "reductionism" points toward too simplistic solution. Good "reductionism" debunks bad but supposedly-sophisticated solutions. This is Good reductionism imo.


This hits close to home.

On one hand, I completely agree with Jonathan.

I started my career 20 years ago in games dev and digital advertising, which were strangely close to each other at that time. When I work with developers today who need an external library to check if a number is odd, it's terribly frustrating. There are certain techniques, like working with binary formats, that were taken for granted even at junior level when I started but that seem like rocket science these days.

When I share my gripes with fellow managers, they often joke that if I were a developer on their team and came up with one of my "optimal" ideas, they would fire me on the spot, because such a solution would be too convoluted and obscure by today's standards, and not maintainable.

On the other hand, I completely understand that the software industry has shifted a lot since I started, and now every business is an IT company. This creates a lot of demand on the market, and someone has to push these deliverables somehow. And despite what business stakeholders will tell you, the world won't collapse if these applications crack in a few spots, as long as you can ship them before the competition. People will still continue to use LinkedIn even if it takes 7 seconds to become interactive on my phone.

Jonathan comes from a different background and his views may seem radical to someone working in an ordinary IT company, but even the game dev industry has shifted in that direction. Back when games were released on CDs and you couldn't assume that every user would have constant access to the internet, it was absolutely crucial to make sure that the game would work on day one. But that's no longer the case. Companies like CD Projekt proved that you can ship a broken build on day one and keep releasing patches for months and months after the premiere.

I mean, it's embarrassing that GTA V required 5 minutes to parse a simple JSON file for 7 years, but the game sold 170 million copies regardless.
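
On the binary-formats point, the core technique is mostly just reading fixed-size, explicitly-typed records and checking the result. A minimal, hedged C sketch; the data.bin filename and the record layout are hypothetical, and a real format would also need an explicit byte-order decision:

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical fixed-size record; the exact-width types at least pin
       down the field sizes, though not the endianness. */
    struct record {
        uint32_t id;
        uint16_t flags;
        uint16_t length;
    };

    int main(void)
    {
        struct record r;
        FILE *f = fopen("data.bin", "rb");
        if (f == NULL) {
            perror("data.bin");
            return 1;
        }
        if (fread(&r, sizeof r, 1, f) != 1) {  /* read one whole record */
            fprintf(stderr, "short read\n");
            fclose(f);
            return 1;
        }
        printf("id=%lu flags=%lu length=%lu\n",
               (unsigned long)r.id, (unsigned long)r.flags,
               (unsigned long)r.length);
        fclose(f);
        return 0;
    }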


> I mean, it's embarrassing that GTA V required 5 minutes to parse a simple JSON file for 7 years, but the game sold 170 million copies regardless.

It's because the game was good enough in spite of the tedious load times. I feel like devs often forget the fact that the computer is a black box to 99.9% of users. They don't care about how fast the game should parse JSON, they don't even know what JSON is. They only care about total load time relative to the enjoyment they get from the game.


Right, and I think that was the grandparent's point: a particular aspect of a piece of software being "embarrassing" usually has zero to do with how successful that software will be.

If figuring out that silly JSON parsing issue would have delayed the game's release by even one day, that would have caused problems for people. And clearly, not bothering to find the issue for years (and I believe it was eventually solved by a non-employee, even?) did not reduce the popularity of the game in any noticeable way that anyone cared about.
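
For context, the widely reported cause of that load-time issue was a parser that effectively re-measured the remaining input for every value (sscanf on that C runtime reportedly called strlen on the whole buffer each call), plus a quadratic duplicate check. A hypothetical sketch of the first pattern, not the game's actual code:

    #include <stdio.h>
    #include <string.h>

    /* Re-measuring the remaining input for every token turns one pass
       over a ~10 MB buffer into roughly O(n^2) work. */
    static size_t count_entries(const char *json)
    {
        size_t entries = 0;
        for (const char *p = json; strlen(p) > 0; ) {  /* full rescan each time */
            const char *comma = strchr(p, ',');
            if (comma == NULL)
                break;
            entries++;
            p = comma + 1;
        }
        return entries;
    }

    int main(void)
    {
        printf("%zu\n", count_entries("{\"a\":1},{\"b\":2},{\"c\":3}"));
        return 0;
    }

The reported fix was the obvious one: scan or tokenize the input once and use a hash-based structure for the duplicate check, which reportedly cut the load time by around 70%.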


> “The children now love luxury; they have bad manners, contempt for authority; they show disrespect for elders and love chatter in place of exercise. Children are now tyrants, not the servants of their households. They no longer rise when elders enter the room.“

> ~ Socrates

I wonder if we will have a generation without these doomsayers. Every generation there are claims that everything is falling apart and not as good as it used to be. Literally every single one. Yet society continues to advance and move forward regardless. Maybe it’s time we start ignoring the curmudgeons despite their successes when they were still relevant.

The kids are alright. Yes, even the ones who use TikTok.


I don't particularly agree with Mr. Blow, but to dismiss wisdom is folly.


If it were truly wisdom, and not just out-of-touch crotchety people whining about the next generation, society would have actually collapsed by now.


I find often there is a hint of both. People are people.


I remember Jonathan Blow all the way back when the Indie Game movie came out. Shame he hasn't matured since then - or learnt how to actually formulate an argument.


Yep. He obviously touches on interesting concepts and I feel like he has decent things to say. But this is just way too disjointed for me to get much out of it


And if you decide to take it upon yourself to do something about it.. John's good friend Casey can give you the blackpill: https://www.twitch.tv/handmade_hero/clip/GrossResilientYakin...


There are a few religious/historical assertions in there (in this piece, not the talk itself, which I recommend without reservation) that are pretty contentious for various thorny reasons but are kind of just presented off-handedly as fact.


Jonathan Blow made a couple of neat video games.

He also tweeted this while wrapping up development on The Witness. https://web.archive.org/web/20160318053101/https://twitter.c...

Whatever the actual contents of the bottle, its intended interpretation is obvious, and it features in the secret ending video to that game.

He's a clever designer, but I'm not going to accuse him of being a well-grounded individual.


this "article" was not written by Jonathan Blow. the "pee jug" is a prop in the FMV secret ending of The Witness like you said. the tweet with the "pee jug" photo seems to have been made kind of off-handedly as a joke, given that nobody would know about the FMV secret ending of the game at release. (the joke being that he really did pee in a jug during development of the game.)

none of this has to do with the "article"'s author making assertions about the history of the Jewish people that are offhandedly controversial to the point of being offensive.


> none of this has to do with the "article"'s author making assertions about the history of the Jewish people that are offhandedly controversial to the point of being offensive.

You mean the part where he briefly paraphrases Wikipedia on the consensus view of modern scholarship on the historicity of the Exodus narrative, namely that there isn't much?

It's not even like he brings it up to make some point about Judaism, just outgrowing his own religious upbringing which also involved the Exodus narrative.

I get that dismissing the Exodus narrative as historically false can fit into a wider project to deny the persecution of Jews more generally, and that the denial of religious foundation myths can sometimes be tied into genocidal or colonialist projects of other kinds.

But that is so obviously not what is going on in the OP, and the general expectation should never be that anyone else takes one's religion's narratives as fact, or even anything close to it.


With no references, not even hyperlinks to most of the Wikipedia articles that are mentioned in the text...


Agreed. The gist seems...rambling and unfocused. Other than "collapse" being very zeitgeist-y, I'm not sure what people are finding in this gist.


I think people are reading the title of the submission and assuming it's a link to the actual talk (which has been around for a while, and people like it), instead of a gist written by someone else entirely, and upvoting accordingly.


I quite like the direction that modern software is taking, but I also think there are endemic quality issues. Some of these issues are pretty bad, but I also think that things are improving.

Big goals need big tools. Can’t be helped. Lots of layers, moving parts, infrastructure, and complexity.

I feel that we are still in adolescence, in handling this stuff. I think that there will be some reckonings, and am confident that we’ll stabilize.

I don’t have all the answers. I do have a few, for my small corner, but I’m not sure they would scale particularly well.


[flagged]


We already live in a surveillance hellhole so I welcome open season on billionaires.

Also, if you make a habit of hostile edits like that you're going to be banned.


Legitimate question— is the guy’s name really Jo Blow, or is that a joke?



