To be fair, if you learn computer science well enough to thoroughly understand Scheme, I don't think it'll take more than a few weeks during the summer to learn Python.
I disagree. You can learn the language itself pretty quickly. Finding your way through the expansive standard library will take longer. Getting a good handle on the package ecosystem is a lifetime learning project.
Knowing your way around the ecosystem of one programming language does not build up the intuition necessary for identifying O(n²) (or worse!) algorithms and choosing/writing O(log(n)) (or better!) ones instead.
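To make that concrete, here's a made-up Python example (hypothetical function names): both functions compute the values common to two lists, but only one of them will blow up on large inputs.

    # intersection of two lists, the accidentally-quadratic way:
    # 'x in ys' on a list is a linear scan, so the whole loop
    # is O(len(xs) * len(ys))
    def common_slow(xs, ys):
        return [x for x in xs if x in ys]

    # same result, but building a set first makes each membership
    # test O(1) on average, so the whole thing is roughly linear
    def common_fast(xs, ys):
        ys_set = set(ys)
        return [x for x in xs if x in ys_set]

Spotting that difference at a glance is what the algorithms coursework trains, and no amount of package-ecosystem knowledge substitutes for it.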
Computer Science has little to do with science, but what it teaches you is certainly closer to science than just building a huge mental index of a bunch of work done by other people.
There's certainly value in that skill, but it has no place in a Computer Science curriculum.
This would be like taking Astrophysics students and telling them to study the details of all of the different kinds of telescopes they can buy.
That's not really what we were talking about here. The context was that compsci students want to learn marketable skills, and the claim was that if the student had learned Scheme in class, they could quickly pick up Python. And that's true, for the language itself. But knowing only the language, like never importing code you hadn't written yourself, doesn't go far along the road to marketability.
I'm not arguing that compsci should be job training, not at all. My disagreement is solely with this specific claim.
But FWIW, while I understand your analogy, an astrophysics department that never told its students that there are these things called telescopes, why you might choose one kind over another in various situations, and that telescopes are how you'll get the observation data to test your theories against, would be doing them a disservice.
> Knowing your way around the ecosystem of one programming language does not build up the intuition necessary for identifying O(n²) (or worse!) algorithms and choosing/writing O(log(n)) (or better!) ones instead.
I'll disagree with this, at least in terms of Scheme versus Python.
Python is visually close enough to other languages that the skills you develop to quickly see O(n²) algorithms easily transfer to many other languages. Scheme is very different visually, and so the intuition doesn't transfer as well. Sure, it's possible your intuition is wrong, but when scanning a program, intuition can help in the first pass.
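For instance (a made-up snippet), the classic silhouette of an accidentally quadratic scan reads almost identically in Python, C, Java, or JavaScript:

    # nested loops over the same sequence: the O(n^2) shape is
    # visible at a glance, and the same shape appears in most
    # C-family languages
    def has_duplicate(items):
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

The parenthesized, prefix equivalent in Scheme has the same cost, but the visual pattern you'd scan for looks quite different.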
Scheme has (trace); at least, most interpreters (and its cousin Common Lisp) have some tracing and pretty-printing features. Far more powerful than anything Python could offer.
Oh, and of course it has functions like sdraw or draw-cons-tree, with which you can print the contents of a list in seconds as an ASCII-art chart.
Python has the pprint module, which takes care of this for you, and for more datatypes than are handled here. (I don't see how the ASCII-art approach would handle a hashmap in a sensible way.)
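For example, with the standard library alone:

    from pprint import pprint

    data = {"keys": [1, 2, 3], "nested": {"a": [4, 5], "b": [6, 7]}}
    # wraps and indents the nested structure for you, dicts included
    pprint(data, width=30)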
But I'm not sure how this addresses what I was saying, which is that the intuitions about algorithms you get working in Python are easier to transfer to popular languages like C++, Java, JavaScript, Rust, etc.
You're never going to get this depth in a college education anyway. Especially since programming isn't (and shouldn't be) the only thing you learn in a Software Engineering / Computer Science bachelor's.
One of the big shifts in academia over the past couple decades is that, for any number of reasons, students today are less likely to self-study or tinker outside of classes and internships. The increased prevalence of basic bootcamp-style classes like "Let's Build a Rails App" in CS programs is because departments can no longer assume that students will explore things like that in their spare time.
What good does that do, though? Make it harder to tell the intrinsically motivated students from the “I’m just here to get a job when I graduate”? It seems like it harms the former.
Is that what we need from universities? Is that helping employers? Helping strong or intermediate students?
It's what universities have become. They are expensive, grandiose trade schools operating out of very distinguished-looking Collegiate Gothic buildings.
Modern programming is. But look at assembly, which is what actual machines do. And you can write assembly with tons of gotos and not a single function in sight.
There's really no such thing as a 'function' at the assembler level. There are pure jumps and there are jumps that also perform side effects on a register conventionally called a 'stack pointer' (and on the memory that register points to) but no true 'functions.'
This is roughly the point Steele is making, in addition to his point that the stack pointer side effects are often unnecessary (i.e. tail jumps).
Implementing a stack pointer in hardware is merely a legacy hack: before people figured out continuations, they didn't know any better.
If you look at the computer (registers, cache, RAM, storage, etc.) as one whole, every assembler instruction is just a function that takes in the state of the computer and returns a new state. Running multiple assembler instructions is then just function composition.
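A toy sketch of that view in Python (illustrative names, nothing like real assembly):

    # model the machine state as a dict and every instruction as a
    # pure state -> state function; a program is then just composition
    def mov(reg, val):
        return lambda s: {**s, reg: val}

    def add(dst, src):
        return lambda s: {**s, dst: s[dst] + s[src]}

    def compose(*instrs):
        def run(state):
            for instr in instrs:
                state = instr(state)
            return state
        return run

    program = compose(mov("ax", 2), mov("bx", 3), add("ax", "bx"))
    print(program({}))  # {'ax': 5, 'bx': 3}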
Modern machines do. But the dominance of imperative assembly is more about historical momentum and engineering tradeoffs than fundamental necessity. We don't have to go all-in on von Neumann.
It seems to me that a large percentage of jobs exist just to exist, and that they use their continued existence to justify their continued existence. I wonder how much the world would keep spinning if 90% of people were laid off. Maybe we'll find out if AI is adopted widely enough...
To be fair, the Monaco team did an amazing thing. It's not clear to me just how much of VS Code's complexity is essential to its genius, but if anyone ever creates a slim version that does 90% of the work in 10% of the code, it would forever change editing like VS Code did. It would be great if it was portable too, but it's hard to get that without pulling in HTML + JS + CSS as dependencies. Maybe as a Dear ImGui extension?
VS Code's complexity is due in large part to its extensibility. It has the biggest, most robust extension API of any modern editor. Extensions don't get to run on the main/UI thread but run in a separate process that communicates with the main window over RPCs. This necessitates a lot of plumbing and layered generics but makes the main UI fast/stable and was a key innovation over other editors at the time (cough Atom cough).
The API is so good that a lot of core VS Code behavior (e.g. GitHub integration, support for lots of languages) is implemented in the form of built-in extensions.
It is possible to get 80% of VS Code's functionality with 10-20% of the code if you just bake everything into one monolith, but this has been tried repeatedly and it keeps failing in part because the extension ecosystem and attendant network effects form a wide moat.
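To make the isolation idea concrete, here's a toy sketch of the pattern in Python (VS Code's real extension host is TypeScript and vastly richer; the method names here are invented): the "UI" process spawns a host process and talks newline-delimited JSON to it, so a misbehaving extension can't hang the UI thread.

    import json, subprocess, sys

    if len(sys.argv) > 1 and sys.argv[1] == "host":
        # extension-host side: answer RPC requests read from stdin
        for line in sys.stdin:
            req = json.loads(line)
            print(json.dumps({"id": req["id"],
                              "result": req["params"]["text"].upper()}),
                  flush=True)
    else:
        # UI side: spawn the host and make one call (a real editor
        # would do this asynchronously instead of blocking)
        host = subprocess.Popen([sys.executable, __file__, "host"],
                                stdin=subprocess.PIPE,
                                stdout=subprocess.PIPE, text=True)
        host.stdin.write(json.dumps({"id": 1, "method": "upcase",
                                     "params": {"text": "hello"}}) + "\n")
        host.stdin.flush()
        print(json.loads(host.stdout.readline()))  # {'id': 1, 'result': 'HELLO'}
        host.terminate()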
Just for fun I reformatted it minimally in the conservative way I write code that is intended to be easy to read and understand, to reduce the odds of future contributors (or future me) introducing a bug in it due to misunderstanding it.
It's painfully verbose, but I think it's worth it considering that we're in 2025 and we're not limited to one-character variable names.
Sorry, but you didn't just reformat it; you added new variables and return statements that were not in the original code (and even introduced bugs, like on line 41).
As for short variable names, I'd argue that they are actually more readable than long ones when they're the iterator of a loop:
    while... XNextEvent(... &e)
What else can "e" stand for in the body of this loop?
Longer lifetimes and not-as-obvious scopes do deserve longer names.
Finally, I strongly dislike this kind of reversed condition:
    if (const == var)
To paraphrase your own words, we're in 2025 and we should not be limited by our fear of forgetting one "=".
Oh no, I accidentally duplicated a function call while quickly rewriting the code for fun, what a terrible sin! (As a side note, I didn't even compile the code, but after a quick check I don't see any notes in the man page suggesting that the accidental spurious repeated call could cause a problem.)
I understand that you might not like that style, but I think you're coming on a bit too strong about this, especially considering how carefully I worded my message in terms of not hurting any feelings and being clear about this being MY preference.
I guess I probably shouldn't even answer, but it saddens me and makes me a bit angry to get a reminder of why I don't usually participate in social media.
It was not my intention to make you sad or angry. I assure you that I have nothing against anyone who uses a different coding style than mine. But consider that, just like you, I too have the right to tell the world what my preference is. Should I have softened my post with more "IMHO", "I feel", "my taste"? Probably, and I apologize.
Please don't stop participating in social media just because a halfwit like me disagreed with you once!
You know it's not about the disagreement, it's about that toxic need to be a smart-ass that's so prevalent in social media (including this platform).
I would've loved to engage in a conversation about coding standards such as MISRA or CERT, or even small tweaks like the one that offended you so deeply, but you wasted the opportunity for a constructive conversation and instead chose to nitpick the "reformatting" thing (seriously, I just moved things around slightly, it's the same thing) or the nothing-burger "bug"...
I don't know, maybe I misinterpreted you, and if that's the case then I apologize, but when I saw that code and felt the urge to play/doodle, I thought it could be a fun way to connect with someone over something silly, and I just got disappointed.
It's also 50 bytes longer than the original. More LOC only because my Vim formats on save.
Whenever somebody comes up with some big brain idea with macros, ORMs, DSLs, 180 IQ templates, language extensions that even Haskell nerds would say are too much, there's a good chance that the grugbrained version is just as readable, just as concise without going against the language.
I'm this close to going completely nuts with this industry and committing to a full Butlerian jihad against anybody who goes higher in abstraction than ANSI C.
It works here so well because it's limited to 20 lines and each macro does exactly what it needs to for the problem at hand.
Take that DSL and use it over a year to write a bunch of code to do normal things as your app grows into its problem domain and spills over into a few more, and it melts. New developers will show up to onboard to your codebase and be like "WTF is this 'on()' thing I'm looking at all over the place, and why isn't it used over here?!". Some enterprising developer will introduce "map2()" to indirect based on keysym and not keycode, etc...
Domain Specific Languages are a mistake almost every time they're used. And the only exceptions are the ones that grow into first-class languages for well-defined problem areas (I'm thinking about things like VHDL or Mathematica here), and even there they tend not to be that much better than well-crafted domain-specific APIs in true programming languages (think numpy, pytorch, et al.).
There really isn't a fundamental difference between DSLs and libraries for the points that you brought up. Where it really starts to get sketchy is when you do really funny things with the base syntax (looking at you, Lisp and Rust). If not well thought out, they can be fragile, confusing, and a real burden for new contributors.
I guess here's a question - do you consider regex libraries to be DSLs?
Interestingly I'd say a regular expression is absolutely not a DSL. It's sort of the opposite. A DSL is a tightly crafted interface for the configuration needed to support solutions to one somewhat unique problem. And the "trap" above is that the fact that the problem area is narrow means that only experts will really understand the DSL.
A regex is a tool implementing a solution (albeit a tightly crafted one) to an extremely broad set of problem areas. I mean, sure, in some sense it's a "DSL for string matching", but that doesn't really capture the essence of the DSL trap mentioned above. I mean, almost everyone needs string matching. So there's no trap: basically everyone knows regex syntax, even if they like to complain about it.
Personally, I'd only consider a DSL to be good and useful if it can be implemented in different ways/languages, etc. It's not good if a DSL is a language-specific leaky abstraction.
Yeah exactly, this is why I stopped liking DSLs about 15 years ago, shortly after using Ruby extensively (probably not a coincidence) and converting to Clojure, where there was a large movement away from macros despite it being lisp. They're good in very isolated situations, and only when designed very carefully. This wm is quite possibly one of them; if you need more complexity than the macros here allow, and adding/changing macros only makes it worse, just use another wm.
> Most software today is crappy. Do you really need all the bells and whistles? Probably not.
I agree that most software today is bloated, but I wouldn't say crappy. There are legitimate reasons to choose bloat, for example using SDL or Electron to speed up development and get easier portability. But for some reason I do strongly enjoy writing and using minimalist software. That's why I removed C++, SDL and other libs from my app (hram.dev) and just used C, native Win32 APIs, and D3D, getting it down to 1.4 MB and speeding up compilation a lot. So projects like this always appeal to me, and I love seeing different ways we can be minimalist without sacrificing too much functionality or convenience.
The best apps I've used have separate implementations for every OS and UI. Usually everyone takes the easier route, but that's only good enough, not the best. Then again, now your app works only on Windows.
Even better if the library code is properly written: not only can you have multiple GUI frontends, you can make the CLI folks equally happy, and most of the code remains portable.
Naturally nowadays this is too much to ask for, so many ship the Chrome Application Platform instead.
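For what it's worth, the discipline is simple to sketch (hypothetical single-file Python example): keep the library part pure, and every frontend, CLI or GUI, stays a thin shell over it.

    import sys

    # "library" code: pure, no I/O, no UI dependencies, fully portable
    def word_count(text: str) -> int:
        return len(text.split())

    # CLI frontend: all the I/O lives here; a GUI or web frontend
    # would wrap the very same function
    if __name__ == "__main__":
        print(word_count(sys.stdin.read()))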
Funny enough, I came across WiX the other day when I was looking into Windows installers like MSIX, NSIS, etc. Eventually I settled on a self-contained exe (and it's only 1.4 MB, woo!), but seeing the name wix took me back; I vaguely remembered it from around 2005 or so, when I was first trying to make "real" Windows programs (as opposed to Visual Basic ones). Took 20 years, but I finally did it, and written entirely in C, too! Anyway, yeah, different wix than the popular one. Tom, you may want to rename this post.
I did something similar once as a VS Code extension for love2d, which I called live2d, so that you could develop your game while it was running. It was incredibly fun but incredibly hard to use, and nobody had any interest in it, so I abandoned it and eventually lost the source code. But it worked so well. I had invented this thing called magic tables to go along with it and make it easier to use; I forgot what they did, I just remember the name and how cool it was.
The readme just looks like it was written by AI, and the initial commit is the entire code. It just seems like AI was used at least somewhat. Just curious how much, if at all.
I submitted a link to a project[0] with similar characteristics a few days ago, before I realized it was probably entirely made by AI. That's what got me wondering this in the first place.