The actual modern Ada ended up being reasonable and the conservative design is justified. And it's no more verbose than Java anyway. Is there any writeup on the discrepancy between the hysteria and the real thing?
I think it was blowback to the hype. Rust, sadly, seems bound to end up likewise, as the rustaceans enthusiastically endorse AI-driven rewriting of legacy DoD C code into Rust. What can go wrong there?
That's damning with faint praise. Java is one of the most verbose languages.
> Rust, sadly, seem to be bound to end likewise as the rustaceans enthusiastically endorse AI-driven rewriting of legacy DoD C code into Rust.
Sure, some people want to speed up and/or automate the conversion of C to Rust, but they are a minority. Going for 1-to-1 translation is going to be fraught with issues.
I love verbosity more than programming with the hieroglyphic languages so common in UNIX culture, with their need to preserve keyboard longevity, a side effect of subpar IDE tooling.
Microsoft is actually doing this as well, as part of their in-house migration to Rust; see the RustNation UK 2025 talks from Microsoft.
I wish the author had provided a link to the full reviews - I suspect they were more substantial.
As an aside: let's thank the FSM that Dijkstra never had access to social media. I suspect he had the kind of "abrasive" personality that would have made him prone to wasting his time and intellect arguing with all the randos of the world.
"I don't know how many of you have ever met Dijkstra, but you probably know that arrogance in computer science is measured in nano-Dijkstras." — Alan Kay
Just a quote from a review of the GREEN proposal:
> They first explain the introduction of new types with "type weight = integer; type length = integer" which "define the new types "weight" and "length" as different types, both distinct from the predefined type "integer" although they have the properties of "integer"." That is a very obscure sentence: if the new types have all the properties of the old type, how can they differ?
Why, in the same way that two equal (sorry, congruent) triangles are still two different triangles. Hopefully this is not a novel concept for someone with a background in maths and theoretical physics?
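For a concrete analogy (a minimal Rust sketch, purely illustrative and not the GREEN/Ada syntax the review quotes): two newtypes can have all the properties of the underlying integer while remaining distinct to the type checker.

    // Hypothetical Weight/Length newtypes: both wrap i64 and behave like
    // integers, yet the compiler treats them as different, incompatible types.
    #[derive(Debug, Clone, Copy, PartialEq)]
    struct Weight(i64);

    #[derive(Debug, Clone, Copy, PartialEq)]
    struct Length(i64);

    fn main() {
        let w = Weight(5);
        let l = Length(5);
        // `w == l` would not compile: same properties, different types.
        println!("{w:?} {l:?}");
    }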
And this attitude of his, where you can't quite pinpoint whether he honestly couldn't understand something actually quite plain and simple, or just pretended not to and nitpicked, runs through almost all of his writing. The effect is that he either comes off as dim while pretentious, or just as an asshole.
And his remarks vis-à-vis statement terminators vs. separators, oh god. First of all, we sort of do have explicit statement initiators: the "function/block body begins here" token plus separators/terminators are exactly this, if you look at them from the right angle. Second, separators are just annoying for mechanical code changes: if you swap two statements' places, but one of them was the last one in the block, you will need to also shuffle that pesky semicolon; and code generation is also harder, since you need to take care not to emit the separator after the last statement. And if Dijkstra had no problem with that, well, most other programmers are not him, and they do have a problem with that. Not that he ever had much respect for tastes and opinions that differed from his own.
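To make the code-generation half of that concrete, here is a minimal Rust sketch (a hypothetical emitter, not taken from any real compiler): with terminators every statement is handled uniformly, while with separators the last statement has to be special-cased.

    // Terminator style: append ";" to every statement, no special case needed.
    fn emit_with_terminators(stmts: &[&str]) -> String {
        stmts.iter().map(|s| format!("{s};\n")).collect()
    }

    // Separator style: the last statement must NOT be followed by ";", so the
    // emitter has to special-case it (here by joining instead of appending).
    fn emit_with_separators(stmts: &[&str]) -> String {
        stmts.join(";\n") + "\n"
    }

    fn main() {
        let stmts = ["a := 1", "b := 2"];
        print!("{}", emit_with_terminators(&stmts));
        print!("{}", emit_with_separators(&stmts));
    }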
> if you swap two statements' places, but one of them was the last one in the block, you will need to also shuffle that pesky semicolon...
Blarg, zarg, arg.
->
Arg, blarg. zarg,
All I want to do is switch two words' places. But the pesky capital, commas, and period have to be rearranged? How pretentious!
> ...you will need to also shuffle that pesky semicolon; and code generation is also harder, since you need to take care not to emit the separator after the last statement
A search for "Dijkstra on statement separators" returned a paper on programming languages and execution determinacy for the mapping of concurrent and sequential programs.
Based on this research, it's clear that Dijkstra was not so much troubled by the pesky syntax of statement delimiters during code translation as he was by whether you would get the same program behaviour after translation.
The paper's conclusion spells this out:
//Having worked mainly with hardly self-checking hardware, with which non-reproducing behaviour of user programs is a very strong indication of a machine malfunctioning, I had to overcome a considerable mental resistance, before I found myself willing to consider non-deterministic programs seriously. It is, however, fair to say that I could never have discovered the calculus before having taken that hurdle: the simplicity and elegance of the above would have been destroyed by requiring the derivation of deterministic programs only. Whether non-determinacy is eventually removed mechanically —in order not to mislead the maintenance engineer— or (perhaps only partly) by the programmer himself because, at second thought, he does care —e.g. for reasons of efficiency— which alternative is chosen, is something I leave entirely to the circumstances. In any case we can appreciate the non-deterministic program as a helpful steppingstone.//
—
Regarding Alan Kay's quote:
His quip about nano-Dijkstras can be read as much as a statement of respect as of derision.
For example, Kay's presentations occasionally repeat a rant about the hazard of semaphores for handling concurrency, due to the problem of deadlock. Dijkstra is known for his pioneering research on semaphores as synchronization primitives. Kay's point is well taken in the very limited context of the struggles with early concurrent programming. But in the 21st century, Kay's complaint is like Wirth's historical complaint about goto. It's true, but who thinks like that anymore?
And there's a widely held precept that abstraction is key to understanding and overcoming these complaints: you should not persevere in writing grand plans with very low-level abstractions when you can instead build higher-level, more appropriate abstractions from such primitives. Just as branch instructions can be composed into arbitrarily higher-order program logic, so can semaphores be composed into higher-order synchronization constructs.
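As a rough sketch of that direction in Rust (hand-rolled, since std has no semaphore type; the Permit wrapper is a hypothetical example): the raw counting semaphore is the primitive, and the RAII permit built on top of it is the higher-order construct that makes forgetting to release impossible.

    use std::sync::{Condvar, Mutex};

    // The low-level primitive: a counting semaphore built from Mutex + Condvar.
    struct Semaphore {
        count: Mutex<usize>,
        cv: Condvar,
    }

    impl Semaphore {
        fn new(n: usize) -> Self {
            Self { count: Mutex::new(n), cv: Condvar::new() }
        }
        // The higher-level interface: acquiring yields an RAII permit.
        fn acquire(&self) -> Permit<'_> {
            let mut c = self.count.lock().unwrap();
            while *c == 0 {
                c = self.cv.wait(c).unwrap();
            }
            *c -= 1;
            Permit { sem: self }
        }
        fn release(&self) {
            *self.count.lock().unwrap() += 1;
            self.cv.notify_one();
        }
    }

    struct Permit<'a> {
        sem: &'a Semaphore,
    }

    impl Drop for Permit<'_> {
        fn drop(&mut self) {
            self.sem.release(); // a permit cannot be leaked by mistake
        }
    }

    fn main() {
        let sem = Semaphore::new(2);
        let _permit = sem.acquire(); // released automatically at end of scope
        println!("doing work while holding a permit");
    }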
Everything about Alan Kay shows he understands abstraction very well, and he studied to be a mathematician, so Dijkstra's formalism is familiar to him. Well, a great jazz riff always has some off-notes.
So I prefer to read the Kay quote as pithy, not derisory.
Semaphores are woefully low-level synchronization mechanisms; you have to be Dijkstra to always use them correctly. Most programmers aren't. Dijkstra's solution to that was "poor mathematicians should stay pure mathematicians", and he lamented that "computer science's current goal seems to be 'how to help programmers write programs even if they can't actually program'" (paraphrased). Well, guess what: he was wrong. Empirically.
And I know the quote's context, thank you. You may too if you want to: https://news.ycombinator.com/item?id=11799963 . So I guess I'll follow his lead: Dijkstra may have had very good ideas about some technical particulars, but his opinions on how programming as a discipline should be approached were simply wrong, and we can and should ignore them.
I have it in my mind that Wirth floated Modula/Modula-2 variants into the early round of Ada candidates, but I can find no evidence of it.
My back-reasoning for its truth is threefold:
1) He did a residency at York University relating to Pascal and Modula in teaching CS. That's why my first uni language was Pascal. (He'd just left.)
2) York used Pascal and Modula heavily across the Ada specification window.
3) York got an SERC or other funding contract to implement a multi-pass Ada compiler on BSD Unix.
Which would mean (if true) that Dijkstra's comments basically slated all of the candidate languages, and by implication Wirth's views on languages, given that he'd worked on the IFIP programming language specification process and was so strongly associated with Pascal-style imperative programming languages.
Modula-2 is a much later language, from 1982, 3 years after the publication of Ada, and it is the version of Modula that has become well known. The first version of Modula saw very little use, if any.
The first version of Modula was conceived by Wirth during a sabbatical year he spent at Xerox PARC. There he was impressed by the language Mesa, which already had modules.
In my opinion, Mesa was a programming language vastly better than any Modula version. Wirth, however, thought that Mesa was too big a language, so he attempted to design a simple language that offered some of the benefits of Mesa.
In 1976, when Wirth designed the first (not very good) version of Modula, the DoD requirements specification process that resulted in Ada had already passed through 3 stages: STRAWMAN (1975-04), WOODENMAN (1975-08) and TINMAN (1976-01), though the first requirements document that was close to the final language, IRONMAN, would be published only the next year, in 1977. (The final DoD requirements document was STEELMAN, in 1978-06, and Ada was published one year later, in 1979-06.)
Therefore Modula had no influence on Ada.
After implementing the detailed DoD requirements (which included some influences from the language Jovial, which was used within the DoD at that time), the remainder of Ada was influenced by IBM PL/I, by Algol 68, by Wirth's Pascal, and also by Xerox Mesa, the source of inspiration for Modula. Any resemblance between Ada and Modula comes either from Pascal or from Mesa, which were sources of inspiration for both Ada and Modula.
Xerox Mesa was a very innovative programming language, but it is seldom mentioned today, despite being the origin of many features added to later languages, e.g. the unusual loop syntax used by Python (an ELSE clause on loops; Mesa used a better keyword, FINISHED).
Unfortunately some of the best features of Mesa are not supported at all or only badly supported in most modern languages.
And it was the foundation of Cedar, one of the very first memory-safe systems programming languages, with automatic resource management using a mix of reference counting and a cycle collector.
Mesa also provided the very first IDE experience for strongly typed languages, with code completion, typo correction, dynamic code loading, a REPL, incremental compilation, and so on.
There were many useful details in the syntax of various statements and expressions, which could simplify programs.
An important feature was that Mesa was one of the few programming languages that specified a GOTO instruction in the right way.
Mesa had a restricted GOTO (following suggestions from Knuth), which could jump only forwards and only out of a block.
This eliminates all problems that can be caused by GOTO, while retaining all the benefits.
All later languages either provide only inadequate handling of errors and exceptional conditions, or provide GOTO instructions hidden behind different mnemonics that do not use the words GO TO.
For instance, if a language has labeled loops and some kind of exit-loop instruction that names a loop label, so that you can exit from multiple nested loops, that instruction is just an ordinary GOTO under a different mnemonic (because the mnemonic GOTO is "harmful"), with the jump label located in the wrong place, which makes it more difficult to follow the control flow when reading the program.
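For example, a minimal Rust sketch of exactly that: the labeled break out of nested loops is, operationally, a forward jump out of a block, i.e. a restricted GOTO under another mnemonic.

    fn main() {
        let grid = [[1, 2, 3], [4, 5, 6], [7, 8, 9]];
        let target = 5;
        let mut found = None;
        'search: for (i, row) in grid.iter().enumerate() {
            for (j, v) in row.iter().enumerate() {
                if *v == target {
                    found = Some((i, j));
                    break 'search; // a forward jump out of both loops
                }
            }
        }
        println!("{found:?}"); // Some((1, 1))
    }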
In general, in Mesa the syntax for any kind of block or iteration could specify in a simple way both a normal exit and several exits corresponding to various kinds of errors or exceptional conditions.
The syntax used by Mesa for this was more convenient than the use of exceptions in most modern programming languages, and the implementation was much more efficient, because the kind of exceptions used in modern languages is designed only for returning from several levels of nested functions that have been compiled separately. For exiting one or more levels of nested blocks inside a function, much more efficient implementations are possible.
Mesa also had the kind of exceptions used in modern languages, but their use was needed much less frequently.
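To make the narrower point about exits from nested blocks concrete, here is a hedged Rust sketch (labeled blocks, Rust 1.65+, obviously not Mesa syntax): the early exit is compiled as a plain jump within one function, with none of the unwinding machinery that cross-function exceptions need.

    fn classify(n: i32) -> &'static str {
        'done: {
            if n != 0 {
                if n < 0 {
                    break 'done "negative"; // jumps out of both nested blocks
                }
                break 'done "positive";
            }
            "zero"
        }
    }

    fn main() {
        for n in [-3, 0, 7] {
            println!("{n} is {}", classify(n));
        }
    }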
Thank you for the tip. I'm feeling pretty stupid having never heard of (or not remembering) Xerox Mesa. (I'm just old enough to have briefly used Xerox Star.)
> ...restricted GOTO, which could jump only forwards and only out of a block.
I guess that's roughly how I imagined GOTO / "better" exception handling should work. I've long wanted (waves arms) Visual Basic-style error handling, but "inline" (not off to the side). I probably unwittingly gleaned the notion from Mesa.
I'm now foraging for Mesa stuff. Here are two quick hits.
The history of exceptions is discussed in [1]. I am not a historian, but I think that Milner's ML was the first language with a type-safe exception mechanism. [2] discusses, among many other things, Lisp's relation with exceptions.
> Haskell, though not perfect, is of a quality that is several orders of magnitude higher than Java.
I'm pretty sure he would assess Python a few orders of magnitude lower than Java. Probably warranting the need for a log scale for the quality of programming languages.
I need more context; I do not understand anything apart from the quote at the bottom, which is just a mere expression of dislike of something, without any reasons provided...
> Ada was such a mess that I shuddered at the thought that Western security would depend on it and that I would feel much safer if the Red Army were to adopt it as well.
Replace "Ada" with "Rust" or anything else (like Dijkstra's favorite programming language). It does not explain the why.
There is no technical substance in the quotes though. You could use them against whatever language you don’t like. I’m sure they are taken out of context, but I find it weird how people are so impressed by these generic insults.