Hacker News

all of this just reads like the UML zeitgeist that was supposed to transform Java and eliminate development 20 years ago

if this is all ultimately Java but with even more steps, it's a sign I'm definitely getting old. It's the same pattern: non-technical people deceiving themselves into believing they don't need to be technical to build tech, which then results in another 10-20 years of re-learning the same painful lessons.

let me off this train too, I'm tired already



> all of this just reads like the UML zeitgeist that was supposed to transform Java and eliminate development 20 years ago

See also 'no-code', 4GLs, 5GLs, etc etc etc. Every decade or so, the marketers find a new thing that will destroy programming forever.


20 years before UML/Java it was "4th Generation Languages" that were going to bring "Application Development Without Programmers" to businesses.

https://en.wikipedia.org/wiki/Fourth-generation_programming_...


And before that it was high-level programming languages, or as we call them today, programming languages.


4GLs were mostly reporting languages, as I remember. Useful ones, too. I still feel we haven't come close to fully utilizing specialized programming languages and toolkits.

Put another way, I am certain that Unity has done more to get non-programmers to develop software than ChatGPT ever will.


I'd argue first prize for that goes to Excel (for a sufficiently broad definition of "develop software").


The mistake was going after programmers, instead of going after programming languages, where the actual problem is.

UML may be ugly and in need of streamlining, but the idea of building software by creating and manipulating artifacts at the same conceptual level we are thinking at any given moment is sound. Alas, we long ago hit a wall in how much cross-cutting complexity we can stuff into the same piece of plaintext code, and we've been painfully scraping along the Pareto frontier ever since: vacillating between large and small functions, wasting time debating the merits of sum types in lieu of exception handling, hoping that if we throw more CS PhDs into the category theory blender, they'll eventually come up with some heavy-duty super-mapping super monad that'll save us all.

(I wrote a lot about this here in the past; cf. "pareto frontier" and "plaintext single source of truth codebase".)

Unfortunately, it may be too late to fix it properly. Yes, LLMs are getting good enough to translate between different perspectives/concerns on the fly, and to do the dirty work on the raw codebase for us. But they're also getting good enough that managers and non-technical people may finally get what they always wanted: building tech without being technical. For the first time ever, that goal is genuinely becoming realistic, and it's already possible in the small - that's what the whole "vibe coding" thing heralds.


I've heard this many times before, but I've never heard an argument that rebuts the plain fact that text is extremely expressive, and basically anything else we try to replace it with is less so. And it happens that making a von Neumann machine do precisely what you want requires a high level of precision. Happy to understand otherwise!


The text alone isn't the problem. It's the sum of:

1) Plaintext representation, that is

2) a single source of truth,

3) which we always work on directly.

We're hitting hard against the limits of 1), but only because we insist on 2) and 3).

Limits of plaintext stop being a problem if we relax either 2) or 3). We need to be able to operate on the same underlying code ("single source of truth") indirectly, through task-specific views that hide the irrelevant and emphasize what's important for the task at hand - something that typically changes multiple times a day, sometimes multiple times an hour, for each programmer. The views/perspectives themselves can be plaintext or not, depending on what makes the most sense; the underlying "single source of truth" doesn't have to be, because you're not supposed to be looking at it in the first place (beyond exceptional situations, similar to when you'd look at the object code produced by the compiler).
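To make the idea concrete, here's a minimal sketch of one such derived view, assuming Python and its stdlib `ast` module (the function names and source inside are made up for illustration). The single source of truth stays intact; the view is a mechanical projection that elides error handling so you can read the happy path alone:

```python
import ast
import textwrap

# Hypothetical "view" generator: project a module's code with all
# try/except machinery stripped out. The underlying source is untouched.

SOURCE = textwrap.dedent("""
    def load_config(path):
        try:
            with open(path) as f:
                return parse(f.read())
        except OSError:
            return DEFAULTS
""")

class ElideErrorHandling(ast.NodeTransformer):
    """Replace every try/except statement with just its body."""
    def visit_Try(self, node):
        self.generic_visit(node)
        return node.body  # returning a list splices the statements in place

tree = ElideErrorHandling().visit(ast.parse(SOURCE))
view = ast.unparse(ast.fix_missing_locations(tree))
print(view)  # the happy-path-only projection of load_config
```

Going the other way - applying an edit made in the view back to the full source - is the hard part, and it's exactly what today's tooling doesn't give us.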

Expressiveness is a feature, but the more you try to express in fixed space, the harder it becomes to comprehend it. The solution is to stop trying to express everything all at once!

N.b. this makes me think of a recent exchange I had on HN; people point out that code is like a blueprint in civil engineering/construction - but in those fields there is never a single common blueprint being worked on. You have different documents for overall structure, material composition, hydrological studies, load analysis, plumbing, HVAC, electrical routing, etc., etc. Multiple perspectives on the same artifact. Nobody merges all that into a single "uber blueprint" - which would be the equivalent of how software engineers work with code.


How so? Even just hypertext is more expressive than plain text. So is JSON, or any other data format or programming language which has a string type for that matter.


Those are all still text.


Yes, structured text is a subset of text. That doesn't negate the point made.


Of all the things I read at uni, UML is the thing I've felt the least use for - even when designing new systems. I've had more use for things I never thought I'd need, like Rayleigh scattering and processor design.


I think most software engineers need to draw a class diagram from time to time. Maybe there are a lot of unnecessary details in the UML spec, but it certainly doesn't hurt to agree that a hollow triangle arrowhead means parent/child, a plain arrowhead means an association, and a diamond at the owning end means composition (filled) or aggregation (hollow).

As the sibling comment says, sequence diagrams are often useful too. I've used them a few times for illustrating messages between threads, and for showing the relationship between async tasks in structured concurrency. Again, maybe there are murky corners to UML sequence diagrams that are rarely needed, but the broad idea is very helpful.
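For reference, those few conventions cover most of what people actually draw, and they can be sketched in a few lines of Mermaid (class names here are made up):

```mermaid
classDiagram
    Animal <|-- Dog : inheritance (hollow triangle)
    Car *-- Engine : composition (filled diamond)
    Library o-- Book : aggregation (hollow diamond)
    Client --> Service : association (plain arrow)
```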


True, but I don't bother with a unified system, just a Mermaid diagram. I work in web though, so perhaps I'd feel differently if I went back to embedded (which I did only for a short while), or something else where a project is planned in its entirety rather than growing organically / reacting to customer needs / trends / the whims of management.


I just looked at Mermaid and it seems to be as close to UML as what I meant in my previous comment. Just look at this class diagram [1]: triangle-ended arrows for parent/child, the classic UML class box of name/attributes/methods, stereotypes in <<double angle brackets>>, etc. The text even mentions UML. I'm not a JS dev, so I tend to use PlantUML instead - which is also UML-based, as the name implies.

I'm not sure what you mean by "unified system". If you mean some sort of giant data store of design/architecture where different diagrams are linked to each other, then I'm certainly NOT advocating that. "ArchiMate experience" is basically a red flag against both a person and the organisation they work for, IMO.

(I once briefly contracted for a large company and bumped into a "software architect" in a kitchenette one day. What's your software development background, I asked him. He said: oh no, I can't code. D-: He spent all day fussing with diagrams that surely would be ignored by anyone doing the actual work.)

[1] https://mermaid.js.org/syntax/classDiagram.html


The "unified" UML system is referring to things like Rose (also mentioned indirectly several more comments up) where they'd reflect into code and auto-build diagrams and also auto-build/auto-update code from diagrams.


I've been at this 16 years. I've seen one planned project in that 16 years that stuck anywhere near the initial plan. They always grow with the whims of someone.


> I think most software engineers need to draw a class diagram from time to time.

Sounds a lot like RegEx to me: if you use something often, then obviously learn it; but if you need it maybe a dozen or two dozen times per year, then perhaps there's less need for a deep dive outside of personal interest.


UML was a buzzword, but a sequence diagram can sometimes replace a few hundred words of dry text. People think best in 2d.


Sure, but you're talking "mildly useful", rather than "replaced programmers 30 years ago, programmers don't exist anymore".

(Also, I'm _fairly_ sure that sequence diagrams didn't originate with UML; it just adopted them.)


>People think best in 2d.

no they don't. some people do. Some people think best in sentences, paragraphs, and sections of structured text. Diagrams mean next to nothing to me.

Some graphs, as in representations of actual mathematical graphs, do have meaning though. If a graph is really the best data structure to describe a particular problem space.

on edit: added in "representations of" as I worried people might misunderstand.


FWIW, you're likely right here; not everyone is a visual thinker.

Still, what both you and GP should be able to agree on is that code - not pseudocode, simplified code, draft code, but the actual code of a program - is one of the worst possible representations to be thinking and working in.

It's dumb that we're still stuck with this paradigm; it's a great lead anchor chained to our ankles, preventing us from being able to handle complexity better.


> code - not pseudocode, simplified code, draft code, but actual code of a program - is one of the worst possible representations to be thinking and working in.

It depends on the language. In my experience, well-written Lisp with judicious macros can come close to fitting the way I think of a problem. But some language with tons of boilerplate? No, not at all.


As a die-hard Lisper, I still disagree. Yes, Lisp can go further than anything else to eliminate boilerplate, but you're still locked into a single representation. The moment you switch your task to something else - especially something that actually cares about the boilerplate you've hidden, and not the logic you exposed - you're fighting an even harder battle.

That's what I mean by Pareto frontier: the choices made by the various current-generation languages and coding methodologies (including the choices you make as a macro author) all promote readability for some tasks at the expense of readability for others. We're just shifting the difficulty around to different times of day, not actually eliminating it.

To break through that and actually make progress, we need to embrace working in different, problem-specific views, instead of on the underlying shared single-source-of-truth plaintext code directly.


IMHO there's usually a lot of necessary complexity that is irrelevant to the actual problem: logging, observability, error handling, authn/authz, secret management, adapting data to interfaces for passing to other services, etc.

Diagrams and pseudocode allow us to push those inconveniences into the background and focus on the flows that matter.


Precisely that. As you say, this complexity is both necessary and irrelevant to the actual problem.

Now, I claim that the main thing that's stopping advancement in our field is that we're making a choice up front on what is relevant and what's not.

The "actual problem" changes from programmer to programmer, and from hour to the next. In the morning, I might be tweaking the business logic; at noon, I might be debugging some bug across the abstraction layers; in the afternoon, I might be reworking the error handling across the module, and just as I leave for the day, I might need to spend 30 minutes discussing architecture issue with the team. All those things demand completely different perspectives; for each, different things are relevant and different are just noise. But right now, we're stuck looking at the same artifact (the plaintext code base), and trying to make every possible thing readable simultaneously to at least some degree.

I claim this is a wrong approach that's been keeping us stuck for too long now.


I'd love this to be possible. We're analyzing projections from the solution space to the understandability plane when discussing systems - but going the other way, from all existing projections to the solution space, is what we do when we actually build software. If you're saying you want to synthesize systems from projections, LLMs are the closest thing we've got and... it maybe sometimes works.


Yeah, LLMs seem like they'll allow us to side-step the difficult parts by synthesizing projections instead of maintaining them. I.e. instead of having a well-defined way to go back and forth between a specific view and the underlying code (e.g. "all the methods in all the classes in this module, as a database", or "this code, but with error handling elided", or "this code, but only the types and error handling", or "how the components link together, as a graph", etc.), we can just tell LLMs to synthesize the views, and to apply the changes we make in them back to the underlying code - and expect that to mostly work, even today.
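The first of those views ("all the methods in all the classes in this module, as a database") is mechanically derivable even without an LLM - here's a toy sketch in Python, with made-up class names, using the stdlib `ast` module:

```python
import ast

# Toy sketch: project a module's classes and methods into a flat,
# queryable table - one of the "views" described above.

SOURCE = """
class Cart:
    def add(self, item): ...
    def total(self): ...

class Order:
    def submit(self): ...
"""

rows = [
    (cls.name, meth.name, len(meth.args.args) - 1)  # -1 drops `self`
    for cls in ast.walk(ast.parse(SOURCE))
    if isinstance(cls, ast.ClassDef)
    for meth in cls.body
    if isinstance(meth, ast.FunctionDef)
]

for class_name, method_name, n_params in rows:
    print(f"{class_name}.{method_name} ({n_params} params)")
```

Reading a view off the code is the easy direction, of course; writing edits made in the view back into the source is where the well-defined mapping (or the LLM) has to earn its keep.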

It's just a hell of an expensive way to get around doing it. But then maybe at least a real demonstration will convince people of the utility and the need to do it properly.

But then, by that time, LLMs will take over all software development anyway, making this topic moot.


OK, but my reference to sentences, paragraphs, and sections was about documentation, not code.


Oops, evidently I got downvoted because I don't think best in 2d and that is bad. Classy as always, HN.


Lmao, I remember uni teaching me UML - right before I dropped out after a year, because fuck all of that. It's a shame, because some of the final-year content I probably would've liked.

But I just couldn't handle it when I got into like COMP102 and in the first lecture, the lecturer is all "has anybody not used the internet before?"

I spent my childhood doing the stuff so I just had to bail. I'm sure others would find it rewarding (particularly those that were in my classes because 'a computer job is a good job for money').





