FWIW, you're likely right here; not everyone is a visual thinker.
Still, what both you and GP should be able to agree on is that code - not pseudocode, simplified code, draft code, but actual code of a program - is one of the worst possible representations to be thinking and working in.
It's dumb that we're still stuck with this paradigm; it's a great lead anchor chained to our ankles, preventing us from being able to handle complexity better.
> code - not pseudocode, simplified code, draft code, but actual code of a program - is one of the worst possible representations to be thinking and working in.
It depends on the language. In my experience, well-written Lisp with judicious macros can come close to fitting the way I think of a problem. But some language with tons of boilerplate? No, not at all.
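For instance, a macro like this (a minimal sketch; WITH-RETRIES and FETCH-USER-RECORD are made-up names) lets the call site read the way I think about the problem:

```lisp
;; A minimal sketch: a macro that hides retry/backoff boilerplate.
;; WITH-RETRIES and FETCH-USER-RECORD are made-up names for illustration.
(defmacro with-retries ((&key (attempts 3) (delay 1)) &body body)
  "Evaluate BODY, retrying up to ATTEMPTS times on error,
sleeping DELAY seconds between tries."
  (let ((i (gensym)) (e (gensym)))
    `(loop for ,i from 1 to ,attempts
           do (handler-case (return (progn ,@body))
                (error (,e)
                  (when (= ,i ,attempts) (error ,e))
                  (sleep ,delay))))))

;; The call site now reads as the problem, not the plumbing:
(with-retries (:attempts 5 :delay 2)
  (fetch-user-record 42))
```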
As a die-hard Lisper, I still disagree. Yes, Lisp can go further than anything else to eliminate boilerplate, but you're still locked into a single representation. The moment you switch your task to something else - especially something that actually cares about the boilerplate you've hidden, and not the logic you exposed - you're fighting an even harder battle.
That's what I mean by Pareto frontier: the choices made by various current-generation languages and coding methodologies (including the choices you, as a macro author, make) all promote readability for some tasks at the expense of readability for others. We're just moving the difficulty around to different times of day, not actually eliminating it.
To break through that and actually make progress, we need to embrace working in different, problem-specific views, instead of on the underlying shared single-source-of-truth plaintext code directly.
IMHO there's usually a lot of necessary complexity that is irrelevant to the actual problem: logging, observability, error handling, authn/authz, secret management, adapting data to interfaces for passing to other services, etc.
Diagrams and pseudocode allow us to push those inconveniences into the background and focus on the flows that matter.
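For a made-up illustration (every helper name here is hypothetical): the flow that matters is two calls, but the surrounding concerns dominate the definition.

```lisp
;; Hypothetical sketch; LOG-INFO, CHECK-AUTHORIZATION, DEBIT, CREDIT,
;; and LOG-ERROR are all made-up names.
(defun transfer (from to amount)
  (log-info "transfer start" from to amount)      ; observability
  (check-authorization *current-user* 'transfer)  ; authn/authz
  (handler-case
      (progn
        (debit from amount)   ; <- the flow that matters
        (credit to amount))   ; <-
    (error (e)
      (log-error "transfer failed" e)             ; error handling
      (error e))))
```

A diagram or pseudocode view of this function is just "debit, then credit".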
Precisely that. As you say, this complexity is both necessary and irrelevant to the actual problem.
Now, I claim that the main thing that's stopping advancement in our field is that we're making a choice up front on what is relevant and what's not.
The "actual problem" changes from programmer to programmer, and from hour to the next. In the morning, I might be tweaking the business logic; at noon, I might be debugging some bug across the abstraction layers; in the afternoon, I might be reworking the error handling across the module, and just as I leave for the day, I might need to spend 30 minutes discussing architecture issue with the team. All those things demand completely different perspectives; for each, different things are relevant and different are just noise. But right now, we're stuck looking at the same artifact (the plaintext code base), and trying to make every possible thing readable simultaneously to at least some degree.
I claim this is a wrong approach that's been keeping us stuck for too long now.
I'd love this to be possible. We're analyzing projections from the solution space to the understandability plane when discussing systems - but going the other way, from all existing projections to the solution space, is what we do when we actually build software. If you're saying you want to synthesize systems from projections, LLMs are the closest thing we've got and... it maybe sometimes works.
Yeah, LLMs seem like they'll allow us to side-step the difficult parts by synthesizing projections instead of maintaining them. I.e. instead of having a well-defined way to go back and forth between a specific view and underlying code (e.g. "all the methods in all the classes in this module, as a database", or "this code, but with error handling elided", or "this code, but only with types and error handling", or "how components link together, as a graph", etc.), we can just tell LLMs to synthesize the views, and apply changes we make in them to the underlying code, and expect that to mostly work - even today.
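To make "well-defined" concrete: since Lisp code is plain data, the "error handling elided" view can be a deterministic ten-line function instead of an LLM call. A toy sketch (it only handles HANDLER-CASE, and only goes in one direction):

```lisp
;; Toy projection: return FORM with HANDLER-CASE wrappers replaced by
;; the form they protect, recursively. A sketch, not a real tool.
(defun elide-error-handling (form)
  (cond ((not (consp form)) form)
        ((eq (first form) 'handler-case)
         (elide-error-handling (second form)))
        (t (mapcar #'elide-error-handling form))))

;; Applied to the TRANSFER definition from upthread:
;; (elide-error-handling '(defun transfer (from to amount) ...))
;; => (DEFUN TRANSFER (FROM TO AMOUNT)
;;      (LOG-INFO "transfer start" FROM TO AMOUNT)
;;      (CHECK-AUTHORIZATION *CURRENT-USER* 'TRANSFER)
;;      (PROGN (DEBIT FROM AMOUNT) (CREDIT TO AMOUNT)))
```

Mapping edits made in the view back onto the underlying code is, of course, the hard part - and that's exactly the step LLMs let us skip.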
It's just a hell of an expensive way to get around doing it. But then maybe at least a real demonstration will convince people of the utility and necessity of doing it properly.
But then, by that time, LLMs will take over all software development anyway, making this topic moot.