I believe the answer is to have a single model of everything.
You can build the model from whatever sources you have: draw it with a GUI, query your server infrastructure, analyze binaries, collect it from databases, parse it from documentation, process your issues, etc. The important point is to merge all the information into a single model.
You cannot represent this model on a screen in any meaningful way. Instead you can only get certain "views" into it to understand an aspect or answer specific questions.
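To make the "merge everything into one model" idea concrete, here is a minimal sketch in Python. All names (`merge_sources`, the `cmdb` and `docs` fact lists) are hypothetical, assuming each source can be reduced to `(entity, attribute, value)` facts:

```python
# Minimal sketch: merge facts from several sources into one model,
# keyed by entity id so information about the same thing collapses
# into a single record. Names and data are illustrative only.
from collections import defaultdict

def merge_sources(*sources):
    """Each source yields (entity_id, attribute, value) facts."""
    model = defaultdict(dict)
    for source in sources:
        for entity_id, attribute, value in source:
            # Later sources overwrite earlier ones; a real tool would
            # need conflict detection instead of silent overwrites.
            model[entity_id][attribute] = value
    return dict(model)

# Facts might come from a CMDB export, parsed documentation, etc.
cmdb = [("web-1", "os", "debian"), ("web-1", "ip", "10.0.0.5")]
docs = [("web-1", "owner", "platform-team"), ("db-1", "role", "database")]

model = merge_sources(cmdb, docs)
# model["web-1"] now combines attributes from both sources.
```

The interesting (and hard) part is of course entity resolution: deciding that "web-1" in the CMDB and "web-1" in the docs are the same thing.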
Does it exist today? Many tools and processes have been developed to achieve it. For example, "Model-based systems engineering" is a very formal approach. I have yet to see it realized in practice though.
I'm pretty sure it isn't a single tool or process though. By definition it integrates with nearly everything, and that means a lot of customization. You won't get it off the shelf.
This is something that's been living in my head for a few years now, and no, it does not exist yet. I'm also convinced the only way to get it right is to have a single graph modelling the entire system, apply filters on the graph to get only the nodes you're currently interested in, and then have different diagrams produced from that as the output. How else would you describe the flow of packets from the internet through the firewall, what the logical network looks like, and which physical location things are located at? These questions all overlap on a conceptual level, yet are all answerable using different attributes of the same connected graph nodes.
It’s very complex, very interesting, and lots of work.
I've had the same thought but then gave up, because it's not even a single graph: we assume it's a single graph because parts of it decompose into "concept - connection - concept" tuples, but that fails to capture the reality that the aggregate behavior of a system can implement a wholly different behavior than any of its parts.
A practical example of this would be neural networks in AI, which collectively implement functions much greater than the individual links.
I came to the same conclusion: if you capture the model as a graph, you can then derive from the graph as many visual representations as you need. For instance, this tool can generate ArchiMate context diagrams from the graph data (mostly by doing "ontology translation"):
https://github.com/zazuko/blueprint
The problem with "a single model of everything" is that it is an inferior tool.
It's a drawing or diagramming tool, but the stand-alone diagramming tools are better.
It's a text composition tool, but word processors are better.
It's a code writing tool, but IDEs are better.
In every single thing it tries to do, a specialized tool is better. So this single tool needs to be close enough to the best in every category in order to not be a boat anchor holding you back. So far, nothing has come close.
Nothing stops us from using specialized tools in this case. They just need to work not on a "single source of truth" of their respective domain, but on a projection of the single artifact into the relevant dimension. The devil's in the details, of course, but at least with IDEs I can say with certainty, based on my experience, that directly editing the single-source-of-truth plaintext codebase would waste much more time and cognitive effort than IDEs save us.
Yes, that is a good point. It might be the reason why it fails in practice. Whatever view one produces from a single model, it will look crappy and cheap compared to someone else's PowerPoint slide. It might be more truthful and more up to date, but it isn't as persuasive. And persuasion is what a presentation is ultimately about: it should influence the behavior of the audience.
Not sure, but IMO a dataflow diagram from good ol' SSADM (Structured Systems Analysis and Design Method) got a lot of the way there. Unlike many modern techniques that model (groups of) deployed components, DFDs were strictly logical and functional, and included data stores and users as well as functional components. So it was possible to model data in flow and at rest, and the characteristics of each, including where it crossed system boundaries.
IMO this was the best diagram format to get an overall view of the system, or reveal which areas needed further analysis.
It sounds to me like what you're describing is a single *tool*, not a single model. Is that possible?
I agree that we need multiple views, but it isn't just a matter of filtering part of a single view - it's also different fundamental primitives that are needed to communicate different kinds of ideas or aspects of a given system.
To me, this seems parallel to the evolution of LLMs into multi-modality by default. Different queries will need different combinations of text, heat maps, highlights, arrows, code snippets, flow charts, architecture diagrams, etc.
It's certainly not easy, but it's exciting to consider :)
> You cannot represent this model on a screen in any meaningful way. Instead you can only get certain "views" into it to understand an aspect or answer specific questions.
There is Capella, which is a tool (software) and method for designing systems. It uses the Arcadia method (MBSE) and is quite extensible.
I don't understand how you can write an article about system design without mentioning Enterprise Architect or Capella. I've read this as "we need to come up with a solution that already exists".
Almost all of his complaints stem from using generic drag-and-drop diagramming tools. A modern take on system diagramming, like Ilograph[0], solves most of these issues (IMBO).
There are quite a few tools cropping up trying to solve this problem. Multiplayer.app is one example - they use OTel to gather distributed traces from your system and ensure you automatically get notified when there's drift.