As others have said, the C4 model is a great way to address a number of these issues.
I can’t find the right video at the moment but Simon Brown (creator of C4) gives a great talk about creating his DSL, Structurizr, for C4, which he developed during COVID lockdown (if memory serves). There are many videos on YouTube of Simon talking about “C4 Models as Code” so I’m sure any one of those will suffice.
The focus is on creating a model of your system architecture, and the diagrams you extract are specific projections of that model, rather than a diagram being the main artifact. It’s a simple but very powerful concept that I’m always surprised isn’t more widely used.
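To make the "model first, diagrams as projections" idea concrete, here's a minimal sketch in plain Python (not Structurizr itself, and all the element names are made up): one model as the source of truth, and two different views derived from it.

```python
# A minimal sketch (plain Python, not Structurizr) of "model first, diagrams
# as projections": one model, several views derived from it.
# All element names here are illustrative, not a real system.

model = {
    "elements": {
        "customer": {"type": "person"},
        "webapp":   {"type": "container", "system": "shop"},
        "api":      {"type": "container", "system": "shop"},
        "database": {"type": "container", "system": "shop"},
        "payments": {"type": "external_system"},
    },
    "relationships": [
        ("customer", "webapp", "places orders"),
        ("webapp", "api", "calls"),
        ("api", "database", "reads/writes"),
        ("api", "payments", "charges cards via"),
    ],
}

def system_context_view(model, system):
    """Project the model down to 'the system and the things outside it'."""
    inside = {name for name, e in model["elements"].items()
              if e.get("system") == system}
    edges = set()
    for src, dst, label in model["relationships"]:
        a = system if src in inside else src
        b = system if dst in inside else dst
        if a != b:
            edges.add((a, b, label))
    return sorted(edges)

def container_view(model, system):
    """Project the model down to the containers inside one system."""
    inside = {name for name, e in model["elements"].items()
              if e.get("system") == system}
    return [(s, d, l) for s, d, l in model["relationships"]
            if s in inside or d in inside]

# Two different diagrams, one source of truth:
print(system_context_view(model, "shop"))
print(container_view(model, "shop"))
```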
Structurizr models can also be exported for display as Ilograph diagrams, Mermaid diagrams, and more.
Also very much worth a mention is IcePanel, a lovely architectural modelling tool built heavily around the C4 model.
I saw Simon talk at a conference in Sydney about 10-15 years ago and heard about C4 for the first time in that talk. It’s been one of the most influential talks of my career, as it made a lot of fuzzy things in my head start to come together in a way that just made sense.
I’ve never seen modeling tools widely used. When it’s attempted, it’s a top-down initiative that addresses management concerns but creates more risk for developers without generating value for them, and it never takes off beyond being a box to check on a process list. Developers promptly stop using these tools as soon as their use is no longer enforced.
The long-term destiny of models is to diverge from the code, sometimes in ways that matter. At that point, the best model of the code is the code itself, because it’s the one model you can rely on. You can certainly update the diagrams, but that’s a priority call against whatever else you’re doing, and it doesn’t always win out over other tasks.
Some enterprising tool developers will make modeling tools that generate code, thereby linking the diagrams and the model to the development of the code. When this is employed, it’s complex and it’s never a perfect implementation. Generated code can be buggy, or it may not do what you want. It can take significant effort to make the models generate the code that is needed, and you may find yourself further developing the code-generation tools to produce the code you want. I’ve never seen this have a good benefit-to-cost ratio.
The problem with any of these tools is that they solve only one part of the puzzle. Take Structurizr for example, it doesn't automatically create the diagrams for you or notify you when it detects architectural drift (and automatically update the diagram).
Others miss other pieces of the puzzle, such as listing all your APIs, keeping all your system docs in a single place (ADRs, requirements, etc.), connecting to your repos, and so on.
> Take Structurizr for example, it doesn't automatically create the diagrams for you
The Structurizr DSL is designed for manual authoring (which is what most people tend to do), but there's nothing preventing you from writing some code (using one of the many open source Structurizr compatible libraries) to reverse-engineer parts of the software architecture model from source code, binaries, your deployment environment, logs, etc.
> or notify you when it detects architectural drift
If you do the above, there's then nothing preventing you from writing some tests to notify of architectural drift, etc.
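As a rough illustration of what such a drift test could look like, assuming you've exported the intended relationships from the model to JSON and can recover the observed ones from code, traces, or deployment config (both loaders below are placeholders, and none of this is Structurizr's actual API):

```python
# A minimal sketch of an "architectural drift" test, assuming you already have
# (a) the intended dependencies exported from your model as JSON, and
# (b) the observed dependencies recovered from code, traces, or deployment config.
# Both loaders are placeholders; this is not Structurizr's API.
import json

def load_intended(path="model.json"):
    """Intended container-to-container dependencies, exported from the model."""
    with open(path) as f:
        data = json.load(f)
    return {(r["source"], r["destination"]) for r in data["relationships"]}

def load_observed():
    """Observed dependencies; in practice derived from imports, traces, etc."""
    return {("webapp", "api"), ("api", "database"), ("api", "payments")}

def test_no_architectural_drift():
    intended = load_intended()
    observed = load_observed()
    undocumented = observed - intended   # code does something the model doesn't know about
    stale = intended - observed          # model claims something the code no longer does
    assert not undocumented, f"Undocumented dependencies: {undocumented}"
    assert not stale, f"Stale dependencies in the model: {stale}"
```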
Thank you, but by that argument, I could do that for any diagramming / whiteboarding tool. The point is having a tool that reduces work for me and does these things automatically.
> Thank you, but by that argument, I could do that for any diagramming / whiteboarding tool.
In theory, sure, but the majority of diagramming/whiteboarding tools are not easily manipulated via code/an API. Structurizr is a modelling tool, and the model can be authored by a number of methods ... manual authoring, reverse-engineering, or a hybrid of the two.
> The point is having a tool that reduces work for me and does these things automatically.
I do hope that we will see some tooling that can do these things automatically, but we're not there yet ... fully-automatic (as opposed to semi-automatic) comes with some serious trade-offs.
* Devs: We need a better language than Javascript as the lingua franca of the web!
* Users: Sure, use whatever language you want -- just make sure it compiles to Javascript, which is already the lingua franca of the web.
Keep in mind that the consumers of technical diagrams are often non-technical folks. And they don't care about how they get their diagrams. They just want to be able to understand, at a high level, what's going on in the black box.
You can either convince every single one of them that devs need to focus on better system design tools ... or you can continue to give them the diagrams they want, just using a smarter process to generate them.
Or you can treat them as entirely separate problems, because fundamentally system design tools are building tools, and system diagrams are communication tools. In most cases you can improve them independently.
Exactly. Users don't care about the underlying tech as long as it works and works on the platforms they're using and with other systems they're using. That's it. When users start caring about the underlying tech, you need to actively dissuade them unless there is a sound technical argument. Otherwise you end up with people insisting on using DOS for enterprise applications in 2024 because that's what they've used since 1984 (wish I hadn't experienced this).
> Otherwise you end up with people insisting on using DOS for enterprise applications in 2024 because that's what they've used since 1984 (wish I hadn't experienced this).
DOS as a tech stack is ludicrous (unless you're working in embedded/industrial scenarios, but hey, at least you can command a significant paycheck in exchange for that level of horrors).
A terminal user interface, however? Not ludicrous at all, particularly if your userbase often has literally decades' worth of muscle memory. Banks, governments, and travel agencies (see an example from Amadeus here [1]) live and die by the "legacy" TUI.
If this were a data entry tool, I'd agree on the TUI part. But it's a data processing tool and a bottleneck. There have been several replacement efforts, all canceled (your taxpayer dollars at work; these were government contracts long before my time on the system). It should be 1-3 networked services (common data store; the different tasks make three services a logical outcome rather than one service doing too many things, but it's small regardless, so either way is reasonable).
The customers in this case feared change too much to accept anything that wasn't exactly what it was replacing. Which meant that there was no point in doing the work.
> The customers in this case feared change too much to accept anything that wasn't exactly what it was replacing.
Now, the interesting question is who pushed back. Particularly in government, change to the "status quo" is feared because it would force the really old hands to deviate from a routine they have been living for decades; they'd have to basically re-learn their jobs, all for a few years before retirement. Others don't want any kind of improvement because then the young people, used to modern technology, would run circles around them in productivity, which would impact the "oldtimer" careers negatively.
And in some rare high-stakes cases, the system is mission critical and any kind of deviation from the previous way of working can literally send people to prison or cause massive monetary damage. Here, everyone wants to keep the status quo in order to not awaken the beast, and the incentive grows really powerful when it's some old mainframe system with tons of undocumented implicit assumptions and edge case covers.
Government work in my experience is a whole bunch of negative incentives from all levels thrown into a blender.
Reportedly close to 100% turnover in the actual userbase (versus the paying part of the government) every 2-3 years. So it's not about deviating from routine, but those users aren't given a voice in the change efforts.
This is an issue with the program management, which is very common in government. They are too change-averse even when change is necessary (to satisfy their own requirements, in this case: the DOS box can't connect to the network, but they want the system on the network).
> particularly if your userbase often has literally decades worth of muscle memory
My experience from informal inquiries is that users value fast TUIs regardless of how much experience they have accumulated. Even the young mainframe and (rarer) Clipper users I talk to report preferring these systems over newer, web-based ones.
I worked in the travel industry, and the old TUI was much more efficient at several tasks than any GUI that we could provide.
As developers we should show some sympathy, but some of my co-workers didn't see it that way. They complained about the old TUI, while they themselves preferred using a terminal for git and other tools that actually have most of the functionality they need in a GUI.
> As developers we should show some sympathy, but some of my co-workers didn't see it that way. They complained about the old TUI, while they themselves preferred using a terminal for git and other tools that actually have most of the functionality they need in a GUI.
At least for git, I prefer a terminal as well. It's easy enough to shoot yourself in the foot with git in a terminal, but git GUIs tend to abstract away so much for anything more complex than "git commit/push/pull" that it's even easier, and way harder to recover from.
They are, however, implicitly saying it by taking the position of "I don't care as long as it works."
Javascript is available on every modern browser. It's not going away any time soon. Migrating to some other frontend language, no matter how demonstrably better it is than Javascript, is a Herculean effort. Actually, it's more of an Atlassian effort, since it would require practically an Earth-wide shift.
I'm not sure I've met a consumer of diagrams who ever really wanted to know what's going on, so much as they wanted to have their pre-conceptions validated for the brief time they're interested in the problem.
I have had too many conversations with managers who want to talk about "pipelines", but then you ask how the system handles dropped or failed messages and get a deafening silence or "just don't worry about it, it's rare".
People love arrows showing the "direction" data is going, but then do things like assume that direction means "security" (it doesn't: any channel other than a full-on data diode is two-way) or implies other properties (e.g. about storage).
Compiling to Javascript is a really low bar: all Turing-complete languages are equivalent, so one can be translated to another. You could have Lisp or Haskell compile to Javascript. The problem is that everyone has their favorite language, so such an ecosystem would be fragmented, and Javascript is the lowest common denominator.
I disagree. Since Covid pushed us into WFH, the number of slide decks I am subject to is trending downwards. Most of my presentations these days happen in Miro, or Notion, or over a screen share of whatever we’re doing. There’s still the odd PowerPoint but they do have their uses as much as I dislike them.
The article does indeed argue that we need smarter ways to create diagrams - so that devs don't have to manually create / update them and other teams get the info that they need.
Saving time and effort for devs, while making dynamic visualizations with different levels of detail (i.e. more and less technical for different audiences) is possible.
It's not so hard to do static and tracing-based analysis to capture all the calls that various systems make between themselves. Similarly, it's not so hard to graph all of the metrics of a system.
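For the static half, the mechanical part really is small. Here's a rough sketch using Python's ast module to record which module calls into which, based on imported names; the directory layout ("services") and the call-matching heuristic are purely illustrative.

```python
# Rough sketch of the static-analysis half: walk Python sources and record
# which module calls into which other module, based on imported names.
# The directory layout and matching heuristic are illustrative only.
import ast
from pathlib import Path
from collections import defaultdict

def call_graph(src_root="services"):
    edges = defaultdict(set)              # caller module -> called modules
    for path in Path(src_root).rglob("*.py"):
        module = path.stem
        tree = ast.parse(path.read_text())
        imported = {}                      # local name -> originating module
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                for alias in node.names:
                    imported[alias.asname or alias.name] = alias.name
            elif isinstance(node, ast.ImportFrom) and node.module:
                for alias in node.names:
                    imported[alias.asname or alias.name] = node.module
            elif isinstance(node, ast.Call):
                func = node.func
                target = None
                if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
                    target = imported.get(func.value.id)   # billing.charge(...)
                elif isinstance(func, ast.Name):
                    target = imported.get(func.id)         # charge(...) after "from billing import charge"
                if target and target != module:
                    edges[module].add(target)
    return edges

if __name__ == "__main__":
    for caller, callees in call_graph().items():
        for callee in sorted(callees):
            print(f"{caller} -> {callee}")
```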
That's not really all that useful. Diagrams, like other forms of documentation, are a format for communicating something to the reader. That means they should spend more space on the important flows rather than the exceptional ones, and explain the meaningful parts of a sequence rather than just giving a series of function calls. The various tools we have for doing this (boxes showing which systems talk to which, sequence diagrams, database schema diagrams) provide a rich enough language for that communication.
Death star diagrams are bad because they spend a bunch of time and effort to convey one piece of information: "there are a lot of systems here" rather than anything actually useful for someone attempting to navigate a specific part of the system.
I think at least part of the problem is that today's "agile" methodologies lure people into almost completely neglecting analysis and design, as if these steps were forbidden in agile. So for many dev teams I have seen, agile basically means "we have no plan".
I’d characterize it as the degradation of the product manager role. Too many project managers get promoted to product but don’t really know what that job entails so they just project manage harder.
And to a project manager “I’m gonna take a week and think about it” is not an acceptable answer.
I agree that many teams, in an effort to move away from waterfall development and Big Design Up Front, have gone the opposite way and completely skip system design. Which is a mistake, because you need some upfront design.
As Dave Thomas said: “big upfront design is dumb. No upfront design is dumber”.
I believe the answer is to have a single model of everything.
You can build the model from whatever: draw it with a GUI, query your server infrastructure, analyze binaries, collect it from databases, parse it from documentation, process your issues, etc. The important point is to merge all the information into a single model.
You cannot represent this model on a screen in any meaningful way. Instead you can only get certain "views" into it to understand an aspect or answer specific questions.
Does it exist today? Many tools and processes have been developed to achieve it. For example, "Model-based systems engineering" is a very formal approach. I have yet to see it realized in practice though.
I'm pretty sure it isn't a single tool or process though. By definition it integrates with nearly everything, and that means a lot of customization. You won't get it off the shelf.
This is something that's been living in my head for a few years now, and no, it does not exist yet. I'm also convinced the only way to get it right is to have a single graph modelling the entire system, apply filters on the graph to get only the nodes you're currently interested in, and then have different diagrams produced from that as the output. How else would you describe the flow of packets from the internet through the firewall, what the logical network looks like, and which physical location things are located at? These questions all interfere with each other on a conceptual level, yet are all conceivable using different attributes of the same connected graph nodes.
It’s very complex, very interesting, and lots of work.
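A minimal sketch of that "one graph, filtered views" idea, using networkx; the node attributes (layer, site) and the example topology are made up.

```python
# Sketch of "one graph, many views": every element is a node with attributes,
# and each diagram is just a filtered projection of the same graph.
# Node attributes (layer, site) and the example topology are made up.
import networkx as nx

G = nx.DiGraph()
G.add_node("internet",  layer="network")
G.add_node("firewall",  layer="network",  site="dc-1")
G.add_node("webapp",    layer="logical",  site="dc-1")
G.add_node("database",  layer="logical",  site="dc-2")
G.add_edge("internet", "firewall", kind="packet-flow")
G.add_edge("firewall", "webapp",   kind="packet-flow")
G.add_edge("webapp",   "database", kind="query")

def view(graph, **wanted):
    """Keep only nodes whose attributes match every key/value in `wanted`."""
    keep = lambda n: all(graph.nodes[n].get(k) == v for k, v in wanted.items())
    return nx.subgraph_view(graph, filter_node=keep)

network_view = view(G, layer="network")   # the packet path: internet -> firewall
dc1_view     = view(G, site="dc-1")       # everything physically in dc-1
print(list(network_view.edges()), list(dc1_view.edges()))
```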
I've had the same thought but then gave up, because it's not even a single graph: we assume it's a single graph because parts of it reduce to "concept - connection - concept" tuples... but that fails to capture the reality that the aggregate behavior of a system can implement a wholly different behavior.
A practical example of this would be neural networks in AI, which collectively implement functions much greater than the individual links.
I have reached the same conclusion: if you capture the model as a graph, you can then derive as many visual representations from it as you need. For instance, this tool can generate ArchiMate context diagrams from the graph data (mostly by doing "ontology translation"):
https://github.com/zazuko/blueprint
The problem with "a single model of everything" is that it is an inferior tool.
It's a drawing or diagramming tool, but the stand-alone diagramming tools are better.
It's a text composition tool, but word processors are better.
It's a code writing tool, but IDEs are better.
In every single thing it tries to do, a specialized tool is better. So this single tool needs to be close enough to the best in every category in order to not be a boat anchor holding you back. So far, nothing has come close.
Nothing stops us from using specialized tools in this case. They just need to work not on the "single source of truth" of their respective domain, but on a projection of a single artifact into the relevant dimension. The devil's in the details, of course, but at least with IDEs I can say with certainty, based on my experience, that sticking to directly editing the single-source-of-truth plaintext codebase wastes much more time and cognitive effort than IDEs save us.
Yes, that is a good point. It might be the reason why it fails in practice. Whatever view one produces from a single model, it will look crappy and cheap compared to the PowerPoint slide of someone else. It might be more truthful and more up to date, but it isn't as persuasive. And persuasion is what a presentation is ultimately about: it should influence the behavior of the audience.
Not sure, but IMO a dataflow diagram from good ol' SSADM got a lot of the way there. Unlike a lot of the modern techniques which model (groups of) deployed components, DFDs were strictly logical and functional and included data stores and users as well as functional components. So it was possible to model data in flow and at rest, and the characteristics of each, including where it crossed system boundaries.
IMO this was the best diagram format to get an overall view of the system, or reveal which areas needed further analysis.
It sounds to me like what you're describing is a single *tool*, not a single model. Is that possible?
I agree that we need multiple views, but it isn't just a matter of filtering part of a single view - it's also different fundamental primitives that are needed to communicate different kinds of ideas or aspects of a given system.
To me, this seems parallel to the evolution of LLMs into multi-modality by default. Different queries will need different combinations of text, heat maps, highlights, arrows, code snippets, flow charts, architecture diagrams, etc.
It's certainly not easy, but it's exciting to consider :)
> You cannot represent this model on a screen in any meaningful way. Instead you can only get certain "views" into it to understand an aspect or answer specific questions.
There is Capella, which is a tool (software) and method for designing systems. It uses the Arcadia method (MBSE). It is quite extensible.
I don't understand how you can write an article about system design without mentioning Enterprise Architect or Capella. I've read this as "we need to come up with a solution that already exists".
Almost all of his complaints stem from using generic drag-and-drop diagramming tools. A modern take on system diagramming, like Ilograph[0], solves most of these issues (IMBO).
There are quite a few tools cropping up trying to solve this problem. Multiplayer.app is one example - they use OTel to gather distributed traces from your system and ensure you automatically get notified when there's drift.
>Today, we have the technology and knowledge to create tools that prevent developers from wasting valuable development time deciphering static, outdated diagrams,
One issue is that diagrams in general are pretty universal. As long as your tool can make shapes and connect them, you can use it for any kind of architecture. For example, it will work for flow charts for 50-year-old Fortran, UML from the '90s, microservices diagrams from now, and I bet whatever is common 50 years from now.
>> >Today, we have the technology and knowledge to create tools that prevent developers from wasting valuable development time deciphering static, outdated diagrams,
"Outdated", imo, is subjective to the content and is bias'd by the time since creation.
As long as wireframed thought is the bare skeleton, the diagrams are merely a read-only user interface to communicate to a particular end-user audience.
I don't think the issue is with diagrams per se, but with how we create them. They are super helpful in conveying meaning but why do we need to create and update them manually?
Thank you everyone for reading my article! I’m the author, Thomas Johnson.
This article stems from my frustration with the typical approach of asking, “What diagramming tool should we use?” instead of addressing the root problem: the need for up-to-date, easily accessible system architecture information.
That’s why I co-founded Multiplayer. We focus on automating the creation and maintenance of system architecture diagrams and creating a single source of truth for system information. This includes your individual components, APIs, dependencies, and repositories.
We’re language and environment agnostic and you can start with a napkin sketch or a photo of your whiteboard. And this is just the start, we have many plans for how to evolve system design tooling including supporting popular integrations and models like C4.
If every acknowledgement of the issue at hand (or any issue, IMO) were accompanied by a minimally viable solution (even one that doesn't work), as the de facto standard for providing critical feedback...
As long as some effort is put toward a possible countering solution in response, I truly feel that we would be closer to an end solution.
Check out https://www.multiplayer.app/
The author is one of the co-founders, but I understand him not wanting to just promote his tool and wanting instead to have a conversation about the problem.
There is a potentially interesting standard many may not know: IEC 61499, the standard for distributed automation, first edition 2005(!), related to PLC-type controllers.
It was meant to be the next step forward from IEC 61131, which described the five PLC control languages, but it was far ahead of its time.
61499 introduced the concept of an executable specification, and went on to develop tie-ins with XML and OPC. The executable specification is a particularly interesting idea, effectively what is emerging as no-code these days.
But two decades on it is still only starting to get traction, seemingly because it was so far ahead of its time, and the target audience was mostly only just branching out into software because they were now programming ladder logic instead of designing panel boards with hundreds of relays and timers.
But it is worth a look for some ideas in the space, particularly the executable specification concept (you know this can only lead to a DSL of some kind), but also their diagramming, which looks a lot like UML but maybe works better, along with the 4Diac software package, consideration of how some of the more esoteric aspects of OPC fit into all this, and, slightly more obliquely, TLA+.
In 1971, M. Bryce and Associates marketed PRIDE, the first commercial software methodology for general business use. It was originally an entirely manual process using paper forms, but it was already far more comprehensive than anything called a "methodology" we use today. For example, in PRIDE, every artifact of the systems design and implementation process -- each high-level requirement, each software module, each database table and column, and more -- is assigned a tracking number and extensively documented as to its purpose in a unified knowledge repository. This was decades before Git or JIRA, and at first it was all done by hand, but not for long.
In the 80s, they marketed PRIDE/ASDM, which combines PRIDE with Automated Systems Design Methodology, a suite of system design tools written in COBOL for mainframes. Far from being mere diagramming tools, they assisted in all aspects of an information systems design from initial requirements down through coding and database management. A key component of ASDM was the Information Resource Manager (IRM), a searchable central database of all of the information artifacts described above along with their documentation. Another component was Automated Instructional Materials (AIM), the online documentation facility, which not only provided instructions on how to use the system, it also provided step-by-step directions (called "playscripts" in PRIDE-speak) for each member of the development team to follow, in order to see a business system through from abstract design down to software implementation and deployment. Provided the directions were followed, it was next to impossible to screw up a system or subsystem implemented through PRIDE/ASDM.
This level of comprehensiveness and clarity is the gold standard for business IS development. And it seems to be nearly lost to time.
"every artifact of the systems design and implementation process [..] is assigned a tracking number and extensively documented", "it also provided step-by-step directions [...] for each member of the development team to follow"
Maybe this made sense back in times of COBOL, but with the modern high-level languages this sounds like bureaucratic hell, the worst kind of micromanagement.
Well, what's happened in the years since is that programmers have been in the driver's seat when it comes to IS, and that's resulted in a lot of sloppy, disorganized thinking in the field. Programmers in general fancy themselves as artists and resist organization and discipline -- as true today as it was in the COBOL days -- so it's no surprise that a method based on improved process control seems like micromanagement. But that level of discipline is needed in order to do the work right. Ultimately it should come down to self-discipline.
Unfortunately, IS has morphed into a programmer-centric field, which leads us to Agile, a family of methodologies by programmers for programmers. Agile mythology has it exactly backward: Agile is not the thing that saved us from waterfall. Agile was the "traditional" software development process PRIDE was created as a response to: start programming right away, deploy rough versions to production, and keep iterating with code changes until you have something that kind of, sort of works. Compared to this, PRIDE delivers improved productivity and quality, reduced costs, and reduced administrative overhead. I've never been on an on-time, under-budget Agile project, and one of my former employers basically aborted their entire Agile transformation.
Once the industry has shaken off its Agile wine, especially as profitability becomes more important in the post-ZIRP economy, software development will look more like PRIDE than like anything we see today.
It is my belief that Agile arrived because it's the best way to work when managers, "architects", and programmers cannot estimate, cannot communicate well, and don't have a good idea of what they are doing. After all, there is no point in a daily standup if everyone can recognize when they are stuck and ask for help; and there is no point in group planning if there is someone who knows the system and has a good idea about each team member's capabilities.
That problem is not going to disappear if you move to a bureaucracy-heavy development process. OK, you've removed programmers from the driving seat... so who is going to be sitting there now? Some sort of "system architect", someone who does not program themselves, but somehow knows about all possible problems and can design great systems? What will happen to all the plans when one discovers that the database the architect mandated is full of bugs when used in the particular mode the project needs? Or the object store cannot perform fast enough and it's time to switch to a distributed architecture?
I don't think PRIDE can work on new & innovative projects.. Sure, if your org is specializing in building identical e-commerce websites, or doing one ERP system after another, then by the 3rd time you'll know enough to be able to plan most of the thing and PRIDE maybe can make sense. But this niche is shrinking all the time - if there is really a need for many identical software projects, someone will make a SaaS and most customers will flock to it because of much lower price.
> It is my belief that Agile arrived because it's the best way to work when managers, "architects", and programmers cannot estimate, cannot communicate well, and don't have a good idea of what they are doing. After all, there is no point in a daily standup if everyone can recognize when they are stuck and ask for help; and there is no point in group planning if there is someone who knows the system and has a good idea about each team member's capabilities.
You're right. PRIDE can't make a silk purse out of the sow's ear of unclear requirements, poor communication, and people not knowing what's needed. But... neither can Agile! You have to hire people with the requisite skills. You wouldn't hire someone who couldn't write a line of code as a programmer, would you? So why hire someone who can't communicate as a systems analyst or manager? Agile methods at best might converge on an adequate solution via random walk or gradient descent, refactoring and rewriting code in a manner that annoys the user less and less. So, granted, it may indeed be the best way to produce Hamlet when all you have is ten thousand monkeys with typewriters. But if Hamlet is what you wanted, why use the monkeys and typewriters in the first place? Better to build the right thing, the right way and that's what PRIDE is designed to do.
> That problem is not going to disappear if you move to a bureaucracy-heavy development process. OK, you've removed programmers from the driving seat... so who is going to be sitting there now? Some sort of "system architect", someone who does not program themselves, but somehow knows about all possible problems and can design great systems? What will happen to all the plans when one discovers that the database the architect mandated is full of bugs when used in the particular mode the project needs? Or the object store cannot perform fast enough and it's time to switch to a distributed architecture?
Back in the day, there used to be a profession known as the systems analyst. It was their job to analyze the information needs of the business, produce a report on what the current state of things was and what the units of the business actually needed and could provide in terms of information, and propose a design for an information system that would ensure that the right workers had the right information at the right time. An information system is not software and it does not necessarily run on a computer; any such system could plausibly be designed as a manual process that uses forms and other paperwork. It's just much more efficient to use computers when they're available, so information systems incorporate computers and software to greatly reduce tedium and increase productivity.
The systems analyst does not "know about all possible problems". Rather, they ask the people in the business what their specific problems are and design a system to solve them. Systems analysts also do not make decisions like which DBMS to use. That is a concrete concern; and information systems design happens in the abstract. (Tim Bryce says that systems analysis is "logical" whereas programming is "physical"[0]. Same thing.) Systems analysts collaborate with programmers and DBAs to determine what's technically feasible, and they make adjustments accordingly depending on the feedback from these more technical personnel.
Systems analysis is kind of a forgotten field nowadays. Rather, sloppy thinking in business has promoted the idea of the genius programmer who can solve business problems through writing lots of code. As a result, programmers have been promoted to systems analysis roles, and the result has been a disaster. Systems analysis requires a big-picture perspective with relatively little focus on technical detail. Understanding the business as a whole, and the information flows of that business in the abstract, is paramount. It also takes people skills: you have to know the right questions to ask, and when "the users don't really know what they want" (a common complaint of programmers, one which they believe negates the whole point or need for requirements elicitation), you have to figure that out as well. For example, a salesperson may request an itemized list of everything in inventory, when what they actually want is an aggregated breakdown of inventory by product category. So then you have to drill down and ask the sales team, for what purpose do you intend to use this information? And based on that, propose a better solution. Assigning a programmer to this task will result in disaster because the programmer won't be a "people person". They won't know which questions to ask, nor how to interpret the answers, because they don't think in terms of the business but rather in terms of technology. There's a reason why systems analysis was historically separate from programming.
Systems analysis should take up the bulk of an IS implementation project. The creators of PRIDE say that ideally, 60% of the project's time and budget should be taken up by systems analysis and only 15% by programming; furthermore, that there should be more analysts than programmers on a project. This is because once you've really nailed down and defined what the information requirements of the business are and how that information should flow through divisions, departments, etc., programming becomes just a straightforward act of translating those flows into code. (Back in the day, Tim Bryce wrote an essay called "Theory P" in which he described programmers as "just translators", which rustled a lot of programmer jimmies but was ultimately correct: under PRIDE, there is literally nothing else for programmers to do!) It's the analysis that's the hard part: engaging the relevant personnel in round after round of discussions and stepwise refinement of the original system or subsystem design until all can come to agreement that the design would provide the right information at the right time to the right people. Only then does coding begin.
The Agile way is just the opposite: programmers take a first stab at implementing a system in code, put it in front of users, identify the ways the users get frustrated, go back to the code and make a guess at changes that need to be made to better satisfy the users, roll that out to the users, repeat ad nauseam. It's all guesswork, stabbing in the dark. Very little thought is given to the needs of the business or how information is supposed to flow through the business to get where it needs to go. This is literally what Milt Bryce described as the "traditional" method of software development in 1982[1]. Agile came first, and it sucked!
> I don't think PRIDE can work on new & innovative projects.. Sure, if your org is specializing in building identical e-commerce websites, or doing one ERP system after another, then by the 3rd time you'll know enough to be able to plan most of the thing and PRIDE maybe can make sense. But this niche is shrinking all the time - if there is really a need for many identical software projects, someone will make a SaaS and most customers will flock to it because of much lower price.
I think many companies vastly overestimate the amount of innovation they need to do, and specifically the amount of technical innovation they need -- most businesses need more innovative systems designs, not more innovative programming techniques. Theoretically, 99% of businesses could handle all their information systems through mainframes running COBOL on green screens. Not that they should -- modern programming techniques have made programmers' lives easier and more productive after all[2] -- but they most certainly could.
From a software standpoint, most businesses need CRUD operations, data handling, reports, and analysis -- and not much else. The key is to identify what the informational needs of the business are before you commit to a technical path. The company I worked at that abandoned their Agile transformation was a very large, old, established company that was attempting to cosplay as a hip, innovative tech startup. They took out an entire floor of a WeWork. Very "how do you do, fellow young people?" moves. Then the pandemic hit, and then the ZIRP ended, and they started to take a long hard look at the expenses incurred vs. the actual benefits of these moves, and started rolling them back. What they really needed in the end, was fresh leads routed to salespeople, and/or existing customers routed to support personnel. And you're right -- a lot of this is stuff they could have done with SaaS like Salesforce.
But here's the thing: PRIDE is for building information systems -- not just software. It can help you identify what you need to build and what you can just buy to implement a given system. It probably would've saved this company quite a bit of time, money, and developer effort to have done more upfront systems analysis and identified the parts that could just be done with an off-the-shelf solution (as well as the parts that actually did need bespoke coding). But they were caught in the microservices, cloud, and Agile hypecycle, and those were the hammers with which they were going to drive the information-systems screws into place.
I feel like https://plantuml.com/ gets close to what I want by being able to make diagrams with code, but for system design what I really want is to be able to have diagrams generated directly from the code itself, maybe with some extra comments/annotations that help it along.
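A rough sketch of that "generate the diagram from the code" direction: scan a package's intra-package imports and emit PlantUML component-diagram source. The package path ("myapp") is hypothetical, and a real setup would want the extra annotations mentioned above, but it shows the shape of the idea.

```python
# Rough sketch of generating a diagram straight from the code: scan a package's
# intra-package imports and emit PlantUML component-diagram source.
# The package path is hypothetical; real projects would add annotations too.
import ast
from pathlib import Path

def imports_of(path: Path) -> set[str]:
    """Top-level module names imported by one source file."""
    tree = ast.parse(path.read_text())
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found |= {alias.name.split(".")[0] for alias in node.names}
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found

def to_plantuml(package_dir="myapp"):
    modules = {p.stem: p for p in Path(package_dir).glob("*.py")}
    lines = ["@startuml"]
    for name, path in sorted(modules.items()):
        for dep in sorted(imports_of(path) & modules.keys()):
            if dep != name:
                lines.append(f"[{name}] --> [{dep}]")   # component-to-component edge
    lines.append("@enduml")
    return "\n".join(lines)

if __name__ == "__main__":
    print(to_plantuml())   # pipe the output into plantuml to render
```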
Commented this above, but I actually built a site to auto-generate up-to-date interactive visual diagrams for codebases.
It's pretty plug-and-play: using static analysis and LLMs, we generate documentation plus an interactive system diagram you can recursively explore.
Product people will not. So we have the inglorious situation of two sets of diagrams and notes on the translation between them. If the problem were easy to solve, it wouldn't be something you can express in a sentence while simultaneously asserting how easy it is.
Some people do not adequately constrain the problem while others overly constrain it, conceptually.
BPMN was created for product people. UML use case diagrams were created for product people.
I don't see the argument why product people don't need to learn this. They work in a software organization, they build software, they should be expected to know the diagrams pertinent to them.
> I don't see the argument why product people don't need to learn this.
They need to, but in my experience, they simply don't want to. They want to get the credit for the product, but they want to wave their arm and just have its features appear without actually specifying them.
I understand, because it is hard, exacting work. But as it is, specificity just gets pushed from product people down to the lowest-level person responsible for implementation.
Honestly, ALL diagrams are a dead end. Any system without a single source of truth will be a big hassle to maintain, and therefore won't be updated, which makes the whole thing useless.
The only one I can see working, as it has already proven itself for certain tasks, is something like the Blueprints from Unreal Engine, but where each node represents something big enough to warrant its own node, with no low-level details like variables, conditionals, loops, etc. Otherwise you end up with literal spaghetti code.
This is an excellent point. You need to pull all the information about a system in a single place so that then you can choose what level of abstraction or deep dive into the details you need.
Projects like Multiplayer.app are in their early days, but I can see the potential of focusing on concentrating this info and automating the maintenance of docs and diagrams.
Google Slides and Drawings (under + > More...) are great tools. They support the diagramming objects, connectors, shared collaboration, and integration with the other Google Doc tools. They also have great history and diff tools as well as team comments, with actions.
...but they are manual. It's great to have history, diff, comments, etc., but why do I have to spend time manually creating a diagram and updating it every time I add a new dependency, when it could be done for me automatically?
To take this a different way, devs need AI-driven system design tools, not diagramming tools. The work to automate the process to define, code, deploy, and document tech/app solutions using GenAI is arguably well underway, and the days of humans producing layers of boxes-and-arrows documentation are thankfully numbered.
More directly, the need for the special snowflakes and the ensuing complex system design processes will be reduced as we drive towards more automation N(A) frameworks.
If we had a complete and evolving set of requirements for any reasonably complex system, we wouldn't really read them once they no longer match the system we have delivered.
Systems and requirements go hand in hand but the reality is requirements are eventually codified, and after a point only the system matters because the system is the reality and the requirements were just the formal inspiration.
I would love to see automated requirement analysis and verification from code.
You need to use diagramming tools that are meant for complex systems. Almost all of the complaints in this article are caused by using generic drag-and-drop diagramming tools.
Excalidraw is great for brainstorming and sketching. But I don't exclude pairing it with a tool that also automatically shows me all the metadata of system components, automatically detects architecture drift, etc.
Also, GUI design tools like the WinForms designer, but for more modern toolkits please. VB/Delphi/WinForms were such a joy to design in, and XAML (let alone web frontend) is such a pain to write.
https://c4model.com/
https://structurizr.com/
https://www.ilograph.com/
https://mermaid.js.org/
https://icepanel.io/