Resist this temptation. It is better for the true and false branches to always appear in the same order than to permute things to avoid edge crossings.
I’m not sure how far you can push the generality of the iongraph algorithm. My gut is that it could be made to work somewhat well for any control flow graph with reducible control flow, but I expect there would be many complications.
To be more precise, we benefit from knowing the nesting depth of each block. That, plus reducible control flow, is enough to reliably find loops. We also know exactly which edges are loop backedges; it’s easiest when these are explicitly annotated, but it may be possible to derive that info from other loop metadata. (In Ion we have a dedicated “backedge block” per loop, which makes it obvious what we should do, but which other compilers likely wouldn’t have.)
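For compilers without explicit backedge annotations, one standard way to derive them is a depth-first search: in a reducible CFG, the "retreating" edges (edges whose target is still on the DFS stack) are exactly the loop backedges, because the target dominates the source. A minimal sketch, assuming the CFG is given as a hypothetical successor map:

```python
def find_backedges(cfg, entry):
    """Return DFS retreating edges of a CFG given as {block: [successors]}.

    For a reducible CFG these are exactly the loop backedges
    (each edge's target dominates its source).
    """
    backedges = []
    state = {}  # block -> "active" (on DFS stack) or "done"

    def dfs(u):
        state[u] = "active"
        for v in cfg.get(u, []):
            if state.get(v) == "active":
                # v is an ancestor on the current DFS path: u -> v closes a loop
                backedges.append((u, v))
            elif v not in state:
                dfs(v)
        state[u] = "done"

    dfs(entry)
    return backedges

# Simple loop: A -> B -> C -> B (backedge), C -> D (exit)
cfg = {"A": ["B"], "B": ["C"], "C": ["B", "D"], "D": []}
print(find_backedges(cfg, "A"))  # [('C', 'B')]
```

Note the reducibility assumption is what makes this safe: in an irreducible CFG, which retreating edges you find depends on DFS order, so this shortcut no longer identifies loops unambiguously.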
There are also a bunch of available visual channels that this doesn’t yet use. Color, shape, pattern, line type, and “backdrops”/groups could all be implemented to provide additional clarity on whatever parameters you might care about.
I used to use the DCC application Nuke, and it had some very complex graphs. But the different nodes were all color-coded, so when you zoomed out you could get a good idea of what was happening where just from the average color of a section.
It didn’t have an auto layout algo as good as this though.
What I really like about the article is the reflection on the limits of optimization. Optimization gets you mostly OK results most of the time, but there will always be pathological cases where optimization gives you bad results, and there’s often room for drastic improvements if you’re allowed to make stronger assumptions.
The part that is timing out is actually the JS interpreter, not the graph viewer. It’s a total hack to get SpiderMonkey running on the page at all.
The full Frankenstein stack is: SpiderMonkey compiled in arm emulation mode, to a WASI 0.1 module, adapted to a WASI 0.2 component, transpiled to the web with jco, running in some random WASI shim.
We do this because the JS runtime needs inline caches to be filled out before optimization, which requires a JIT and actual execution of machine code. Otherwise you just get a graph full of `Unreachable`. Frankly I’m amazed it works at all.
I agree that we haven’t gained much yet from looking at large graphs. Usually we can reduce any problem of interest to something small. Still, Graphviz produces very ugly results even for small graphs, and that is where iongraph shines.
To be clear, what I think makes the latter graph more readable is specifically that the wires are easier to follow. Yes, it’s subjective, but it’s backed up by my own personal experience. Long term I think we can add more interactive features to help in such cases, e.g. search and dimming irrelevant wires.
Hello, I am the engineer in question. I am not actually super familiar with the details of the build system, but from what I saw, the main issues were:
- Lots of constant-time slowness at the beginning and end of the build
- Dubious parallelism, especially with unified builds
- Cargo being Cargo
Overall it mostly looks like a soup of `make` calls with no particular rhyme or reason. It's a far cry from the Ninja example the OP showed in his post.