It was a master-detail “form”: rich-formatted financial records in the left pane and SVG-heavy graphs attributing records to edges on the right. Both panes were already heavily filtered, and the whole thing had to stay navigable without constantly changing subfilters.
Roughly estimating, every left row came to 15-20 vnodes and every graph to 50+ at minimum. I think I’ve seen 12-15k vnodes on an average day, depending on how much data remained unmanaged and how structured the right side was in the middle of experiments.
Surely there must be better ways of handling that requirement. I'm not sure a user can actually consume tens of thousands of DOM nodes.
We tried windowing the data, but that simply moved the delays onto the operators. They don’t consume it all at once, but they do have to detect groups using “natural intelligence”. The fixed process, spanning multiple entities and liabilities, wouldn’t let us automate it further. Sometimes it is what it is; welcome to real-world business complications. As I said, it’s not Mithril’s fault at all, but it’s something to consider if you have to deal with a setup like this.
Humongous DOMs eventually stop scaling even without any JavaScript on the page (e.g. look at how long the ECMAScript spec takes to load), so it's definitely important to account for DOM size early in design.
With that said, for mithril specifically, there are a few different techniques that I've heard people use to avoid overly slow diff times:
- occlusion culling (basically render only the list items that are actually visible on screen; see the first sketch after this list)
- islands (basically mount a sub-app onto a vnode.dom so that it renders independently, without forcing a rerender of the parent app; this takes advantage of the idea that data-down, events-up is a pattern that works across sub-app boundaries; see the second sketch after this list)
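Here's a minimal occlusion-culling sketch, assuming fixed row heights and a `Row` component plus `rows` data of your own (both hypothetical); only the rows intersecting the scrolled viewport become vnodes, and spacer divs stand in for the rest of the scroll height:

```js
// Sketch only: assumes fixed-height rows and a hypothetical Row component.
const ROW_HEIGHT = 24       // px, assumed constant for every row
const VIEWPORT_HEIGHT = 600 // px, height of the scroll container

function CulledList() {
  let scrollTop = 0
  return {
    view({attrs: {rows}}) {
      const first = Math.floor(scrollTop / ROW_HEIGHT)
      const count = Math.ceil(VIEWPORT_HEIGHT / ROW_HEIGHT) + 1
      const slice = rows.slice(first, first + count)
      return m("div", {
        style: `height:${VIEWPORT_HEIGHT}px;overflow-y:auto`,
        // the handler triggers mithril's auto-redraw, re-culling on scroll
        onscroll: e => { scrollTop = e.target.scrollTop },
      }, [
        // spacers preserve the scrollbar geometry of the full list
        m("div", {style: `height:${first * ROW_HEIGHT}px`}),
        slice.map(row => m(Row, {key: row.id, row})),
        m("div", {style: `height:${Math.max(0, rows.length - first - slice.length) * ROW_HEIGHT}px`}),
      ])
    },
  }
}
```

And a minimal islands sketch, assuming a hypothetical `Chart` component that reads from a shared `model` object; the sub-app mounted onto `vnode.dom` redraws on its own schedule, and the parent's diff never descends into it:

```js
// Sketch only: Chart, model and onselect are placeholders for your own code.
const ChartIsland = {
  oncreate(vnode) {
    const {model, onselect} = vnode.attrs // data-down via a shared model, events-up via callback
    m.mount(vnode.dom, {view: () => m(Chart, {model, onselect})})
  },
  onremove(vnode) {
    m.mount(vnode.dom, null) // tear the sub-app down before the parent drops the node
  },
  // childless container vnode: the parent's diff has nothing to recurse into,
  // so whatever the sub-app rendered inside is left alone
  view: () => m("div.chart-island"),
}
```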
Yes, good old model-(controller implements datasource)-view-cellview from any native toolkit. Sadly, implementing that in HTML, which has no primitives for it, means rebuilding NSScrollView/GtkScrolledWindow from scratch while fighting HTML/CSS complexity at the same time. That alone is a project much bigger than any enterprise fintech toy I'll ever dare to approach. Maybe some day the web will reinvent native cells and cell-rendering containers, who knows.
Agreed. Charts, large SVGs, etc. are generally best rendered independently of the vdom. I’ve seen both React and Preact crawl under similar setups, so this isn’t a knock on Mithril, although Mithril does appear to be a bit slower than Preact on large diffs.
For large immutable DOM trees like CSS-stylable SVG graphics, I've seen people use m.trust, which makes the diff of the entire SVG tree as cheap as a single string comparison.
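As a rough sketch of that, assuming the graph is serialized once into an SVG string (`graphToSvg` is a hypothetical serializer) and cached by id:

```js
// Sketch only: graphToSvg is a placeholder for your own graph-to-SVG serializer.
const svgCache = new Map()

const TrustedGraph = {
  view({attrs: {graph}}) {
    if (!svgCache.has(graph.id)) svgCache.set(graph.id, graphToSvg(graph))
    // m.trust injects the markup as-is; between redraws mithril only has to
    // compare the cached string, not thousands of SVG vnodes. The output is
    // still real DOM, so external CSS applies to it as usual.
    return m("div.graph", m.trust(svgCache.get(graph.id)))
  },
}
```

The usual m.trust caveat applies: only feed it markup you generated yourself, never user input.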
For complex charts, I think deferring to something like d3 might make more sense than a vdom-based implementation, since d3 provides better domain-specific APIs.
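For example (a sketch, assuming a recent d3 and a hypothetical `drawChart` routine), mithril can own just the container element while d3 builds and updates the SVG inside it through lifecycle hooks, so the chart never enters the vdom diff at all:

```js
import * as d3 from "d3"

// Hypothetical d3 routine: draws/updates a simple line chart inside el.
function drawChart(el, data) {
  const svg = d3.select(el).selectAll("svg").data([null])
    .join("svg").attr("width", 400).attr("height", 200)
  const x = d3.scaleLinear().domain([0, data.length - 1]).range([0, 400])
  const y = d3.scaleLinear().domain(d3.extent(data)).range([200, 0])
  svg.selectAll("path").data([data]).join("path")
    .attr("fill", "none").attr("stroke", "steelblue")
    .attr("d", d3.line().x((d, i) => x(i)).y(y))
}

const D3Chart = {
  oncreate(vnode) { drawChart(vnode.dom, vnode.attrs.data) },
  onupdate(vnode) { drawChart(vnode.dom, vnode.attrs.data) },
  view: () => m("div.chart"), // mithril diffs only this childless container
}
```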