
Could you tell us what you have learnt about the performance of these tools? E.g. when you write about different approaches to building charts with d3 and react here (https://www.react-graph-gallery.com/about), is approach 2, which directly interacts with the DOM, more performant than approach 3, which has to go through React? Also, what, approximately, is the upper limit of a dataset size after which the React+d3=>svg approach begins to stutter?


In my experience (author of ReactForDataviz) the React renders SVG approach starts struggling around 10,000 to 50,000 nodes depending on hardware and what kind of calculations you’re doing.

Directly manipulating the SVG DOM with D3 (wrapped in a React black box) scales to around 100,000 nodes. After that you really have to move to canvas or WebGL.
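The "React black box" pattern mentioned here usually means a ref plus an effect: React renders a single empty group element, and D3 owns everything inside it. A minimal sketch, assuming a scatterplot of `{x, y}` points (the component name and props are illustrative, not from the post; needs `react` and `d3` installed and a `.tsx` file):

```typescript
import { useRef, useEffect } from "react";
import * as d3 from "d3";

// React renders one <g>; D3 mutates every node inside it directly,
// bypassing React's reconciler entirely.
function Scatterplot({ data }: { data: { x: number; y: number }[] }) {
  const ref = useRef<SVGGElement>(null);

  useEffect(() => {
    if (!ref.current) return;
    d3.select(ref.current)
      .selectAll("circle")
      .data(data)
      .join("circle") // enter/update/exit handled by D3, not React
      .attr("cx", (d) => d.x)
      .attr("cy", (d) => d.y)
      .attr("r", 2);
  }, [data]);

  // React never re-renders the circles themselves.
  return <g ref={ref} />;
}
```

From React's point of view this component has exactly one child node, no matter how many circles D3 creates, which is why it scales further than letting React render each circle.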

At those numbers you also start getting huge performance gains by mutating data in place and doing less copying. Everything you learn in "how to do React" tutorials quickly starts being wrong once your arrays have thousands of elements.
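To make the mutation-vs-copying point concrete, here's a sketch of the same per-frame update written both ways (the `Point` type and function names are my own, not from the post):

```typescript
type Point = { x: number; y: number };

// Mutates each point in place: zero allocations per tick.
function updateInPlace(points: Point[], dx: number): Point[] {
  for (let i = 0; i < points.length; i++) {
    points[i].x += dx;
  }
  return points;
}

// The "idiomatic React" version: a fresh array and a fresh object
// for every point on every tick, which the GC then has to clean up.
function updateCopying(points: Point[], dx: number): Point[] {
  return points.map((p) => ({ ...p, x: p.x + dx }));
}
```

With tens of thousands of points updated 60 times a second, the copying version spends a noticeable chunk of each frame allocating and collecting garbage; the in-place version trades React's immutability conventions for steady frame times.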

Edit: there’s also a lot you can do with how things are nested. The flatter your DOM, the fewer nodes React has to touch on every update.

Here’s a fun stress test I built a while back that goes up to a million nodes rendered by React https://swizec.com/blog/a-better-react-18-starttransition-de...


This is a very good question, as performance is a key struggle in data visualization. Swizec's answer below is great.

I will write more about performance soon. But to put it in a nutshell: using several layers of canvas is definitely the way to go in most situations.
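The layered-canvas idea usually means stacking two (or more) canvas elements with CSS: a static layer (axes, gridlines) painted once, and a data layer cleared and repainted every frame. A browser-only sketch, assuming two absolutely-positioned canvases with ids I made up for illustration:

```typescript
// Two stacked canvases: "static" for axes drawn once, "dynamic" redrawn
// per frame. Element ids and drawPoints are illustrative, not from the post.
const staticCtx = (document.getElementById("static") as HTMLCanvasElement)
  .getContext("2d")!;
const dynamicCtx = (document.getElementById("dynamic") as HTMLCanvasElement)
  .getContext("2d")!;

// Axes and frame: painted once, never touched again.
staticCtx.strokeRect(40, 10, 400, 300);

// Per-frame repaint only clears and redraws the dynamic layer,
// so the expensive static chrome is never re-rendered.
function drawPoints(points: { x: number; y: number }[]) {
  dynamicCtx.clearRect(0, 0, 440, 320);
  for (const p of points) {
    dynamicCtx.fillRect(p.x, p.y, 2, 2);
  }
}
```

The layers need `position: absolute` on the same parent so they overlap; the browser composites them for free, which is the whole trick.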



