DenseNet is a pretty big deal, and I am surprised some version of FCN/U-Net is not there, nor siamese networks. Capsules are still quite young and the Transformer is pretty specific; I don't know that they have had a large impact yet.
The SVM one is a little odd, I think. Shouldn't all linear learners (including naive Bayes, logistic regression, the perceptron, and SVM) share the same architecture in terms of nodes? As far as I understand it, the difference is rather in how the discriminator is learnt.
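To make that concrete, here is a minimal sketch (plain Python, my own names and update rules, not from the chart) of the perceptron and logistic regression computing the identical single linear node, with only the weight update differing:

```python
import math

def linear_node(w, b, x):
    # The shared "architecture": one node computing w.x + b
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def perceptron_update(w, b, x, y, lr=0.1):
    # Perceptron rule: y in {-1, +1}; update only on a misclassified point
    if y * linear_node(w, b, x) <= 0:
        w = [wi + lr * y * xi for wi, xi in zip(w, x)]
        b = b + lr * y
    return w, b

def logistic_update(w, b, x, y, lr=0.1):
    # Logistic regression rule: y in {0, 1};
    # gradient step on the log-loss of sigmoid(w.x + b)
    p = 1.0 / (1.0 + math.exp(-linear_node(w, b, x)))
    w = [wi - lr * (p - y) * xi for wi, xi in zip(w, x)]
    b = b - lr * (p - y)
    return w, b
```

The node diagram would look the same either way; the learning rule is where the algorithms actually diverge.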
I love how they have clearly illustrated that the actual details of the nodes themselves can differ greatly between "network types".
As a non-expert, I have found this distinction is usually buried in general descriptions, which tend to focus first on topology and behaviour. I prefer to know those details upfront so I can see the behaviour as something that emerges from them as I learn; otherwise, reading about the behaviour sounds more like magic.
No.
The hard part is how to deploy without cloud and without GPU.
Many papers show how to quantize weights from float to binary for super-efficient, faster inference, but nobody has open-sourced their deployment-ready ARM CPU kernels or FPGA HDL code showing how to do so.
What? Surely deployment is by far the easiest part of machine learning? The learning is the hard part. If you are resource-constrained, then learning a small/efficient model is even harder, obviously. As for binary math, usually all you need is the standard bitwise operators and popcount.
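To make the binary-math point concrete, here's a minimal sketch (entirely my own illustration, not anyone's shipped kernel) of a binarized dot product using XOR and popcount on packed ±1 vectors:

```python
def pack(vec):
    """Pack a list of +/-1 values into an integer bit mask (+1 -> 1, -1 -> 0)."""
    bits = 0
    for i, v in enumerate(vec):
        if v == 1:
            bits |= 1 << i
    return bits

def binary_dot(a_bits, b_bits, n):
    """Dot product of two packed +/-1 vectors of length n.
    Matching bits contribute +1, differing bits -1, so:
    dot = n - 2 * popcount(a XOR b)."""
    diff = bin(a_bits ^ b_bits).count("1")  # popcount of differing positions
    return n - 2 * diff

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
assert binary_dot(pack(a), pack(b), len(a)) == sum(x * y for x, y in zip(a, b))
```

A fast ARM or FPGA kernel would presumably just vectorize this same XOR-and-popcount inner operation over wider words.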