They are doing such good work. That was amazing to read, and I learned something about finding hidden instructions by thinking about how a HW engineer would encode the bits.
Hey, great work you did there. I'd also like to recommend nand2tetris and its companion book, The Elements of Computing Systems, if you'd like to dig deep into how a CPU is actually implemented via HDL.
I agree this would be a great addition. If I had a minor complaint about the work presented here, it's that it starts "in the middle": it pushes down to CPU opcodes without describing how the machine-language codes are actually defined. It's typically easier to understand if you start at either the very top (like: how does "Hello, World!" actually get executed?) or the very bottom (though I'd argue you could stay above the physics of semiconductors, at the chip level).
There was a talk by a researcher who said they could track the progress being made on ChatGPT by how much success it had drawing a unicorn in LaTeX. What stuck out to me was that he said the safer the model got, the worse it got at drawing a unicorn.
My understanding is that the performance benefits come from data locality: usually one array per component, whereas in an OOP system your objects would be scattered throughout the heap.
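Roughly what I mean, as a minimal C++ sketch (the type and function names are just illustrative, not from any particular engine): one contiguous vector per component versus one heap allocation per object.

    #include <cstdio>
    #include <memory>
    #include <vector>

    // Hypothetical component type: an entity's position.
    struct Position {
        float x = 0, y = 0, z = 0;
    };

    // Data-oriented layout: one contiguous array per component.
    // A pass over all positions walks memory linearly, so most accesses
    // hit cache lines that are already loaded or prefetched.
    struct PositionStore {
        std::vector<Position> positions;  // values packed back to back

        void integrate(float dx) {
            for (Position& p : positions) p.x += dx;
        }
    };

    // OOP-style layout: each object is its own heap allocation, so the
    // same pass chases pointers scattered across the heap and tends to
    // pay a cache miss per object.
    struct GameObject {
        Position position;
        // ... other fields the loop never touches
    };

    struct World {
        std::vector<std::unique_ptr<GameObject>> objects;

        void integrate(float dx) {
            for (auto& obj : objects) obj->position.x += dx;
        }
    };

    int main() {
        PositionStore store;
        World world;
        for (int i = 0; i < 1000; ++i) {
            store.positions.push_back(Position{});
            world.objects.push_back(std::make_unique<GameObject>());
        }
        store.integrate(1.0f);  // contiguous: cache-friendly
        world.integrate(1.0f);  // pointer-chasing: cache-hostile
        std::printf("%f %f\n", store.positions[0].x, world.objects[0]->position.x);
    }

Both loops do the same arithmetic; the difference is purely where the data lives, and that's where the cache-miss cost shows up.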
Wired Magazine during 1995, 1996, and 1997 had something a bit magical about it, the way I remember it. You could sort of feel, while reading it, that it was a harbinger of great, great things to come, both from technology in general and from the fusion of personal computing and the internet in particular. It was generally a pleasure to read the magazine in those days.