I am thinking about Tesla with Dojo and Tenstorrent.
Both have a similar architecture (at different scales) where they ditch most of the VRAM in favor of a fabric of identical cores with their own on-chip memory.
Instead of being limited by VRAM bandwidth, they run at on-chip speed.
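A rough roofline-style back-of-envelope shows why that trade matters (all numbers below are made up for illustration, not vendor specs):

```python
# Roofline model: a kernel is memory-bound when its arithmetic intensity
# (FLOPs per byte moved) is below the hardware's compute/bandwidth ratio.

def attainable_tflops(peak_tflops, bandwidth_tbps, flops_per_byte):
    """Attainable throughput = min(peak compute, bandwidth * intensity)."""
    return min(peak_tflops, bandwidth_tbps * flops_per_byte)

# Hypothetical GPU: high peak compute, but operands stream from off-chip VRAM.
gpu = attainable_tflops(peak_tflops=300, bandwidth_tbps=2, flops_per_byte=10)

# Hypothetical fabric-of-cores chip: lower peak, but operands sit in on-chip
# memory with far higher aggregate bandwidth.
fabric = attainable_tflops(peak_tflops=100, bandwidth_tbps=50, flops_per_byte=10)

print(gpu)     # 20  -> bandwidth-bound: 2 TB/s * 10 FLOP/B caps it
print(fabric)  # 100 -> compute-bound: it actually reaches its peak
```

The point is just that once data lives next to the cores, the memory term stops being the ceiling.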
Nvidia/Intel/AMD/Apple/Google and others surely have plans underway.
As demand for AI grows (it's now clear there is a huge market), I think we will see more players enter this field.
The software landscape will shift dramatically. How many of the CPUs running in datacenters today will be AI chips in the future? I think most of them.
Jim Keller has a few good interviews about it.