It will, but that's also an XY problem. We're seeing ridiculous energy costs and supply-chain issues for crypto and AI because they're built on the wrong architecture: GPU/SIMD.
I got my computer engineering degree back in the 90s because superscalar VLSI was popular and I wanted to design highly concurrent multicore CPUs with 256 cores or more. Had GPUs not totally dominated the market, the approach of Apple's multicore M1 line, with local memories, would have arrived in the early 2000s, instead of the smartphone revolution, which prioritized low cost and low energy use above all else. We would have had 1,000-core machines by 2010 and 100,000 to 1 million-core machines by 2020, for under $1000 at today's cost per transistor. They would have been programmed in languages like Erlang/Go, MATLAB/Octave, and Julia/Clojure, using an auto-parallelized, scatter-gather, immutable functional approach where a single thread of execution distributes all loops and conditional logic across the cores and joins the results under a synchronous, blocking programming model. Basically the opposite of where the tech industry has gone with async (today's goto).
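To make that programming model concrete, here's a minimal sketch in Go (one of the languages I mentioned). The helper name `parallelMap` is mine, not any real library, and in the world I'm describing an auto-parallelizing compiler would do this distribution for you instead of you writing the helper by hand:

```go
// Minimal scatter-gather sketch: one logical thread of execution
// scatters loop iterations across worker goroutines, then blocks
// until every result has been gathered, so the caller still sees
// a synchronous, blocking programming model.
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelMap (hypothetical helper) applies f to each element of in,
// spreading the work across all available cores. The input slice is
// treated as immutable; each worker writes only to its own output slots.
func parallelMap(in []float64, f func(float64) float64) []float64 {
	out := make([]float64, len(in))
	workers := runtime.NumCPU()
	chunk := (len(in) + workers - 1) / workers

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo := w * chunk
		hi := lo + chunk
		if lo >= len(in) {
			break
		}
		if hi > len(in) {
			hi = len(in)
		}
		wg.Add(1)
		go func(lo, hi int) { // scatter this chunk to a worker
			defer wg.Done()
			for i := lo; i < hi; i++ {
				out[i] = f(in[i])
			}
		}(lo, hi)
	}

	wg.Wait() // gather: the single main thread blocks until all workers join
	return out
}

func main() {
	xs := []float64{1, 2, 3, 4, 5, 6, 7, 8}
	squares := parallelMap(xs, func(x float64) float64 { return x * x })
	fmt.Println(squares) // [1 4 9 16 25 36 49 64]
}
```

Scale the worker count from a handful of cores to a million and the caller's code doesn't change, which is the whole point of the synchronous model.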
That put us all on the wrong path and left us where we are today: relatively OK LLMs trained on data drawn from surveillance capitalism. Instead, we could have had a democratized AI model, with multiple fabs producing big dumb multicore CPUs and people training models at home on distributed learning systems similar to SETI@home.
Now it's too late, and thankfully nobody cares what people like me think anyway. So the GPU status quo is cemented for the foreseeable future, and challengers won't be able to unseat established players like Nvidia. The only downside is having to live in the wrong reality.
Multiply this change of perception across all of tech. I like to think of living in a bizarro reality like this one as the misanthropic principle.