There hasn't been any kind of paradigm shift, but there have been a bunch of real, though incremental, improvements: better optimization algorithms, interesting and novel neural network layers.
Some of this is even motivated by mathematical theory, even if you can't prove anything in the setting of large, complex models on real-world data.
The quote from Hinton is something like: neural networks needed a 1000x improvement over the 90s, and the hardware got 100x better while the algorithms and models got 10x better.
So I guess this leads back to my original point. If you're a school, nothing has really changed that would require you to invest gobs of money to teach AI. It only matters if your idea of research is trying to create something you can take to market.