By incorporating DyT, Transformers without normalization can match or exceed the performance of their normalized counterparts, mostly without hyperparameter tuning.
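For concreteness, DyT is just an elementwise tanh with a learnable scalar scale plus a per-channel affine, dropped in wherever a LayerNorm would go. A minimal sketch in PyTorch (class name, type hints, and the init value for `alpha` are illustrative, not the paper's exact code):

```python
import torch
import torch.nn as nn

class DyT(nn.Module):
    """Dynamic Tanh: gamma * tanh(alpha * x) + beta, a drop-in for LayerNorm."""
    def __init__(self, num_features: int, alpha_init: float = 0.5):
        super().__init__()
        self.alpha = nn.Parameter(torch.full((1,), alpha_init))  # learnable scalar
        self.gamma = nn.Parameter(torch.ones(num_features))      # per-channel scale
        self.beta = nn.Parameter(torch.zeros(num_features))      # per-channel shift

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Purely elementwise: no statistics are computed over any dimension.
        return self.gamma * torch.tanh(self.alpha * x) + self.beta
```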
I suppose normalization kernels do involve reductions (the mean and variance are computed across the feature dimension for each token), but how hard are reductions in 2025?
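For reference, here's the reduction in question, written as a plain-PyTorch LayerNorm so the per-token mean/variance pass is explicit (a sketch for illustration, not any library's kernel):

```python
def layer_norm_reference(x: torch.Tensor, gamma: torch.Tensor, beta: torch.Tensor,
                         eps: float = 1e-5) -> torch.Tensor:
    # Two reductions over the feature dimension per token: mean and variance.
    mean = x.mean(dim=-1, keepdim=True)
    var = x.var(dim=-1, unbiased=False, keepdim=True)
    return gamma * (x - mean) / torch.sqrt(var + eps) + beta
```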