Maybe the real game changer in the future will be the ability to train the same model on very different kinds of inputs: video, images, text, audio... Imagine all the data cleaning tasks being automated as well: you just feed the model PDFs and a support model automatically extracts all the relevant metadata... or maybe you'll just be able to select a set of books from an online library and your model will train on those too (for a non-trivial subscription, of course, lol)
Perhaps it will be trained on whole videos, or on a combination of inputs from agents that move around in the real world or in a video game.