One thing I've really internalized since IBM Watson is that the first reports of any breakthrough will always be the most exaggerated. For a report to get amplified it has to be either genuinely impressive or exaggerated, and exaggeration is easier. That is to say, if you model the process as a slowly increasing "merit term" plus a random "error term", the first samples that cross a threshold will always have unusually large error terms.
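As a rough illustration of that selection effect, here's a minimal toy simulation (the threshold, noise level, and step count are made-up numbers, not anything from the comment above): merit rises slowly, each report gets a random error, and we record the error term of whichever sample first crosses the threshold.

```python
import random

random.seed(0)

THRESHOLD = 1.0
N_RUNS = 10_000
errors_at_first_crossing = []

for _ in range(N_RUNS):
    for step in range(1, 201):
        merit = step / 200            # slowly increasing merit term
        error = random.gauss(0, 0.3)  # zero-mean random error term
        if merit + error >= THRESHOLD:
            errors_at_first_crossing.append(error)
            break

avg_error = sum(errors_at_first_crossing) / len(errors_at_first_crossing)
print(f"average error term at first threshold crossing: {avg_error:+.2f}")
# Comes out well above zero: the first samples to cross the threshold are
# systematically the ones whose random error happened to be large.
```

Even though the error term is zero-mean, conditioning on "first to cross the threshold" selects for large positive errors, which is exactly why the earliest reports look better than the underlying merit warrants.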
For this reason, hype-driven/novelty-driven sites like HN usually overestimate initial developments, because they overestimate the merit term, and then underestimate later developments, because their earlier experience has taught them to overestimate the error term.
Deep learning systems have exceeded the hype. In 2016 we saw the potential with AlphaGo's win over Lee Sedol, but no one could foresee the capabilities of LLMs (which are also deep learning models).
Not saying it is that egregious, but it's a slippery slope from "well, it didn't do all these different things out of the box, unsupervised" to dismissing the progress altogether.