
Without having tried it, what keeps surprising me is how apparently widely different architectures (and in other cases training data) lead to very similar outcomes. I'd expect results to vary a lot more.


I would expect a lot of attempts to fail, and those tend not to be published, or gather less attention. So if we have reached a local optimum, any technique that gets close to the current benchmarks is worth publishing as soon as its results reach that point; all the ones that fall too far short are discarded. In the end, all the papers you see are close to the current status quo.

It's possible that some of these new architectures / optimizations would allow us to go beyond the current benchmark scores, but probably only with more training data and money. And to get money you need to show results, which is what you see today. Scaling remains king; maybe one of these techniques is 2025's "attention" paper, but even that one needed a lot of scaling to go from the 2017 version to ChatGPT.


It doesn't look like it got pushed that hard, unfortunately. The article says they only added 20k examples to fine-tune at the end, but maybe the ceiling is much higher for diffusion?

But yeah, RWKV also ends up in a similar performance range at similar sizes - I wish someone would finally start using it at scale...


But if the limiting factor is the data on which the models are trained and not the actual “computation”, then this would be exactly what you'd expect, right?


The data might be the limiting factor for current transformer architectures, but there's no reason to believe it's a general limiting factor for any language model (e.g. human brains are "trained" on orders of magnitude less data and still generally perform better than any model available today).


That depends on whether these current learning models can really generalise, or whether they can only interpolate within their training set.
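
A toy way to make that distinction concrete (a hypothetical sketch, not from the thread; the model, target function and ranges are all assumptions): train a small regressor only on inputs from [0, 1], then compare its error inside that range (interpolation) with its error on [1, 2] (extrapolation).

    # Sketch: interpolation vs. extrapolation for a small regressor.
    # Assumes numpy and scikit-learn; the MLP, sin() target and ranges are illustrative only.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X_train = rng.uniform(0.0, 1.0, size=(2000, 1))
    y_train = np.sin(2 * np.pi * X_train[:, 0])            # ground-truth function

    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    model.fit(X_train, y_train)

    X_in = rng.uniform(0.0, 1.0, size=(500, 1))             # inside the training range
    X_out = rng.uniform(1.0, 2.0, size=(500, 1))            # outside the training range

    err_in = np.mean((model.predict(X_in) - np.sin(2 * np.pi * X_in[:, 0])) ** 2)
    err_out = np.mean((model.predict(X_out) - np.sin(2 * np.pi * X_out[:, 0])) ** 2)
    print("in-range MSE:", err_in)
    print("out-of-range MSE:", err_out)                     # typically much larger

The out-of-range error is usually far larger, which is the behaviour people mean by "interpolating within the training set" rather than generalising.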



