Could you explain why you think that? I'm looking at the lottery ticket section and it seems like he doesn't disown it; the reason he gives, via Abhinav, for not pursuing it at his commercial job is just that that kind of sparsity is not hardware-friendly (except on Cerebras). "It doesn't provide a speedup for normal commercial workloads on normal commercial GPUs, and that's why I'm not following it up at my commercial job and don't want to talk about it" seems pretty far from "disowning the lottery ticket hypothesis [as wrong or false]".
I think that was pretty clear even when this paper came out: even if you could find these subnetworks, they wouldn't be faster on real hardware. Never thought much of this paper, but it sure did get a lot of people excited.
It is real in that it exists. It is not real in the sense that almost nobody has access to it. Unless you work at one of the handful of organizations with their hardware, it's not a practical reality.
They have a strange business model. Their chips are massive, so they necessarily only sell them to large customers. Also, because of the way they're built (the entire wafer is a single chip), no two chips will be the same. Normally, imperfections in manufacturing result in some parts of the wafer being rejected and others being binned as fast or slow chips. If you use the whole wafer, you get what you get. So it's necessarily a strange platform to work with: every device is slightly different.
cool beans, thanks for this -- I think it's easier to hear it directly from the authors. I was hesitant to start researchposting and come off like a dick.
also, note to self: if I publish and disown my papers, shawn will interview me :)
What evidence against it do you have in mind? I think it's a result of little practical relevance without a way to identify winning tickets that doesn't require buying lots of tickets until you hit the jackpot (i.e., training a large, dense model to completion), but that doesn't make the observation itself incorrect.
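For anyone who hasn't read the paper, the "buying lots of tickets" procedure is roughly iterative magnitude pruning with rewinding. A minimal PyTorch sketch, where `train_to_completion` is a placeholder for your own training loop (nothing here is the authors' actual code):

```python
# Sketch of the winning-ticket search: train, prune the smallest weights,
# rewind the survivors to their original init, repeat. Illustrative only.
import copy
import torch

def find_winning_ticket(model, train_to_completion, rounds=5, prune_frac=0.2):
    init_state = copy.deepcopy(model.state_dict())  # remember the init
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}

    for _ in range(rounds):
        train_to_completion(model, masks)  # the expensive "buy a ticket" step

        with torch.no_grad():
            for name, p in model.named_parameters():
                # Drop the smallest-magnitude weights still alive in this tensor.
                alive = p[masks[name].bool()].abs()
                cutoff = alive.quantile(prune_frac)
                masks[name] *= (p.abs() > cutoff).float()

            # Rewind surviving weights to their original initialization.
            model.load_state_dict(init_state)
            for name, p in model.named_parameters():
                p *= masks[name]

    return model, masks  # the "winning ticket": original init + mask
```

Each round costs a full training run, which is exactly why it's of little practical relevance without a cheaper way to find the ticket.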
The observation itself is also partially incorrect. Here's a video I watched a few months ago that goes further into the whole question of how you deal with subnetworks.
At the timestamp they discuss how the original ICLR results actually only held for extremely tiny models; on larger ones the method didn't work. The adaptation you need to sort of fix it is to train densely first for a few epochs; only then can you start increasing sparsity.
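If I'm summarizing the talk correctly, that adaptation looks something like gradual magnitude pruning. A rough PyTorch sketch with an illustrative cubic ramp (the schedule shape is from Zhu & Gupta's gradual pruning paper, not necessarily what the video uses):

```python
# Dense warmup, then ramp sparsity up gradually instead of masking at init.
import torch

def target_sparsity(step, warmup_steps, ramp_steps, final_sparsity=0.9):
    if step < warmup_steps:
        return 0.0  # dense warmup: no pruning at all
    t = min(1.0, (step - warmup_steps) / ramp_steps)
    return final_sparsity * (1.0 - (1.0 - t) ** 3)  # ease toward the target

@torch.no_grad()
def apply_magnitude_mask(model, sparsity):
    if sparsity <= 0.0:
        return
    # Global threshold: zero the smallest |w| across the whole model.
    flat = torch.cat([p.abs().flatten() for p in model.parameters()])
    cutoff = flat.quantile(sparsity)
    for p in model.parameters():
        p[p.abs() <= cutoff] = 0.0

# In the training loop, after each optimizer step:
#   apply_magnitude_mask(model, target_sparsity(step, 2000, 20000))
```

Since the threshold is recomputed every step, a zeroed weight can regrow if its gradient pushes it back up; the warmup just gives the network time to decide which weights matter before any of them get cut.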
Ioannou is saying the paper's idea for training a sparse network doesn't work in non-toy networks (the paper's method for selecting promising weights early doesn't improve the network).
BUT the term "lottery ticket" refers to the true observation that a small subset of weights drives functionality (see all the pruning papers). It's great terminology because they truly are coincidences based on random numbers.
All that's been disproven is that paper's specific method for creating a sparse network based on this observation.
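The observation itself is trivial to verify: one-shot magnitude-prune an already-trained model and check how little accuracy drops. A hypothetical sketch, where `evaluate` stands in for whatever eval function you have:

```python
# Check that a small subset of weights carries most of the function:
# keep only the largest 10% by magnitude and re-evaluate.
import copy
import torch

@torch.no_grad()
def prune_and_score(trained_model, evaluate, sparsity=0.9):
    model = copy.deepcopy(trained_model)
    flat = torch.cat([p.abs().flatten() for p in model.parameters()])
    cutoff = flat.quantile(sparsity)
    for p in model.parameters():
        p[p.abs() <= cutoff] = 0.0
    return evaluate(model)
```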
The article makes no compelling points to me as an avid user of these applications.
I would rather shove ice picks covered in lemon juice into my eyes than provide Java or Ellison any more room in the digital ecosystem. And I'm not talking politics here wrt Ellison; he's just awful.
Someone else on the page commented about Oracle. Why are there still people hung up on Oracle or Ellison when, if anything, they've helped Java thrive?
The real threat has been and continues to be ... Google. They pulled a Microsoft move (one that Microsoft got busted for), and Google got away with it. Google killed Eclipse as the IDE for Android development and threw that business over to their Russian buddies at JetBrains.
I'm a progressive -- just as I am not dumping my climate-friendlier Tesla at a loss because Musk is a Nazi buffoon, there is no way I am walking away from my GraalVM-compiled babashka binary because another billionaire turd kicked Stephen Colbert off The Late Show. I can mourn and label both as petulant and stupid without having to bleed my back like Saint Thomas More.
Unironically, they are indeed somewhat safer -- though if people are willing to accept AI-based fortune telling as a substitute good... which I have seen lately...
No, but it provides a framework to begin thinking about ways we can protect the vulnerable from these contemptible but totally predictable bad actors.
For example, families forced to publicly beg for money to provide their sick children with treatment. What societal structures enable this situation to occur? Who is profiting off of this structure?
I appreciate you doing this and sharing it. I had a similar experience with Rust and a tokenization library (BERTScore) and realized it was better to let the barely worse method stand, because the effort wasn't worth the long-term maintenance.
I love Typst and I tried to use it recently when shipping publications to EMNLP, AAAI, and NeurIPS. While there were a lot of upsides to it, things got very bad once the teams grew beyond just a few people. Typst is incredible for a single person or a trio, but the web experience is not there yet for collaboration. I'm really hoping Typst continues to improve, and I plan to use it whenever I can for smaller projects or anything that won't involve professors or students who aren't interested in learning new tools at publication time.