
I think putting AlphaFold here was premature; it might not age well. AlphaFold is an impressive achievement, but it simply has not "cracked the code for protein folding" - about 1/3rd of its predictions are too uncertain to be usable, it says nothing about dynamics, it suffers from the usual ML problem of failing on uncommon structures, and I was surprised to learn that many of its predictions are incorrect because it ignores topological constraints[1]. To be clear, these are constructive criticisms of AlphaFold in isolation; my grumpiness is directed at the Nobel committee. "Cracked the code for protein folding" is simply not true; it is an ML approach with high accuracy that suffers the usual ML limitations of failing to generalize and of missing deeper principles, like R^3 topology, that cannot be gleaned stochastically.

More significantly: it has yet to be especially impactful in biochemistry research, nor have its results really been carefully audited. Maybe it will turn out to deserve the prize, but the committee needed to wait. I am concerned that they got spun by Google's PR campaign - or, considering yesterday's prize, by Big Tech PR in general.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10672856/



I think looking back five years from now, this will be viewed as another Kissinger/Obama but wrt STEM. Given far too prematurely under pressure to keep up with the Joneses/chase the hype.


I am not so confident or dismissive: the real problem is that testing millions of predictions (or any fairly bold scientific development like AlphaFold) takes time, and that time simply has not elapsed. Some of the criticisms I identified might be low-hanging fruit that in 5 years will be seen as minor corrections - but we're still discovering the things that need to be corrected. It is concerning that the prize announcement itself is either grossly overstated:

  With its help, they have been able to predict the structure of virtually all the 200 million proteins that researchers have identified [the word 'virtually' is stretched into meaninglessness]
or vague, could have been done with other tools, and hardly Nobel-worthy:

  Among a myriad of scientific applications, researchers can now better understand antibiotic resistance and create images of enzymes that can decompose plastic.
I am seriously wondering if they took Google / DeepMind press releases at face value.


Chew on this a little; I stripped out as much as possible, but I imagine it will still feel reflexively easy to dismiss. Partially because it's hard to hear criticism, at least for me. Partially because a lot was stripped out: a lot has gone sideways to get us to this point, so this may sound minor.

The fact that you have to reach for "I [wonder if the votes were based on] Google / DeepMind press releases [taken] at face value" should be a blaring red alarm.

It creates a new premise[1] that enables continued permission to seek confirmation bias.

I was once told you should check your premises when facing an unexpected conclusion, and to do that before creating new ones. I strive to.

[1] All Nobel Prize voters choose their support based on reading a press release at face value


I have the same views as you (although admittedly the Kissinger comparison didn't convey that, because we all know how that turned out). It's at best quite premature; at worst, it will look in hindsight like it should never have been given. It will probably land somewhere in between.

The second point is spot on. I really, really hope they didn't just fall for what is frankly a bit of an SV-style press release meant to hype things. Similar work was done on crystal structures, with some massive number reported. That is a vastly different thing from the implied meaning that they are now fully understood and usable in some way.


The original intent of the prizes, however, is to reward those who have recently contributed good - not to be a lifetime award given after seeing how things pan out.


Yes, but the good has to be extraordinary. If there's logic to it, in these cases they are predicting that the good will come much later - which is an incredibly difficult prediction to make.


I would say AlphaFold is to structure prediction as CRISPR is to gene editing.

CRISPR did not solve gene editing either, but it made gene editing accessible to the broader biochemistry and biology research community.

Both had a similar impact and changed their fields significantly.


Entire fields are now based on the existence of CRISPR; it has demonstrated its impact. AlphaFold has been out for 2? 3? years, and people who were writing papers anyway have adopted it, but it hasn't exactly spawned a new area.


Is this also a problem with AlphaFold 2, or just the original AlphaFold?


The topology issue was tested with AlphaFold 2.3.2, published last year. I am not sure about AlphaFold 3.



