
Wouldn't the fact that another group researched ABCA1 validate that the assistant did find a reasonable topic to research?

Ultimately we want effective treatments but the goal of the assistant isn't to perfectly predict solutions. Rather it's to reduce the overall cost and time to a solution through automation.



Not if (a) it misses that a line of research was refuted 1-2 years ago, (b) the experiments it recommends (RNA-Seq) are a limited resource that requires a whole lab to be set up to act on efficiently, and (c) the result of the work is genetic upregulation of a gene, which could mean just about anything.

Genetic regulation can at best tell us that a gene is _involved_, but nothing about why. Some examples of why a gene might be involved: it's a compensation mechanism (good!), it modulates the timing of the actual critical processes (discovery worthy but treatment path neutral), it is causative of the disease (treatment potential found), etc...

We don't need pipelines for faster scientific thinking ... especially if the result is that experts will have to re-validate each finding. Most experts are, in any case, truly limited by access to models or access to materials. I certainly don't have a shortage of "good" ideas, and no machine will convince me they're wrong without doing the actual experiments. ;)


This is, I think, what I've been struggling to get across to people: while some domains have problems that you can test entirely in code, there are a lot more where the physical world is too resource-constrained for an experiment-free researcher to have any value.

There's practically negative utility in detecting archeological sites in South America, for example: we already know about far more than we could hope to excavate. The ideas aren't the bottleneck.

There's always been an element of this in AI: RL is amazing if you have some way to get ground truth for your problem, and a giant headache if you don't. And so on. But I seem to have trouble convincing people that sometimes the digital world is insufficient.


This is a great framing - would you please expound on it a bit? Software is almost exclusively gated by the "thinking" step, except for very large language models, so it would be helpful to understand the gates ("access to models or access to materials") in more detail.



