
>Unfortunately, this is likely going to be a problem till strong AI becomes a thing, since there's no real way to judge content quality automatically without it.

While this is true in absolute terms, there are ways to improve that I'm not aware of Google pursuing (not that they aren't; I have no visibility into their internal workings). Google's algorithms seem to treat linking as the supreme signal, and they don't appear to do any research of their own. I think a research/polling program that asks users to rate some content would greatly improve search results. Other signals, such as the average reading level of a page's text, could also be used to infer the type and quality of its content; a rough sketch of that idea follows below.
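
To make the reading-level idea concrete, here is a minimal Python sketch (not anything Google is known to do) using the standard Flesch-Kincaid grade formula; the syllable counter is a crude vowel-group heuristic, so treat the output as a rough signal rather than a precise measurement:

    import re

    def count_syllables(word: str) -> int:
        # Rough heuristic: count runs of consecutive vowels; every word counts as at least one syllable.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_kincaid_grade(text: str) -> float:
        # Standard Flesch-Kincaid grade formula:
        #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        if not sentences or not words:
            return 0.0
        syllables = sum(count_syllables(w) for w in words)
        return (0.39 * (len(words) / len(sentences))
                + 11.8 * (syllables / len(words))
                - 15.59)

    if __name__ == "__main__":
        sample = ("The quick brown fox jumps over the lazy dog. "
                  "It was not a particularly remarkable event.")
        print(f"Estimated grade level: {flesch_kincaid_grade(sample):.1f}")

A ranking system could combine a score like this with other page-level features; on its own it only hints at the kind of audience a page is written for.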

Overall, we need more humanity in the search results and less blind faith in academic theory. This is a lapse in judgment frequently made by academics and mathematicians: they refuse to accept the evidence of failure in front of them (because their theory/model/whatever says no such failure should exist) until the failure reaches catastrophic, impossible-to-ignore proportions. We need more humanity in a lot of systems, in my estimation.


