
It's so over, pretraining is NGMI. Maybe Sam Altman was wrong after all? https://www.lycee.ai/blog/why-sam-altman-is-wrong


>"I also agree with researchers like Yann LeCun or François Chollet that deep learning doesn't allow models to generalize properly to out-of-distribution data—and that is precisely what we need to build artificial general intelligence."

I think "generalize properly to out-of-distribution data" is too weak of criteria for general intelligence (GI). GI model should be able to get interested about some particular area, research all the known facts, derive new knowledge / create theories based upon said fact. If there is not enough of those to be conclusive: propose and conduct experiments and use the results to prove / disprove / improve theories. And it should be doing this constantly in real time on bazillion of "ideas". Basically model our whole society. Fat chance of anything like this happening in foreseeable future.


Most humans are generally intelligent but can't do what you just said AGI should do...


Excluding the real-time-ness, humans do at least possess the capacity to do so.

Besides, humans are capable of rigorous logic (which I believe is the most crucial aspect of intelligence), something I don't think an agent without a proof system can do.
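To make "proof system" concrete, a minimal Lean sketch (my own illustration, not something from the thread): the checker accepts a conclusion only when every step actually follows from the rules, which is the kind of rigor being pointed at.

```lean
-- Machine-checked rigor: Lean accepts this theorem only because the proof
-- term really does establish the claim; a bogus step would be rejected
-- at compile time rather than passed along as plausible-sounding text.
theorem my_add_comm (a b : Nat) : a + b = b + a := Nat.add_comm a b
```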


Yes, the problem is that there is no consensus about what AGI should be: https://medium.com/@fsndzomga/there-will-be-no-agi-d9be9af44...


Uh, if we do finally invent AGI (I am quite skeptical; LLMs feel like the chatbots of old: invented to solve an issue, never really solving that issue, just its symptoms, while the issue was never really understood to begin with), it will be able to do all of the above, at the same time, far better than humans ever could.

Current LLMs are a waste, and quite a step back compared to older machine-learning models, IMO. I wouldn't necessarily have a huge beef with them if billions of dollars weren't being spent shoving them down our throats.

LLMs actually do have real uses, but none of the pitched stuff does them justice.

Example: imagine knowing you had the cure for cancer, but discovering you could make way more money by declaring it the solution to all of humanity's problems; then imagine you shoved that claim down everyone's throats and ignored the cancer-cure part...


AI skeptics have predicted 10 of the last 0 bursts of the AI bubble. Any day now...


Out of curiosity, what timeframe are you talking about? The recent LLM explosion, or the decades-long run of AI research?

I consider myself an AI skeptic, and as soon as the hype train went full steam I assumed a crash/bubble burst was inevitable. Still do.

With rare exceptions, I don't know of anyone who expected the bubble to burst so quickly (within two years). Ten times in the last two years would be a burst roughly every two and a half months; maybe I'm blinded by my own bias, but I don't see anyone calling out that many dates.


Yes, the bubble will burst, just like the dotcom bubble burst 25 years ago.

But that didn't mean the internet should be ignored, and the same holds true for AI today, IMO.


I agree LLMs should not be ignored, but there is a planet-sized chasm between being ignored and the attention they currently get.



