
Depends on how you define "step". Engineer a 10x/100x version of what we have in terms of LLMs (either by being more efficient and/or with more/specialized hardware) and let it build novel attempts at AGI algorithms 24/7 in an evolutionary setting.

I guess the challenge is more to agree on a fitness function to measure "AGI progress" against, but that's a different topic. In general, though, scaling up the current GenAI tech and parallelizing/specializing the models in a multi-generational way _should_ be a safe ticket to AGI, but the time scale is unknown of course (since we can't even agree on the goal definition).
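
For concreteness, here is a minimal sketch of the kind of evolutionary loop I mean, in plain Python. The fitness function agi_progress_score is entirely hypothetical, a placeholder standing in for exactly the unsolved "agree on a fitness function" problem, and the candidates are just vectors of floats rather than actual algorithms:

    import random

    def agi_progress_score(candidate):
        # Hypothetical: would score a candidate "AGI algorithm" against
        # some agreed-upon benchmark suite. Defining this is the open
        # problem; here it is a trivial placeholder.
        return sum(candidate) / len(candidate)

    def mutate(candidate):
        # Randomly perturb one "gene" of a copy of the candidate.
        child = list(candidate)
        i = random.randrange(len(child))
        child[i] += random.gauss(0, 0.1)
        return child

    def evolve(pop_size=50, genome_len=10, generations=100):
        # Random initial population.
        population = [[random.random() for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # Keep the fittest half, refill with mutated survivors.
            population.sort(key=agi_progress_score, reverse=True)
            survivors = population[:pop_size // 2]
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(pop_size - len(survivors))]
        return max(population, key=agi_progress_score)

    best = evolve()

The loop itself is trivial; all the difficulty hides in agi_progress_score and in what a "candidate" even is.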



I like this comment because I think it highlights the exact difference between AI optimists and AI cynics.

I think you'll find that AGI cynics do not agree at all that "engineering a 10x/100x version" of what we have and making it attempt "AGI algorithms 24/7 in an evolutionary setting" is a "safe ticket" to AGI.


I wouldn’t say I’m a cynic; I’d just ask how one can possibly know what a safe ticket is in this space. The logic you described is basically simple extrapolation, like in the xkcd wedding dress comic. There’s no guarantee that will get you anywhere in finite time.


"depends on how you define "step". Engineer a 10x/100x version of what we have in terms of LLM (either by being more efficient and/or more/specialized hardware) and let this thing build novel attempts for AGI algorithms 24/7 in a evolutionary setting."

Current LLMs get stuck in loops when a problem is too hard for them: they just keep doing the wrong thing over and over. It's not obvious that this sort of AI can "build novel attempts" at hard problems.



