
The only thing I can come up with is that compressing several hundred million years of natural selection on animal nervous systems into another form, optimised by gradient descent instead, just takes a lot of time.

Not that we can’t get there by artificial means, but correctly simulating the environment interactions, the sequence of progression, and getting all the details right might take hundreds to thousands of years of compute, rather than on the order of a few months.
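
As a toy illustration of that contrast (my own sketch, nothing rigorous -- the loss function, mutation size, and population size below are all arbitrary assumptions): a crude mutate-and-select loop and plain gradient descent can both minimise the same function, just by very different routes.

    # Selection-style search vs. gradient descent on the same toy loss.
    import random

    def loss(w):
        return (w - 3.0) ** 2  # arbitrary target: w = 3

    # Natural-selection flavour: mutate, keep the fittest, repeat.
    w = 0.0
    for _ in range(200):
        candidates = [w + random.gauss(0, 0.1) for _ in range(8)]
        w = min(candidates + [w], key=loss)

    # Gradient descent: follow the analytic gradient directly.
    v = 0.0
    for _ in range(200):
        v -= 0.1 * 2 * (v - 3.0)

    print(f"selection: {w:.3f}, gradient descent: {v:.3f}")  # both near 3.0

Both land near w = 3 here; the question is how expensive the selection-style route gets when the "loss" is an entire environment rather than a one-line function.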

And it might be that you can get functionally close, but hit a dead end, and maybe hit several dead ends along the way, all of which are close but no cigar. Perhaps LLMs are one such dead end.



I don't disagree, but I think the evolution argument is a red herring. We didn't have to re-engineer horses from the ground up along evolutionary lines to get to much faster and more capable cars.


The evolution thing is kind of a red herring in that we probably don't have to artificially reconstruct the process of evolution, but your reasoning isn't a good explanation for why: nature already established incomprehensibly complex organic systems in these life forms -- so we're benefiting from that -- and the extent of our contribution was making some select animals mate with others. That's hardly comparable to building our own replacement for millennia of organic iteration/evolution. Luckily we probably don't actually need to do that to produce AGI.


Most arguments and discussions around AGI talk past each other about the definitions of what is wanted or expected, mostly because sentience, intelligence, and consciousness all lack agreed-upon definitions and are therefore undefined goals to build against.

Some people do expect AGI to be a faster horse; to be the next evolution of human intelligence that's similar to us in most respects but still "better" in some aspects. Others expect AGI to be the leap from horses to cars; the means to an end, a vehicle that takes us to new places faster, and in that case it doesn't need to resemble how we got to human intelligence at all.


True, but I think this reasoning is a category error: we were, and are, capable of rationally designing cars. We are not doing the same thing with AI today; we’re forced to optimize it instead. Yes, the structure that you optimize around is vitally important, but at the end of the day we’re still doing brute force rather than intelligent design. It’s not comparing like with like.


Even this is a weak idea. There's nothing that restricts the term 'AGI' to a replication of animal intelligence or consciousness.


> correctly simulating the environment interactions, the sequence of progression, getting the all the details right, might take hundreds to thousands of years of compute

Who says we have to do that? Just because something was originally produced by natural process X, that doesn't mean that exhaustively retracing our way through process X is the only way to get there.

Lab grown diamonds are a thing.


Who says that we don’t? The point is that the bounds on the question are completely unknown, and we operate on the assumption that the compute time is relatively short. Do we have any empirical basis for this? I think we do not.


The overwhelming majority of animal species never developed (what we would consider) language processing capabilities. So AGI doesn't seem like something that evolution is particularly good at producing; more an emergent trait, eventually appearing in things designed simply not to die for long enough to reproduce...


Define "animal species", if you mean vertebrates, you might be surprised by the modern ethological literature. If you mean to exclude non-vertebrates ... you might be surprised by the ethological literature too.

If you just mean the majority of species, you'd be correct, simply because most are single-celled. Though debate is possible when we talk about forms of chemical signalling.


Yeah, it's tricky to talk about in the span of a comment. I work on Things Involving Animals - animals provide an excellent counter-current to discussion around AGI, in numerous ways.

One interesting parallel was the gradual redefinition of language over the course of the 20th century to exclude animals as their capabilities became more obvious. So, when I say 'language processing capabilities', I mean it roughly in the sense of Chomsky-era definitions, after the goalposts had been thoroughly moved away from much more inclusive definitions.

Likewise, we've been steadily moving the bar on what counts as 'intelligence', both for animals and machines. Over the last couple of decades the study of animal intelligence has become more inclusive, IMO, recognizing intelligence as capability within the specific sensorium and survival context of the particular species. Our study of artificial intelligence is still very crude by comparison, and is still in the 'move the goalposts so that humans stay special' stage of development...



