
We don't know whether AGI is possible or even exactly what it is. However, if there were a form of intelligence where adding more hardware adds more capabilities in the fashion of present computing, but where the capacities are robust and general purpose like humans' rather than fragile and specialized like current software's, then we'd have something of amazing power - brilliant people can do amazing things. A device that's akin to an army of well-organized brilliant people in a box clearly would have many capacities. So it's reasonable to say that if that's possible, investing in it may have a huge payoff. (Edit: the "strong" version of "AGI is possible" would be that AGI is an algorithm that gives a computer human-like generality and robustness while retaining ordinary software-like abilities. There are other ideas of AGI, of course - say, a scheme that would simulate a person on such a high level that the simulated person had no access to the qualities of the software doing the simulation - but that's different.)

The problem, however, is I think the one raised in the gp's objection: OpenAI isn't really working on AGI; it's making incremental improvements on tech that's still fragile and specialized (maybe even more specialized and fragile), where the only advance of neural nets is that now they can be brute-force programmed.



> However, if there were a form of intelligence where adding more hardware adds more capabilities in the fashion of present computing, but where the capacities are robust and general purpose like humans' rather than fragile and specialized like current software's, then we'd have something of amazing power - brilliant people can do amazing things.

That's a very big if... Also, I'd argue that most progress happens not because of some brilliant people, but because of many people working together... And if your AGI only reaches the level of intelligence of humans, or maybe a bit more (what does "more" even mean in terms of human intelligence? more empathic? faster calculation? more memory? what would the use of this be? all things we can't really assess), it raises the question of whether this would ever be possible in a cost-efficient way (human intelligence seems like it is, in a certain way, "cheap").


>That's a very big if...

Oh, this is indeed a big if. A large, looming aspect of the problem is that we don't have anything like an exact characterization of "general intelligence", so what we're aiming for is very uncertain. But that uncertainty cuts multiple ways. Perhaps it would take 100K human-years to construct "it", or perhaps just a few key insights could construct "it".

> Also, I'd argue that most progress happens not because of some brilliant people, but because of many people working together...

The nature of a problem generally determines the sort of human organization one needs to solve it. Large engineering problems are often solved by large teams; challenging math problems are generally solved by individuals, working with the published results of other individuals. Given we're not certain of the nature of this problem, it's hard to be absolute here. Still, it could be that only a few insights are needed. If instead it's a huge engineering problem, you may run into the problem that "building an AGI is AGI-complete".

> Then if your AGI only reaches the level of intelligence of humans and maybe a bit more (what does more even mean in terms of human intelligence? more empathic? faster calculation ability?

I've heard these "we'll get to human-level but it won't be that impressive" kinds of arguments and I find them underwhelming.

"What use would more memory be to an AGI that's 'just' at human level?"

How's this? Studying a hard problem? Fork your brain 100 times, with small variations and different viewpoints, to look at different possibilities, then combine the best solutions. Seems powerful to me. And that's just the most simplistic approach; it seems like an AGI with extra memory could jump between the unity of an individual and the multiple views of a working group in multiple creative ways. Plus, humans have a few quantifiable limits - human attention has been very roughly characterized as limited to "seven plus or minus two chunks". Something human-like but able to consider a few more chunks could possibly accomplish incredible things.
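The fork-and-combine idea above has a loose analogue in ordinary randomized search: spawn many slightly-varied copies of a candidate, evaluate each, and keep the best. This toy Python sketch is only an analogy for the shape of the argument (the function name, objective, and parameters are all made up for illustration), not a claim about how an AGI would actually work:

```python
import random

def fork_and_combine(candidate, score, n_forks=100, rounds=10, step=0.1):
    """Toy analogy for 'fork your brain 100 times': make many variants
    of a candidate solution, score them all, keep the best, repeat."""
    best = candidate
    for _ in range(rounds):
        # "Fork" the current best into n_forks variants, each with a
        # small random variation (a slightly different 'viewpoint').
        forks = [[x + random.uniform(-step, step) for x in best]
                 for _ in range(n_forks)]
        # "Combine" by keeping whichever variant scores highest;
        # including `best` itself means the score never gets worse.
        best = max(forks + [best], key=score)
    return best

# Illustrative objective: maximize -(x^2 + y^2), whose optimum is (0, 0).
solution = fork_and_combine([3.0, -2.0],
                            score=lambda p: -(p[0]**2 + p[1]**2))
```

The point of the analogy is that even this crude fork/evaluate/merge loop extracts value from parallel variation; an intelligence that could do the same with whole viewpoints rather than numbers would plausibly be far more powerful than any single run of it.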



