
The research OpenAI is doing is groundbreaking, and the results are often beyond the state of the art. I aim to work on one of your research teams sometime!



Watch the Kool-Aid intake and you'll be just fine. Dreams are great and an absolute necessity for success, but create your own. Don't buy into everything you hear, especially Elon Musk talking about Artificial General Intelligence.


Oh, I'm well aware of the hype around AGI. My personal view is that AGI is an asymptotic goal, something we'll get close to but never actually reach. Nevertheless, I would like to work on more pragmatic goals, like improving the current state-of-the-art language models and text generation networks. I'm actually starting by reimplementing Seq2Seq as described by Quoc Le et al.[1] for text summarization[2] (this code is extremely messy but it'll get better soon). It's been interesting to learn about word embeddings, RNNs and LSTMs, and data processing within the field of Natural Language Processing. Any tips on how to get up to speed in this field would be helpful, as I'm trying to get into research labs doing similar work at my university.

[1]: https://papers.nips.cc/paper/5346-sequence-to-sequence-learn... [2]: https://github.com/applecrazy/reportik/
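For anyone curious what that reimplementation involves, the core encoder-decoder idea from the linked paper can be sketched in a few lines of PyTorch. The vocabulary size, embedding dimension, and hidden size below are placeholder assumptions for illustration, not values from the linked repo:

```python
# Minimal Seq2Seq encoder-decoder sketch (Sutskever/Le-style).
# Dimensions are illustrative assumptions, not tuned values.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src, tgt):
        # Encode the source sequence; keep only the final (h, c) state.
        _, state = self.encoder(self.embed(src))
        # Decode the target sequence conditioned on the encoder state
        # (teacher forcing during training).
        dec_out, _ = self.decoder(self.embed(tgt), state)
        return self.out(dec_out)  # logits: (batch, tgt_len, vocab_size)

model = Seq2Seq()
src = torch.randint(0, 1000, (2, 10))  # batch of 2 source sequences
tgt = torch.randint(0, 1000, (2, 7))   # batch of 2 target sequences
logits = model(src, tgt)
```

The whole trick is that one fixed-size LSTM state has to carry the entire source sequence into the decoder, which is exactly the bottleneck that attention was later introduced to fix.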


AGI is not something unnatural that could never be attained. If biological systems can somehow attain it, there is no reason other kinds of man-made systems cannot.

The first main issue is that of compute capacity. The human brain has the equivalent of at least 30 TFLOPS of computing power, and this estimate is very likely two orders of magnitude too low.

Assume that simulating one synapse takes only one transistor (a gross underestimate). Simulating the number of synapses in a single human brain would then need as many transistors as roughly 10,000 NVidia V100 GPUs, one of the largest mass-produced silicon chips!
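A back-of-the-envelope check of that figure, assuming ~10^14 synapses (commonly cited estimates range from 10^14 to 10^15) and the V100's published transistor count of about 21.1 billion:

```python
# Rough sanity check of the one-transistor-per-synapse claim above.
# Both inputs are rough assumptions, good only to an order of magnitude.
synapses = 1e14                  # low end of common human-brain estimates
transistors_per_v100 = 21.1e9    # published V100 transistor count
gpus_needed = synapses / transistors_per_v100
print(round(gpus_needed))        # several thousand V100s at the low end
```

With the higher 10^15 synapse estimate the figure grows another 10x, so "on the order of 10,000 V100s" holds up as an order-of-magnitude claim.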

The second main issue is that of training neurons that are far more complex than our simple arithmetic units. Backprop doesn't work for such complex neurons.

The third big problem is that of training data. A human child churns through roughly 10 years of training data before reaching puberty. A man-made machine can perhaps take advantage of the vast data already available, but there still needs to be some structured training regimen.

So compared to the human brain, current AI efforts are playing with toy hardware and toy algorithms. It is surprising that we have gotten as far as we have regardless.


>My personal view is that AGI is kind of an asymptotic goal, something we'll get kind of close to but never actually reach.

Personally, I think it is only a matter of time. Though I suspect that we will probably 'cheat' our way there first with the wetware from cultured neurons that various groups are developing, before we manage to create it on completely synthetic hardware. It might also be the wetware that leads us to the required insights. This is very problematic territory, however: I think we are very likely to utterly torture some of the conscious entities created along this path.


What has Musk got to do with this?




